problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
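
For orientation, below is a minimal sketch of how rows with this schema could be loaded and inspected with the Hugging Face `datasets` library. The dataset identifier `rasdani/github-patches` is taken from the `source` column and the `train` split name is a guess; both are assumptions about how the rows are hosted, not confirmed details.

```python
# Sketch only: the dataset id (taken from the `source` column) and the split
# name are assumptions about where/how these rows are hosted.
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["task_type"], row["in_source_id"])
print(row["num_tokens"], row["num_tokens_diff"])

# `verification_info` is stored as a JSON string; in the rows below it carries
# the golden diff, the issue text, and the before/after file contents.
info = json.loads(row["verification_info"])
print(sorted(info.keys()))  # ['after_files', 'before_files', 'golden_diff', 'issue']
```

In these rows the `prompt` column already bundles the issue text, the candidate source files, and the patch-format instructions, while `verification_info` duplicates the golden diff and file contents for checking a generated patch.
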
gh_patches_debug_35539
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-4404
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Some invalid phone numbers throw an exception and result in server error
### What I'm trying to achieve
Type in a fake number +1 123 456 7890, into the shipping form. Should fail gracefully with a red input box
### Steps to reproduce the problem
1. Type in a fake number +1 123 456 7890, into the shipping form
2. Receive ValueError at /en/checkout/shipping-address/ “+11234567890” is not a valid phone number.
### What I expected to happen
Should fail gracefully with a red input box
### What happens
500 Internal server error
**System information**
Operating system: Ubuntu 18.04
Browser: Firefox
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/account/validators.py`
Content:
```
1 from django.core.exceptions import ValidationError
2 from django.utils.translation import ugettext_lazy as _
3 from phonenumber_field.phonenumber import to_python
4 from phonenumbers.phonenumberutil import is_possible_number
5
6
7 def validate_possible_number(value, country=None):
8 phone_number = to_python(value, country)
9 if phone_number and not is_possible_number(phone_number):
10 raise ValidationError(
11 _("The phone number entered is not valid."), code="invalid_phone_number"
12 )
13
```
Path: `saleor/account/i18n.py`
Content:
```
1 from collections import defaultdict
2
3 import i18naddress
4 from django import forms
5 from django.forms.forms import BoundField
6 from django.utils.translation import pgettext_lazy, ugettext_lazy as _
7 from django_countries import countries
8 from phonenumber_field.phonenumber import PhoneNumber
9 from phonenumbers import NumberParseException
10 from phonenumbers.phonenumberutil import is_possible_number
11
12 from .models import Address
13 from .widgets import DatalistTextWidget, PhonePrefixWidget
14
15 COUNTRY_FORMS = {}
16 UNKNOWN_COUNTRIES = set()
17
18 AREA_TYPE_TRANSLATIONS = {
19 "area": pgettext_lazy("Address field", "Area"),
20 "county": pgettext_lazy("Address field", "County"),
21 "department": pgettext_lazy("Address field", "Department"),
22 "district": pgettext_lazy("Address field", "District"),
23 "do_si": pgettext_lazy("Address field", "Do/si"),
24 "eircode": pgettext_lazy("Address field", "Eircode"),
25 "emirate": pgettext_lazy("Address field", "Emirate"),
26 "island": pgettext_lazy("Address field", "Island"),
27 "neighborhood": pgettext_lazy("Address field", "Neighborhood"),
28 "oblast": pgettext_lazy("Address field", "Oblast"),
29 "parish": pgettext_lazy("Address field", "Parish"),
30 "pin": pgettext_lazy("Address field", "PIN"),
31 "postal": pgettext_lazy("Address field", "Postal code"),
32 "prefecture": pgettext_lazy("Address field", "Prefecture"),
33 "province": pgettext_lazy("Address field", "Province"),
34 "state": pgettext_lazy("Address field", "State"),
35 "suburb": pgettext_lazy("Address field", "Suburb"),
36 "townland": pgettext_lazy("Address field", "Townland"),
37 "village_township": pgettext_lazy("Address field", "Village/township"),
38 "zip": pgettext_lazy("Address field", "ZIP code"),
39 }
40
41
42 class PossiblePhoneNumberFormField(forms.CharField):
43 """A phone input field."""
44
45 def __init__(self, *args, **kwargs):
46 super().__init__(*args, **kwargs)
47 self.widget.input_type = "tel"
48
49
50 class CountryAreaChoiceField(forms.ChoiceField):
51 widget = DatalistTextWidget
52
53 def valid_value(self, value):
54 return True
55
56
57 class AddressMetaForm(forms.ModelForm):
58 # This field is never visible in UI
59 preview = forms.BooleanField(initial=False, required=False)
60
61 class Meta:
62 model = Address
63 fields = ["country", "preview"]
64 labels = {"country": pgettext_lazy("Country", "Country")}
65
66 def clean(self):
67 data = super().clean()
68 if data.get("preview"):
69 self.data = self.data.copy()
70 self.data["preview"] = False
71 return data
72
73
74 class AddressForm(forms.ModelForm):
75
76 AUTOCOMPLETE_MAPPING = [
77 ("first_name", "given-name"),
78 ("last_name", "family-name"),
79 ("company_name", "organization"),
80 ("street_address_1", "address-line1"),
81 ("street_address_2", "address-line2"),
82 ("city", "address-level2"),
83 ("postal_code", "postal-code"),
84 ("country_area", "address-level1"),
85 ("country", "country"),
86 ("city_area", "address-level3"),
87 ("phone", "tel"),
88 ("email", "email"),
89 ]
90
91 class Meta:
92 model = Address
93 exclude = []
94 labels = {
95 "first_name": pgettext_lazy("Personal name", "Given name"),
96 "last_name": pgettext_lazy("Personal name", "Family name"),
97 "company_name": pgettext_lazy(
98 "Company or organization", "Company or organization"
99 ),
100 "street_address_1": pgettext_lazy("Address", "Address"),
101 "street_address_2": "",
102 "city": pgettext_lazy("City", "City"),
103 "city_area": pgettext_lazy("City area", "District"),
104 "postal_code": pgettext_lazy("Postal code", "Postal code"),
105 "country": pgettext_lazy("Country", "Country"),
106 "country_area": pgettext_lazy("Country area", "State or province"),
107 "phone": pgettext_lazy("Phone number", "Phone number"),
108 }
109 placeholders = {
110 "street_address_1": pgettext_lazy(
111 "Address", "Street address, P.O. box, company name"
112 ),
113 "street_address_2": pgettext_lazy(
114 "Address", "Apartment, suite, unit, building, floor, etc"
115 ),
116 }
117
118 phone = PossiblePhoneNumberFormField(widget=PhonePrefixWidget, required=False)
119
120 def __init__(self, *args, **kwargs):
121 autocomplete_type = kwargs.pop("autocomplete_type", None)
122 super().__init__(*args, **kwargs)
123 # countries order was taken as defined in the model,
124 # not being sorted accordingly to the selected language
125 self.fields["country"].choices = sorted(
126 COUNTRY_CHOICES, key=lambda choice: choice[1]
127 )
128 autocomplete_dict = defaultdict(lambda: "off", self.AUTOCOMPLETE_MAPPING)
129 for field_name, field in self.fields.items():
130 if autocomplete_type:
131 autocomplete = "%s %s" % (
132 autocomplete_type,
133 autocomplete_dict[field_name],
134 )
135 else:
136 autocomplete = autocomplete_dict[field_name]
137 field.widget.attrs["autocomplete"] = autocomplete
138 field.widget.attrs["placeholder"] = (
139 field.label if not hasattr(field, "placeholder") else field.placeholder
140 )
141
142 def clean(self):
143 data = super().clean()
144 phone = data.get("phone")
145 country = data.get("country")
146 if phone:
147 try:
148 data["phone"] = clean_phone_for_country(phone, country)
149 except forms.ValidationError as error:
150 self.add_error("phone", error)
151 return data
152
153
154 def clean_phone_for_country(phone, country):
155 error = _("The phone number entered is not valid.")
156 error_code = "invalid_phone_number"
157 if phone:
158 try:
159 phone = PhoneNumber.from_string(phone, country)
160 except NumberParseException:
161 raise forms.ValidationError(error, code=error_code)
162 else:
163 if not is_possible_number(phone):
164 raise forms.ValidationError(error, code=error_code)
165 return phone
166
167
168 class CountryAwareAddressForm(AddressForm):
169
170 I18N_MAPPING = [
171 ("name", ["first_name", "last_name"]),
172 ("street_address", ["street_address_1", "street_address_2"]),
173 ("city_area", ["city_area"]),
174 ("country_area", ["country_area"]),
175 ("company_name", ["company_name"]),
176 ("postal_code", ["postal_code"]),
177 ("city", ["city"]),
178 ("sorting_code", []),
179 ("country_code", ["country"]),
180 ]
181
182 class Meta:
183 model = Address
184 exclude = []
185
186 def add_field_errors(self, errors):
187 field_mapping = dict(self.I18N_MAPPING)
188 for field_name, error_code in errors.items():
189 local_fields = field_mapping[field_name]
190 for field in local_fields:
191 try:
192 error_msg = self.fields[field].error_messages[error_code]
193 except KeyError:
194 error_msg = pgettext_lazy(
195 "Address form", "This value is invalid for selected country"
196 )
197 self.add_error(field, error_msg)
198
199 def validate_address(self, data):
200 try:
201 data["country_code"] = data.get("country", "")
202 if data["street_address_1"] or data["street_address_2"]:
203 data["street_address"] = "%s\n%s" % (
204 data["street_address_1"],
205 data["street_address_2"],
206 )
207 data = i18naddress.normalize_address(data)
208 del data["sorting_code"]
209 except i18naddress.InvalidAddress as exc:
210 self.add_field_errors(exc.errors)
211 return data
212
213 def clean(self):
214 data = super().clean()
215 return self.validate_address(data)
216
217
218 def get_address_form_class(country_code):
219 return COUNTRY_FORMS[country_code]
220
221
222 def get_form_i18n_lines(form_instance):
223 country_code = form_instance.i18n_country_code
224 try:
225 fields_order = i18naddress.get_field_order({"country_code": country_code})
226 except ValueError:
227 fields_order = i18naddress.get_field_order({})
228 field_mapping = dict(form_instance.I18N_MAPPING)
229
230 def _convert_to_bound_fields(form, i18n_field_names):
231 bound_fields = []
232 for field_name in i18n_field_names:
233 local_fields = field_mapping[field_name]
234 for local_name in local_fields:
235 local_field = form_instance.fields[local_name]
236 bound_field = BoundField(form, local_field, local_name)
237 bound_fields.append(bound_field)
238 return bound_fields
239
240 if fields_order:
241 return [_convert_to_bound_fields(form_instance, line) for line in fields_order]
242
243
244 def update_base_fields(form_class, i18n_rules):
245 for field_name, label_value in AddressForm.Meta.labels.items():
246 field = form_class.base_fields[field_name]
247 field.label = label_value
248
249 for field_name, placeholder_value in AddressForm.Meta.placeholders.items():
250 field = form_class.base_fields[field_name]
251 field.placeholder = placeholder_value
252
253 if i18n_rules.country_area_choices:
254 form_class.base_fields["country_area"] = CountryAreaChoiceField(
255 choices=i18n_rules.country_area_choices
256 )
257
258 labels_map = {
259 "country_area": i18n_rules.country_area_type,
260 "postal_code": i18n_rules.postal_code_type,
261 "city_area": i18n_rules.city_area_type,
262 }
263
264 for field_name, area_type in labels_map.items():
265 field = form_class.base_fields[field_name]
266 field.label = AREA_TYPE_TRANSLATIONS[area_type]
267
268 hidden_fields = i18naddress.KNOWN_FIELDS - i18n_rules.allowed_fields
269 for field_name in hidden_fields:
270 if field_name in form_class.base_fields:
271 form_class.base_fields[field_name].widget = forms.HiddenInput()
272
273 country_field = form_class.base_fields["country"]
274 country_field.choices = COUNTRY_CHOICES
275
276
277 def construct_address_form(country_code, i18n_rules):
278 class_name = "AddressForm%s" % country_code
279 base_class = CountryAwareAddressForm
280 form_kwargs = {
281 "Meta": type(str("Meta"), (base_class.Meta, object), {}),
282 "formfield_callback": None,
283 }
284 class_ = type(base_class)(str(class_name), (base_class,), form_kwargs)
285 update_base_fields(class_, i18n_rules)
286 class_.i18n_country_code = country_code
287 class_.i18n_fields_order = property(get_form_i18n_lines)
288 return class_
289
290
291 for country in countries.countries.keys():
292 try:
293 country_rules = i18naddress.get_validation_rules({"country_code": country})
294 except ValueError:
295 country_rules = i18naddress.get_validation_rules({})
296 UNKNOWN_COUNTRIES.add(country)
297
298 COUNTRY_CHOICES = [
299 (code, label)
300 for code, label in countries.countries.items()
301 if code not in UNKNOWN_COUNTRIES
302 ]
303 # Sort choices list by country name
304 COUNTRY_CHOICES = sorted(COUNTRY_CHOICES, key=lambda choice: choice[1])
305
306 for country, label in COUNTRY_CHOICES:
307 country_rules = i18naddress.get_validation_rules({"country_code": country})
308 COUNTRY_FORMS[country] = construct_address_form(country, country_rules)
309
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/account/i18n.py b/saleor/account/i18n.py
--- a/saleor/account/i18n.py
+++ b/saleor/account/i18n.py
@@ -3,13 +3,11 @@
import i18naddress
from django import forms
from django.forms.forms import BoundField
-from django.utils.translation import pgettext_lazy, ugettext_lazy as _
+from django.utils.translation import pgettext_lazy
from django_countries import countries
-from phonenumber_field.phonenumber import PhoneNumber
-from phonenumbers import NumberParseException
-from phonenumbers.phonenumberutil import is_possible_number
from .models import Address
+from .validators import validate_possible_number
from .widgets import DatalistTextWidget, PhonePrefixWidget
COUNTRY_FORMS = {}
@@ -145,26 +143,12 @@
country = data.get("country")
if phone:
try:
- data["phone"] = clean_phone_for_country(phone, country)
+ data["phone"] = validate_possible_number(phone, country)
except forms.ValidationError as error:
self.add_error("phone", error)
return data
-def clean_phone_for_country(phone, country):
- error = _("The phone number entered is not valid.")
- error_code = "invalid_phone_number"
- if phone:
- try:
- phone = PhoneNumber.from_string(phone, country)
- except NumberParseException:
- raise forms.ValidationError(error, code=error_code)
- else:
- if not is_possible_number(phone):
- raise forms.ValidationError(error, code=error_code)
- return phone
-
-
class CountryAwareAddressForm(AddressForm):
I18N_MAPPING = [
diff --git a/saleor/account/validators.py b/saleor/account/validators.py
--- a/saleor/account/validators.py
+++ b/saleor/account/validators.py
@@ -4,9 +4,14 @@
from phonenumbers.phonenumberutil import is_possible_number
-def validate_possible_number(value, country=None):
- phone_number = to_python(value, country)
- if phone_number and not is_possible_number(phone_number):
+def validate_possible_number(phone, country=None):
+ phone_number = to_python(phone, country)
+ if (
+ phone_number
+ and not is_possible_number(phone_number)
+ or not phone_number.is_valid()
+ ):
raise ValidationError(
_("The phone number entered is not valid."), code="invalid_phone_number"
)
+ return phone_number
|
{"golden_diff": "diff --git a/saleor/account/i18n.py b/saleor/account/i18n.py\n--- a/saleor/account/i18n.py\n+++ b/saleor/account/i18n.py\n@@ -3,13 +3,11 @@\n import i18naddress\n from django import forms\n from django.forms.forms import BoundField\n-from django.utils.translation import pgettext_lazy, ugettext_lazy as _\n+from django.utils.translation import pgettext_lazy\n from django_countries import countries\n-from phonenumber_field.phonenumber import PhoneNumber\n-from phonenumbers import NumberParseException\n-from phonenumbers.phonenumberutil import is_possible_number\n \n from .models import Address\n+from .validators import validate_possible_number\n from .widgets import DatalistTextWidget, PhonePrefixWidget\n \n COUNTRY_FORMS = {}\n@@ -145,26 +143,12 @@\n country = data.get(\"country\")\n if phone:\n try:\n- data[\"phone\"] = clean_phone_for_country(phone, country)\n+ data[\"phone\"] = validate_possible_number(phone, country)\n except forms.ValidationError as error:\n self.add_error(\"phone\", error)\n return data\n \n \n-def clean_phone_for_country(phone, country):\n- error = _(\"The phone number entered is not valid.\")\n- error_code = \"invalid_phone_number\"\n- if phone:\n- try:\n- phone = PhoneNumber.from_string(phone, country)\n- except NumberParseException:\n- raise forms.ValidationError(error, code=error_code)\n- else:\n- if not is_possible_number(phone):\n- raise forms.ValidationError(error, code=error_code)\n- return phone\n-\n-\n class CountryAwareAddressForm(AddressForm):\n \n I18N_MAPPING = [\ndiff --git a/saleor/account/validators.py b/saleor/account/validators.py\n--- a/saleor/account/validators.py\n+++ b/saleor/account/validators.py\n@@ -4,9 +4,14 @@\n from phonenumbers.phonenumberutil import is_possible_number\n \n \n-def validate_possible_number(value, country=None):\n- phone_number = to_python(value, country)\n- if phone_number and not is_possible_number(phone_number):\n+def validate_possible_number(phone, country=None):\n+ phone_number = to_python(phone, country)\n+ if (\n+ phone_number\n+ and not is_possible_number(phone_number)\n+ or not phone_number.is_valid()\n+ ):\n raise ValidationError(\n _(\"The phone number entered is not valid.\"), code=\"invalid_phone_number\"\n )\n+ return phone_number\n", "issue": "Some invalid phone numbers throw an exception and result in server error\n### What I'm trying to achieve\r\nType in a fake number +1 123 456 7890, into the shipping form. Should fail gracefully with a red input box\r\n\r\n### Steps to reproduce the problem\r\n1. Type in a fake number +1 123 456 7890, into the shipping form\r\n2. 
Receive ValueError at /en/checkout/shipping-address/ \u201c+11234567890\u201d is not a valid phone number.\r\n\r\n### What I expected to happen\r\nShould fail gracefully with a red input box\r\n\r\n### What happens\r\n500 Internal server error\r\n\r\n**System information**\r\nOperating system: Ubuntu 18.04\r\nBrowser: Firefox\r\n\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.utils.translation import ugettext_lazy as _\nfrom phonenumber_field.phonenumber import to_python\nfrom phonenumbers.phonenumberutil import is_possible_number\n\n\ndef validate_possible_number(value, country=None):\n phone_number = to_python(value, country)\n if phone_number and not is_possible_number(phone_number):\n raise ValidationError(\n _(\"The phone number entered is not valid.\"), code=\"invalid_phone_number\"\n )\n", "path": "saleor/account/validators.py"}, {"content": "from collections import defaultdict\n\nimport i18naddress\nfrom django import forms\nfrom django.forms.forms import BoundField\nfrom django.utils.translation import pgettext_lazy, ugettext_lazy as _\nfrom django_countries import countries\nfrom phonenumber_field.phonenumber import PhoneNumber\nfrom phonenumbers import NumberParseException\nfrom phonenumbers.phonenumberutil import is_possible_number\n\nfrom .models import Address\nfrom .widgets import DatalistTextWidget, PhonePrefixWidget\n\nCOUNTRY_FORMS = {}\nUNKNOWN_COUNTRIES = set()\n\nAREA_TYPE_TRANSLATIONS = {\n \"area\": pgettext_lazy(\"Address field\", \"Area\"),\n \"county\": pgettext_lazy(\"Address field\", \"County\"),\n \"department\": pgettext_lazy(\"Address field\", \"Department\"),\n \"district\": pgettext_lazy(\"Address field\", \"District\"),\n \"do_si\": pgettext_lazy(\"Address field\", \"Do/si\"),\n \"eircode\": pgettext_lazy(\"Address field\", \"Eircode\"),\n \"emirate\": pgettext_lazy(\"Address field\", \"Emirate\"),\n \"island\": pgettext_lazy(\"Address field\", \"Island\"),\n \"neighborhood\": pgettext_lazy(\"Address field\", \"Neighborhood\"),\n \"oblast\": pgettext_lazy(\"Address field\", \"Oblast\"),\n \"parish\": pgettext_lazy(\"Address field\", \"Parish\"),\n \"pin\": pgettext_lazy(\"Address field\", \"PIN\"),\n \"postal\": pgettext_lazy(\"Address field\", \"Postal code\"),\n \"prefecture\": pgettext_lazy(\"Address field\", \"Prefecture\"),\n \"province\": pgettext_lazy(\"Address field\", \"Province\"),\n \"state\": pgettext_lazy(\"Address field\", \"State\"),\n \"suburb\": pgettext_lazy(\"Address field\", \"Suburb\"),\n \"townland\": pgettext_lazy(\"Address field\", \"Townland\"),\n \"village_township\": pgettext_lazy(\"Address field\", \"Village/township\"),\n \"zip\": pgettext_lazy(\"Address field\", \"ZIP code\"),\n}\n\n\nclass PossiblePhoneNumberFormField(forms.CharField):\n \"\"\"A phone input field.\"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.widget.input_type = \"tel\"\n\n\nclass CountryAreaChoiceField(forms.ChoiceField):\n widget = DatalistTextWidget\n\n def valid_value(self, value):\n return True\n\n\nclass AddressMetaForm(forms.ModelForm):\n # This field is never visible in UI\n preview = forms.BooleanField(initial=False, required=False)\n\n class Meta:\n model = Address\n fields = [\"country\", \"preview\"]\n labels = {\"country\": pgettext_lazy(\"Country\", \"Country\")}\n\n def clean(self):\n data = super().clean()\n if data.get(\"preview\"):\n self.data = self.data.copy()\n self.data[\"preview\"] = False\n return data\n\n\nclass 
AddressForm(forms.ModelForm):\n\n AUTOCOMPLETE_MAPPING = [\n (\"first_name\", \"given-name\"),\n (\"last_name\", \"family-name\"),\n (\"company_name\", \"organization\"),\n (\"street_address_1\", \"address-line1\"),\n (\"street_address_2\", \"address-line2\"),\n (\"city\", \"address-level2\"),\n (\"postal_code\", \"postal-code\"),\n (\"country_area\", \"address-level1\"),\n (\"country\", \"country\"),\n (\"city_area\", \"address-level3\"),\n (\"phone\", \"tel\"),\n (\"email\", \"email\"),\n ]\n\n class Meta:\n model = Address\n exclude = []\n labels = {\n \"first_name\": pgettext_lazy(\"Personal name\", \"Given name\"),\n \"last_name\": pgettext_lazy(\"Personal name\", \"Family name\"),\n \"company_name\": pgettext_lazy(\n \"Company or organization\", \"Company or organization\"\n ),\n \"street_address_1\": pgettext_lazy(\"Address\", \"Address\"),\n \"street_address_2\": \"\",\n \"city\": pgettext_lazy(\"City\", \"City\"),\n \"city_area\": pgettext_lazy(\"City area\", \"District\"),\n \"postal_code\": pgettext_lazy(\"Postal code\", \"Postal code\"),\n \"country\": pgettext_lazy(\"Country\", \"Country\"),\n \"country_area\": pgettext_lazy(\"Country area\", \"State or province\"),\n \"phone\": pgettext_lazy(\"Phone number\", \"Phone number\"),\n }\n placeholders = {\n \"street_address_1\": pgettext_lazy(\n \"Address\", \"Street address, P.O. box, company name\"\n ),\n \"street_address_2\": pgettext_lazy(\n \"Address\", \"Apartment, suite, unit, building, floor, etc\"\n ),\n }\n\n phone = PossiblePhoneNumberFormField(widget=PhonePrefixWidget, required=False)\n\n def __init__(self, *args, **kwargs):\n autocomplete_type = kwargs.pop(\"autocomplete_type\", None)\n super().__init__(*args, **kwargs)\n # countries order was taken as defined in the model,\n # not being sorted accordingly to the selected language\n self.fields[\"country\"].choices = sorted(\n COUNTRY_CHOICES, key=lambda choice: choice[1]\n )\n autocomplete_dict = defaultdict(lambda: \"off\", self.AUTOCOMPLETE_MAPPING)\n for field_name, field in self.fields.items():\n if autocomplete_type:\n autocomplete = \"%s %s\" % (\n autocomplete_type,\n autocomplete_dict[field_name],\n )\n else:\n autocomplete = autocomplete_dict[field_name]\n field.widget.attrs[\"autocomplete\"] = autocomplete\n field.widget.attrs[\"placeholder\"] = (\n field.label if not hasattr(field, \"placeholder\") else field.placeholder\n )\n\n def clean(self):\n data = super().clean()\n phone = data.get(\"phone\")\n country = data.get(\"country\")\n if phone:\n try:\n data[\"phone\"] = clean_phone_for_country(phone, country)\n except forms.ValidationError as error:\n self.add_error(\"phone\", error)\n return data\n\n\ndef clean_phone_for_country(phone, country):\n error = _(\"The phone number entered is not valid.\")\n error_code = \"invalid_phone_number\"\n if phone:\n try:\n phone = PhoneNumber.from_string(phone, country)\n except NumberParseException:\n raise forms.ValidationError(error, code=error_code)\n else:\n if not is_possible_number(phone):\n raise forms.ValidationError(error, code=error_code)\n return phone\n\n\nclass CountryAwareAddressForm(AddressForm):\n\n I18N_MAPPING = [\n (\"name\", [\"first_name\", \"last_name\"]),\n (\"street_address\", [\"street_address_1\", \"street_address_2\"]),\n (\"city_area\", [\"city_area\"]),\n (\"country_area\", [\"country_area\"]),\n (\"company_name\", [\"company_name\"]),\n (\"postal_code\", [\"postal_code\"]),\n (\"city\", [\"city\"]),\n (\"sorting_code\", []),\n (\"country_code\", [\"country\"]),\n ]\n\n class Meta:\n 
model = Address\n exclude = []\n\n def add_field_errors(self, errors):\n field_mapping = dict(self.I18N_MAPPING)\n for field_name, error_code in errors.items():\n local_fields = field_mapping[field_name]\n for field in local_fields:\n try:\n error_msg = self.fields[field].error_messages[error_code]\n except KeyError:\n error_msg = pgettext_lazy(\n \"Address form\", \"This value is invalid for selected country\"\n )\n self.add_error(field, error_msg)\n\n def validate_address(self, data):\n try:\n data[\"country_code\"] = data.get(\"country\", \"\")\n if data[\"street_address_1\"] or data[\"street_address_2\"]:\n data[\"street_address\"] = \"%s\\n%s\" % (\n data[\"street_address_1\"],\n data[\"street_address_2\"],\n )\n data = i18naddress.normalize_address(data)\n del data[\"sorting_code\"]\n except i18naddress.InvalidAddress as exc:\n self.add_field_errors(exc.errors)\n return data\n\n def clean(self):\n data = super().clean()\n return self.validate_address(data)\n\n\ndef get_address_form_class(country_code):\n return COUNTRY_FORMS[country_code]\n\n\ndef get_form_i18n_lines(form_instance):\n country_code = form_instance.i18n_country_code\n try:\n fields_order = i18naddress.get_field_order({\"country_code\": country_code})\n except ValueError:\n fields_order = i18naddress.get_field_order({})\n field_mapping = dict(form_instance.I18N_MAPPING)\n\n def _convert_to_bound_fields(form, i18n_field_names):\n bound_fields = []\n for field_name in i18n_field_names:\n local_fields = field_mapping[field_name]\n for local_name in local_fields:\n local_field = form_instance.fields[local_name]\n bound_field = BoundField(form, local_field, local_name)\n bound_fields.append(bound_field)\n return bound_fields\n\n if fields_order:\n return [_convert_to_bound_fields(form_instance, line) for line in fields_order]\n\n\ndef update_base_fields(form_class, i18n_rules):\n for field_name, label_value in AddressForm.Meta.labels.items():\n field = form_class.base_fields[field_name]\n field.label = label_value\n\n for field_name, placeholder_value in AddressForm.Meta.placeholders.items():\n field = form_class.base_fields[field_name]\n field.placeholder = placeholder_value\n\n if i18n_rules.country_area_choices:\n form_class.base_fields[\"country_area\"] = CountryAreaChoiceField(\n choices=i18n_rules.country_area_choices\n )\n\n labels_map = {\n \"country_area\": i18n_rules.country_area_type,\n \"postal_code\": i18n_rules.postal_code_type,\n \"city_area\": i18n_rules.city_area_type,\n }\n\n for field_name, area_type in labels_map.items():\n field = form_class.base_fields[field_name]\n field.label = AREA_TYPE_TRANSLATIONS[area_type]\n\n hidden_fields = i18naddress.KNOWN_FIELDS - i18n_rules.allowed_fields\n for field_name in hidden_fields:\n if field_name in form_class.base_fields:\n form_class.base_fields[field_name].widget = forms.HiddenInput()\n\n country_field = form_class.base_fields[\"country\"]\n country_field.choices = COUNTRY_CHOICES\n\n\ndef construct_address_form(country_code, i18n_rules):\n class_name = \"AddressForm%s\" % country_code\n base_class = CountryAwareAddressForm\n form_kwargs = {\n \"Meta\": type(str(\"Meta\"), (base_class.Meta, object), {}),\n \"formfield_callback\": None,\n }\n class_ = type(base_class)(str(class_name), (base_class,), form_kwargs)\n update_base_fields(class_, i18n_rules)\n class_.i18n_country_code = country_code\n class_.i18n_fields_order = property(get_form_i18n_lines)\n return class_\n\n\nfor country in countries.countries.keys():\n try:\n country_rules = 
i18naddress.get_validation_rules({\"country_code\": country})\n except ValueError:\n country_rules = i18naddress.get_validation_rules({})\n UNKNOWN_COUNTRIES.add(country)\n\nCOUNTRY_CHOICES = [\n (code, label)\n for code, label in countries.countries.items()\n if code not in UNKNOWN_COUNTRIES\n]\n# Sort choices list by country name\nCOUNTRY_CHOICES = sorted(COUNTRY_CHOICES, key=lambda choice: choice[1])\n\nfor country, label in COUNTRY_CHOICES:\n country_rules = i18naddress.get_validation_rules({\"country_code\": country})\n COUNTRY_FORMS[country] = construct_address_form(country, country_rules)\n", "path": "saleor/account/i18n.py"}], "after_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.utils.translation import ugettext_lazy as _\nfrom phonenumber_field.phonenumber import to_python\nfrom phonenumbers.phonenumberutil import is_possible_number\n\n\ndef validate_possible_number(phone, country=None):\n phone_number = to_python(phone, country)\n if (\n phone_number\n and not is_possible_number(phone_number)\n or not phone_number.is_valid()\n ):\n raise ValidationError(\n _(\"The phone number entered is not valid.\"), code=\"invalid_phone_number\"\n )\n return phone_number\n", "path": "saleor/account/validators.py"}, {"content": "from collections import defaultdict\n\nimport i18naddress\nfrom django import forms\nfrom django.forms.forms import BoundField\nfrom django.utils.translation import pgettext_lazy\nfrom django_countries import countries\n\nfrom .models import Address\nfrom .validators import validate_possible_number\nfrom .widgets import DatalistTextWidget, PhonePrefixWidget\n\nCOUNTRY_FORMS = {}\nUNKNOWN_COUNTRIES = set()\n\nAREA_TYPE_TRANSLATIONS = {\n \"area\": pgettext_lazy(\"Address field\", \"Area\"),\n \"county\": pgettext_lazy(\"Address field\", \"County\"),\n \"department\": pgettext_lazy(\"Address field\", \"Department\"),\n \"district\": pgettext_lazy(\"Address field\", \"District\"),\n \"do_si\": pgettext_lazy(\"Address field\", \"Do/si\"),\n \"eircode\": pgettext_lazy(\"Address field\", \"Eircode\"),\n \"emirate\": pgettext_lazy(\"Address field\", \"Emirate\"),\n \"island\": pgettext_lazy(\"Address field\", \"Island\"),\n \"neighborhood\": pgettext_lazy(\"Address field\", \"Neighborhood\"),\n \"oblast\": pgettext_lazy(\"Address field\", \"Oblast\"),\n \"parish\": pgettext_lazy(\"Address field\", \"Parish\"),\n \"pin\": pgettext_lazy(\"Address field\", \"PIN\"),\n \"postal\": pgettext_lazy(\"Address field\", \"Postal code\"),\n \"prefecture\": pgettext_lazy(\"Address field\", \"Prefecture\"),\n \"province\": pgettext_lazy(\"Address field\", \"Province\"),\n \"state\": pgettext_lazy(\"Address field\", \"State\"),\n \"suburb\": pgettext_lazy(\"Address field\", \"Suburb\"),\n \"townland\": pgettext_lazy(\"Address field\", \"Townland\"),\n \"village_township\": pgettext_lazy(\"Address field\", \"Village/township\"),\n \"zip\": pgettext_lazy(\"Address field\", \"ZIP code\"),\n}\n\n\nclass PossiblePhoneNumberFormField(forms.CharField):\n \"\"\"A phone input field.\"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.widget.input_type = \"tel\"\n\n\nclass CountryAreaChoiceField(forms.ChoiceField):\n widget = DatalistTextWidget\n\n def valid_value(self, value):\n return True\n\n\nclass AddressMetaForm(forms.ModelForm):\n # This field is never visible in UI\n preview = forms.BooleanField(initial=False, required=False)\n\n class Meta:\n model = Address\n fields = [\"country\", \"preview\"]\n labels = 
{\"country\": pgettext_lazy(\"Country\", \"Country\")}\n\n def clean(self):\n data = super().clean()\n if data.get(\"preview\"):\n self.data = self.data.copy()\n self.data[\"preview\"] = False\n return data\n\n\nclass AddressForm(forms.ModelForm):\n\n AUTOCOMPLETE_MAPPING = [\n (\"first_name\", \"given-name\"),\n (\"last_name\", \"family-name\"),\n (\"company_name\", \"organization\"),\n (\"street_address_1\", \"address-line1\"),\n (\"street_address_2\", \"address-line2\"),\n (\"city\", \"address-level2\"),\n (\"postal_code\", \"postal-code\"),\n (\"country_area\", \"address-level1\"),\n (\"country\", \"country\"),\n (\"city_area\", \"address-level3\"),\n (\"phone\", \"tel\"),\n (\"email\", \"email\"),\n ]\n\n class Meta:\n model = Address\n exclude = []\n labels = {\n \"first_name\": pgettext_lazy(\"Personal name\", \"Given name\"),\n \"last_name\": pgettext_lazy(\"Personal name\", \"Family name\"),\n \"company_name\": pgettext_lazy(\n \"Company or organization\", \"Company or organization\"\n ),\n \"street_address_1\": pgettext_lazy(\"Address\", \"Address\"),\n \"street_address_2\": \"\",\n \"city\": pgettext_lazy(\"City\", \"City\"),\n \"city_area\": pgettext_lazy(\"City area\", \"District\"),\n \"postal_code\": pgettext_lazy(\"Postal code\", \"Postal code\"),\n \"country\": pgettext_lazy(\"Country\", \"Country\"),\n \"country_area\": pgettext_lazy(\"Country area\", \"State or province\"),\n \"phone\": pgettext_lazy(\"Phone number\", \"Phone number\"),\n }\n placeholders = {\n \"street_address_1\": pgettext_lazy(\n \"Address\", \"Street address, P.O. box, company name\"\n ),\n \"street_address_2\": pgettext_lazy(\n \"Address\", \"Apartment, suite, unit, building, floor, etc\"\n ),\n }\n\n phone = PossiblePhoneNumberFormField(widget=PhonePrefixWidget, required=False)\n\n def __init__(self, *args, **kwargs):\n autocomplete_type = kwargs.pop(\"autocomplete_type\", None)\n super().__init__(*args, **kwargs)\n # countries order was taken as defined in the model,\n # not being sorted accordingly to the selected language\n self.fields[\"country\"].choices = sorted(\n COUNTRY_CHOICES, key=lambda choice: choice[1]\n )\n autocomplete_dict = defaultdict(lambda: \"off\", self.AUTOCOMPLETE_MAPPING)\n for field_name, field in self.fields.items():\n if autocomplete_type:\n autocomplete = \"%s %s\" % (\n autocomplete_type,\n autocomplete_dict[field_name],\n )\n else:\n autocomplete = autocomplete_dict[field_name]\n field.widget.attrs[\"autocomplete\"] = autocomplete\n field.widget.attrs[\"placeholder\"] = (\n field.label if not hasattr(field, \"placeholder\") else field.placeholder\n )\n\n def clean(self):\n data = super().clean()\n phone = data.get(\"phone\")\n country = data.get(\"country\")\n if phone:\n try:\n data[\"phone\"] = validate_possible_number(phone, country)\n except forms.ValidationError as error:\n self.add_error(\"phone\", error)\n return data\n\n\nclass CountryAwareAddressForm(AddressForm):\n\n I18N_MAPPING = [\n (\"name\", [\"first_name\", \"last_name\"]),\n (\"street_address\", [\"street_address_1\", \"street_address_2\"]),\n (\"city_area\", [\"city_area\"]),\n (\"country_area\", [\"country_area\"]),\n (\"company_name\", [\"company_name\"]),\n (\"postal_code\", [\"postal_code\"]),\n (\"city\", [\"city\"]),\n (\"sorting_code\", []),\n (\"country_code\", [\"country\"]),\n ]\n\n class Meta:\n model = Address\n exclude = []\n\n def add_field_errors(self, errors):\n field_mapping = dict(self.I18N_MAPPING)\n for field_name, error_code in errors.items():\n local_fields = 
field_mapping[field_name]\n for field in local_fields:\n try:\n error_msg = self.fields[field].error_messages[error_code]\n except KeyError:\n error_msg = pgettext_lazy(\n \"Address form\", \"This value is invalid for selected country\"\n )\n self.add_error(field, error_msg)\n\n def validate_address(self, data):\n try:\n data[\"country_code\"] = data.get(\"country\", \"\")\n if data[\"street_address_1\"] or data[\"street_address_2\"]:\n data[\"street_address\"] = \"%s\\n%s\" % (\n data[\"street_address_1\"],\n data[\"street_address_2\"],\n )\n data = i18naddress.normalize_address(data)\n del data[\"sorting_code\"]\n except i18naddress.InvalidAddress as exc:\n self.add_field_errors(exc.errors)\n return data\n\n def clean(self):\n data = super().clean()\n return self.validate_address(data)\n\n\ndef get_address_form_class(country_code):\n return COUNTRY_FORMS[country_code]\n\n\ndef get_form_i18n_lines(form_instance):\n country_code = form_instance.i18n_country_code\n try:\n fields_order = i18naddress.get_field_order({\"country_code\": country_code})\n except ValueError:\n fields_order = i18naddress.get_field_order({})\n field_mapping = dict(form_instance.I18N_MAPPING)\n\n def _convert_to_bound_fields(form, i18n_field_names):\n bound_fields = []\n for field_name in i18n_field_names:\n local_fields = field_mapping[field_name]\n for local_name in local_fields:\n local_field = form_instance.fields[local_name]\n bound_field = BoundField(form, local_field, local_name)\n bound_fields.append(bound_field)\n return bound_fields\n\n if fields_order:\n return [_convert_to_bound_fields(form_instance, line) for line in fields_order]\n\n\ndef update_base_fields(form_class, i18n_rules):\n for field_name, label_value in AddressForm.Meta.labels.items():\n field = form_class.base_fields[field_name]\n field.label = label_value\n\n for field_name, placeholder_value in AddressForm.Meta.placeholders.items():\n field = form_class.base_fields[field_name]\n field.placeholder = placeholder_value\n\n if i18n_rules.country_area_choices:\n form_class.base_fields[\"country_area\"] = CountryAreaChoiceField(\n choices=i18n_rules.country_area_choices\n )\n\n labels_map = {\n \"country_area\": i18n_rules.country_area_type,\n \"postal_code\": i18n_rules.postal_code_type,\n \"city_area\": i18n_rules.city_area_type,\n }\n\n for field_name, area_type in labels_map.items():\n field = form_class.base_fields[field_name]\n field.label = AREA_TYPE_TRANSLATIONS[area_type]\n\n hidden_fields = i18naddress.KNOWN_FIELDS - i18n_rules.allowed_fields\n for field_name in hidden_fields:\n if field_name in form_class.base_fields:\n form_class.base_fields[field_name].widget = forms.HiddenInput()\n\n country_field = form_class.base_fields[\"country\"]\n country_field.choices = COUNTRY_CHOICES\n\n\ndef construct_address_form(country_code, i18n_rules):\n class_name = \"AddressForm%s\" % country_code\n base_class = CountryAwareAddressForm\n form_kwargs = {\n \"Meta\": type(str(\"Meta\"), (base_class.Meta, object), {}),\n \"formfield_callback\": None,\n }\n class_ = type(base_class)(str(class_name), (base_class,), form_kwargs)\n update_base_fields(class_, i18n_rules)\n class_.i18n_country_code = country_code\n class_.i18n_fields_order = property(get_form_i18n_lines)\n return class_\n\n\nfor country in countries.countries.keys():\n try:\n country_rules = i18naddress.get_validation_rules({\"country_code\": country})\n except ValueError:\n country_rules = i18naddress.get_validation_rules({})\n UNKNOWN_COUNTRIES.add(country)\n\nCOUNTRY_CHOICES = [\n (code, 
label)\n for code, label in countries.countries.items()\n if code not in UNKNOWN_COUNTRIES\n]\n# Sort choices list by country name\nCOUNTRY_CHOICES = sorted(COUNTRY_CHOICES, key=lambda choice: choice[1])\n\nfor country, label in COUNTRY_CHOICES:\n country_rules = i18naddress.get_validation_rules({\"country_code\": country})\n COUNTRY_FORMS[country] = construct_address_form(country, country_rules)\n", "path": "saleor/account/i18n.py"}]}
| 3,894 | 551 |
gh_patches_debug_25061
|
rasdani/github-patches
|
git_diff
|
elastic__apm-agent-python-813
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Flower hangs from version 5.4.0.
**Describe the bug**:
Flower hangs (no answer from http connection to browser) when a version >= 5.4.0 is installed
**To Reproduce**
1. pip install elastic-apm==5.4.0
2. restart flower and try to access
**Environment (please complete the following information)**
- OS: Ubuntu 18.04
- Python version: 3.6
- Framework and version: Django 2.2
- APM Server version: NA
- Agent version: 5.4.0+
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/instrumentation/packages/tornado.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30 """
31 Instrumentation for Tornado
32 """
33 import elasticapm
34 from elasticapm.conf import constants
35 from elasticapm.instrumentation.packages.asyncio.base import AbstractInstrumentedModule, AsyncAbstractInstrumentedModule
36 from elasticapm.traces import capture_span
37 from elasticapm.utils.disttracing import TraceParent
38
39
40 class TornadoRequestExecuteInstrumentation(AsyncAbstractInstrumentedModule):
41 name = "tornado_request_execute"
42 creates_transactions = True
43 instrument_list = [("tornado.web", "RequestHandler._execute")]
44
45 async def call(self, module, method, wrapped, instance, args, kwargs):
46 # Late import to avoid ImportErrors
47 from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response
48
49 request = instance.request
50 trace_parent = TraceParent.from_headers(request.headers)
51 client = instance.application.elasticapm_client
52 client.begin_transaction("request", trace_parent=trace_parent)
53 elasticapm.set_context(
54 lambda: get_data_from_request(instance, request, client.config, constants.TRANSACTION), "request"
55 )
56 # TODO: Can we somehow incorporate the routing rule itself here?
57 elasticapm.set_transaction_name("{} {}".format(request.method, type(instance).__name__), override=False)
58
59 ret = await wrapped(*args, **kwargs)
60
61 elasticapm.set_context(
62 lambda: get_data_from_response(instance, client.config, constants.TRANSACTION), "response"
63 )
64 result = "HTTP {}xx".format(instance.get_status() // 100)
65 elasticapm.set_transaction_result(result, override=False)
66 client.end_transaction()
67
68 return ret
69
70
71 class TornadoHandleRequestExceptionInstrumentation(AbstractInstrumentedModule):
72 name = "tornado_handle_request_exception"
73
74 instrument_list = [("tornado.web", "RequestHandler._handle_request_exception")]
75
76 def call(self, module, method, wrapped, instance, args, kwargs):
77
78 # Late import to avoid ImportErrors
79 from tornado.web import Finish, HTTPError
80 from elasticapm.contrib.tornado.utils import get_data_from_request
81
82 e = args[0]
83 if isinstance(e, Finish):
84 # Not an error; Finish is an exception that ends a request without an error response
85 return wrapped(*args, **kwargs)
86
87 client = instance.application.elasticapm_client
88 request = instance.request
89 client.capture_exception(
90 context={"request": get_data_from_request(instance, request, client.config, constants.ERROR)}
91 )
92 if isinstance(e, HTTPError):
93 elasticapm.set_transaction_result("HTTP {}xx".format(int(e.status_code / 100)), override=False)
94 elasticapm.set_context({"status_code": e.status_code}, "response")
95 else:
96 elasticapm.set_transaction_result("HTTP 5xx", override=False)
97 elasticapm.set_context({"status_code": 500}, "response")
98
99 return wrapped(*args, **kwargs)
100
101
102 class TornadoRenderInstrumentation(AbstractInstrumentedModule):
103 name = "tornado_render"
104
105 instrument_list = [("tornado.web", "RequestHandler.render")]
106
107 def call(self, module, method, wrapped, instance, args, kwargs):
108 if "template_name" in kwargs:
109 name = kwargs["template_name"]
110 else:
111 name = args[0]
112
113 with capture_span(name, span_type="template", span_subtype="tornado", span_action="render"):
114 return wrapped(*args, **kwargs)
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticapm/instrumentation/packages/tornado.py b/elasticapm/instrumentation/packages/tornado.py
--- a/elasticapm/instrumentation/packages/tornado.py
+++ b/elasticapm/instrumentation/packages/tornado.py
@@ -43,6 +43,11 @@
instrument_list = [("tornado.web", "RequestHandler._execute")]
async def call(self, module, method, wrapped, instance, args, kwargs):
+ if not hasattr(instance.application, "elasticapm_client"):
+ # If tornado was instrumented but not as the main framework
+ # (i.e. in Flower), we should skip it.
+ return await wrapped(*args, **kwargs)
+
# Late import to avoid ImportErrors
from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response
@@ -74,6 +79,10 @@
instrument_list = [("tornado.web", "RequestHandler._handle_request_exception")]
def call(self, module, method, wrapped, instance, args, kwargs):
+ if not hasattr(instance.application, "elasticapm_client"):
+ # If tornado was instrumented but not as the main framework
+ # (i.e. in Flower), we should skip it.
+ return wrapped(*args, **kwargs)
# Late import to avoid ImportErrors
from tornado.web import Finish, HTTPError
|
{"golden_diff": "diff --git a/elasticapm/instrumentation/packages/tornado.py b/elasticapm/instrumentation/packages/tornado.py\n--- a/elasticapm/instrumentation/packages/tornado.py\n+++ b/elasticapm/instrumentation/packages/tornado.py\n@@ -43,6 +43,11 @@\n instrument_list = [(\"tornado.web\", \"RequestHandler._execute\")]\n \n async def call(self, module, method, wrapped, instance, args, kwargs):\n+ if not hasattr(instance.application, \"elasticapm_client\"):\n+ # If tornado was instrumented but not as the main framework\n+ # (i.e. in Flower), we should skip it.\n+ return await wrapped(*args, **kwargs)\n+\n # Late import to avoid ImportErrors\n from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response\n \n@@ -74,6 +79,10 @@\n instrument_list = [(\"tornado.web\", \"RequestHandler._handle_request_exception\")]\n \n def call(self, module, method, wrapped, instance, args, kwargs):\n+ if not hasattr(instance.application, \"elasticapm_client\"):\n+ # If tornado was instrumented but not as the main framework\n+ # (i.e. in Flower), we should skip it.\n+ return wrapped(*args, **kwargs)\n \n # Late import to avoid ImportErrors\n from tornado.web import Finish, HTTPError\n", "issue": "Flower hangs from version 5.4.0.\n**Describe the bug**: \r\nFlower hangs (no answer from http connection to browser) when a version >= 5.4.0 is installed\r\n\r\n**To Reproduce**\r\n\r\n1. pip install elastic-apm==5.4.0\r\n2. restart flower and try to access\r\n\r\n**Environment (please complete the following information)**\r\n- OS: Ubuntu 18.04\r\n- Python version: 3.6\r\n- Framework and version: Django 2.2\r\n- APM Server version: NA\r\n- Agent version: 5.4.0+ \r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\"\"\"\nInstrumentation for Tornado\n\"\"\"\nimport elasticapm\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.asyncio.base import AbstractInstrumentedModule, AsyncAbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils.disttracing import TraceParent\n\n\nclass TornadoRequestExecuteInstrumentation(AsyncAbstractInstrumentedModule):\n name = \"tornado_request_execute\"\n creates_transactions = True\n instrument_list = [(\"tornado.web\", \"RequestHandler._execute\")]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n # Late import to avoid ImportErrors\n from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response\n\n request = instance.request\n trace_parent = TraceParent.from_headers(request.headers)\n client = instance.application.elasticapm_client\n client.begin_transaction(\"request\", trace_parent=trace_parent)\n elasticapm.set_context(\n lambda: get_data_from_request(instance, request, client.config, constants.TRANSACTION), \"request\"\n )\n # TODO: Can we somehow incorporate the routing rule itself here?\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, type(instance).__name__), override=False)\n\n ret = await wrapped(*args, **kwargs)\n\n elasticapm.set_context(\n lambda: get_data_from_response(instance, client.config, constants.TRANSACTION), \"response\"\n )\n result = \"HTTP {}xx\".format(instance.get_status() // 100)\n elasticapm.set_transaction_result(result, override=False)\n client.end_transaction()\n\n return ret\n\n\nclass TornadoHandleRequestExceptionInstrumentation(AbstractInstrumentedModule):\n name = \"tornado_handle_request_exception\"\n\n instrument_list = [(\"tornado.web\", \"RequestHandler._handle_request_exception\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n\n # Late import to avoid ImportErrors\n from tornado.web import Finish, HTTPError\n from elasticapm.contrib.tornado.utils import get_data_from_request\n\n e = args[0]\n if isinstance(e, Finish):\n # Not an error; Finish is an exception that ends a request without an error response\n return wrapped(*args, **kwargs)\n\n client = instance.application.elasticapm_client\n request = instance.request\n client.capture_exception(\n context={\"request\": get_data_from_request(instance, request, client.config, constants.ERROR)}\n )\n if isinstance(e, HTTPError):\n elasticapm.set_transaction_result(\"HTTP {}xx\".format(int(e.status_code / 100)), override=False)\n elasticapm.set_context({\"status_code\": e.status_code}, \"response\")\n else:\n elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n return wrapped(*args, **kwargs)\n\n\nclass TornadoRenderInstrumentation(AbstractInstrumentedModule):\n name = \"tornado_render\"\n\n instrument_list = [(\"tornado.web\", \"RequestHandler.render\")]\n\n def call(self, module, method, wrapped, instance, args, 
kwargs):\n if \"template_name\" in kwargs:\n name = kwargs[\"template_name\"]\n else:\n name = args[0]\n\n with capture_span(name, span_type=\"template\", span_subtype=\"tornado\", span_action=\"render\"):\n return wrapped(*args, **kwargs)\n", "path": "elasticapm/instrumentation/packages/tornado.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\"\"\"\nInstrumentation for Tornado\n\"\"\"\nimport elasticapm\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.asyncio.base import AbstractInstrumentedModule, AsyncAbstractInstrumentedModule\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils.disttracing import TraceParent\n\n\nclass TornadoRequestExecuteInstrumentation(AsyncAbstractInstrumentedModule):\n name = \"tornado_request_execute\"\n creates_transactions = True\n instrument_list = [(\"tornado.web\", \"RequestHandler._execute\")]\n\n async def call(self, module, method, wrapped, instance, args, kwargs):\n if not hasattr(instance.application, \"elasticapm_client\"):\n # If tornado was instrumented but not as the main framework\n # (i.e. 
in Flower), we should skip it.\n return await wrapped(*args, **kwargs)\n\n # Late import to avoid ImportErrors\n from elasticapm.contrib.tornado.utils import get_data_from_request, get_data_from_response\n\n request = instance.request\n trace_parent = TraceParent.from_headers(request.headers)\n client = instance.application.elasticapm_client\n client.begin_transaction(\"request\", trace_parent=trace_parent)\n elasticapm.set_context(\n lambda: get_data_from_request(instance, request, client.config, constants.TRANSACTION), \"request\"\n )\n # TODO: Can we somehow incorporate the routing rule itself here?\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, type(instance).__name__), override=False)\n\n ret = await wrapped(*args, **kwargs)\n\n elasticapm.set_context(\n lambda: get_data_from_response(instance, client.config, constants.TRANSACTION), \"response\"\n )\n result = \"HTTP {}xx\".format(instance.get_status() // 100)\n elasticapm.set_transaction_result(result, override=False)\n client.end_transaction()\n\n return ret\n\n\nclass TornadoHandleRequestExceptionInstrumentation(AbstractInstrumentedModule):\n name = \"tornado_handle_request_exception\"\n\n instrument_list = [(\"tornado.web\", \"RequestHandler._handle_request_exception\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if not hasattr(instance.application, \"elasticapm_client\"):\n # If tornado was instrumented but not as the main framework\n # (i.e. in Flower), we should skip it.\n return wrapped(*args, **kwargs)\n\n # Late import to avoid ImportErrors\n from tornado.web import Finish, HTTPError\n from elasticapm.contrib.tornado.utils import get_data_from_request\n\n e = args[0]\n if isinstance(e, Finish):\n # Not an error; Finish is an exception that ends a request without an error response\n return wrapped(*args, **kwargs)\n\n client = instance.application.elasticapm_client\n request = instance.request\n client.capture_exception(\n context={\"request\": get_data_from_request(instance, request, client.config, constants.ERROR)}\n )\n if isinstance(e, HTTPError):\n elasticapm.set_transaction_result(\"HTTP {}xx\".format(int(e.status_code / 100)), override=False)\n elasticapm.set_context({\"status_code\": e.status_code}, \"response\")\n else:\n elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n return wrapped(*args, **kwargs)\n\n\nclass TornadoRenderInstrumentation(AbstractInstrumentedModule):\n name = \"tornado_render\"\n\n instrument_list = [(\"tornado.web\", \"RequestHandler.render\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"template_name\" in kwargs:\n name = kwargs[\"template_name\"]\n else:\n name = args[0]\n\n with capture_span(name, span_type=\"template\", span_subtype=\"tornado\", span_action=\"render\"):\n return wrapped(*args, **kwargs)\n", "path": "elasticapm/instrumentation/packages/tornado.py"}]}
| 1,717 | 310 |
gh_patches_debug_37753
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmpose-948
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False 'segmentation' labeled in 'macaque_train/test.json'
In general, the macaque dataset is missing parts of segmentation.
For the macaque dataset: whenever the body is split into separate regions, the 'segmentation' annotation is wrong. I checked the original CSV annotations and they are correct; the deviation was introduced in the generated "macaque_train/test.json" files.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/dataset/parse_macaquepose_dataset.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import csv
3 import json
4 import os
5 import time
6
7 import cv2
8 import numpy as np
9
10 np.random.seed(0)
11
12
13 def PolyArea(x, y):
14 """Calculate area of polygon given (x,y) coordinates (Shoelace formula)
15
16 :param x: np.ndarray(N, )
17 :param y: np.ndarray(N, )
18 :return: area
19 """
20 return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
21
22
23 def save_coco_anno(data_annotation,
24 img_root,
25 save_path,
26 start_img_id=0,
27 start_ann_id=0,
28 kpt_num=17):
29 """Save annotations in coco-format.
30
31 :param data_annotation: list of data annotation.
32 :param img_root: the root dir to load images.
33 :param save_path: the path to save transformed annotation file.
34 :param start_img_id: the starting point to count the image id.
35 :param start_ann_id: the starting point to count the annotation id.
36 :param kpt_num: the number of keypoint.
37 """
38 images = []
39 annotations = []
40
41 img_id = start_img_id
42 ann_id = start_ann_id
43
44 for i in range(0, len(data_annotation)):
45 data_anno = data_annotation[i]
46 image_name = data_anno[0]
47
48 img = cv2.imread(os.path.join(img_root, image_name))
49
50 kp_string = data_anno[1]
51 kps = json.loads(kp_string)
52
53 seg_string = data_anno[2]
54 segs = json.loads(seg_string)
55
56 for kp, seg in zip(kps, segs):
57 keypoints = np.zeros([kpt_num, 3])
58 for ind, p in enumerate(kp):
59 if p['position'] is None:
60 continue
61 else:
62 keypoints[ind, 0] = p['position'][0]
63 keypoints[ind, 1] = p['position'][1]
64 keypoints[ind, 2] = 2
65
66 segmentation = np.array(seg[0]['segment'])
67 max_x, max_y = segmentation.max(0)
68 min_x, min_y = segmentation.min(0)
69
70 anno = {}
71 anno['keypoints'] = keypoints.reshape(-1).tolist()
72 anno['image_id'] = img_id
73 anno['id'] = ann_id
74 anno['num_keypoints'] = int(sum(keypoints[:, 2] > 0))
75 anno['bbox'] = [
76 float(min_x),
77 float(min_y),
78 float(max_x - min_x + 1),
79 float(max_y - min_y + 1)
80 ]
81 anno['iscrowd'] = 0
82 anno['area'] = float(
83 PolyArea(segmentation[:, 0], segmentation[:, 1]))
84 anno['category_id'] = 1
85 anno['segmentation'] = segmentation.reshape([1, -1]).tolist()
86
87 annotations.append(anno)
88 ann_id += 1
89
90 image = {}
91 image['id'] = img_id
92 image['file_name'] = image_name
93 image['height'] = img.shape[0]
94 image['width'] = img.shape[1]
95
96 images.append(image)
97 img_id += 1
98
99 cocotype = {}
100
101 cocotype['info'] = {}
102 cocotype['info']['description'] = 'MacaquePose Generated by MMPose Team'
103 cocotype['info']['version'] = '1.0'
104 cocotype['info']['year'] = time.strftime('%Y', time.localtime())
105 cocotype['info']['date_created'] = time.strftime('%Y/%m/%d',
106 time.localtime())
107
108 cocotype['images'] = images
109 cocotype['annotations'] = annotations
110 cocotype['categories'] = [{
111 'supercategory':
112 'animal',
113 'id':
114 1,
115 'name':
116 'macaque',
117 'keypoints': [
118 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
119 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
120 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee',
121 'right_knee', 'left_ankle', 'right_ankle'
122 ],
123 'skeleton': [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12],
124 [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], [9, 11], [2, 3],
125 [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]]
126 }]
127
128 os.makedirs(os.path.dirname(save_path), exist_ok=True)
129 json.dump(cocotype, open(save_path, 'w'), indent=4)
130 print('number of images:', img_id)
131 print('number of annotations:', ann_id)
132 print(f'done {save_path}')
133
134
135 dataset_dir = '/data/macaque/'
136
137 with open(os.path.join(dataset_dir, 'annotations.csv'), 'r') as fp:
138 data_annotation_all = list(csv.reader(fp, delimiter=','))[1:]
139
140 np.random.shuffle(data_annotation_all)
141
142 data_annotation_train = data_annotation_all[0:12500]
143 data_annotation_val = data_annotation_all[12500:]
144
145 img_root = os.path.join(dataset_dir, 'images')
146 save_coco_anno(
147 data_annotation_train,
148 img_root,
149 os.path.join(dataset_dir, 'annotations', 'macaque_train.json'),
150 kpt_num=17)
151 save_coco_anno(
152 data_annotation_val,
153 img_root,
154 os.path.join(dataset_dir, 'annotations', 'macaque_test.json'),
155 start_img_id=12500,
156 start_ann_id=15672,
157 kpt_num=17)
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/dataset/parse_macaquepose_dataset.py b/tools/dataset/parse_macaquepose_dataset.py
--- a/tools/dataset/parse_macaquepose_dataset.py
+++ b/tools/dataset/parse_macaquepose_dataset.py
@@ -10,14 +10,22 @@
np.random.seed(0)
-def PolyArea(x, y):
+def get_poly_area(x, y):
"""Calculate area of polygon given (x,y) coordinates (Shoelace formula)
:param x: np.ndarray(N, )
:param y: np.ndarray(N, )
:return: area
"""
- return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
+ return float(0.5 *
+ np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))))
+
+
+def get_seg_area(segmentations):
+ area = 0
+ for segmentation in segmentations:
+ area += get_poly_area(segmentation[:, 0], segmentation[:, 1])
+ return area
def save_coco_anno(data_annotation,
@@ -63,9 +71,26 @@
keypoints[ind, 1] = p['position'][1]
keypoints[ind, 2] = 2
- segmentation = np.array(seg[0]['segment'])
- max_x, max_y = segmentation.max(0)
- min_x, min_y = segmentation.min(0)
+ segmentations = []
+
+ max_x = -1
+ max_y = -1
+ min_x = 999999
+ min_y = 999999
+ for segm in seg:
+ if len(segm['segment']) == 0:
+ continue
+
+ segmentation = np.array(segm['segment'])
+ segmentations.append(segmentation)
+
+ _max_x, _max_y = segmentation.max(0)
+ _min_x, _min_y = segmentation.min(0)
+
+ max_x = max(max_x, _max_x)
+ max_y = max(max_y, _max_y)
+ min_x = min(min_x, _min_x)
+ min_y = min(min_y, _min_y)
anno = {}
anno['keypoints'] = keypoints.reshape(-1).tolist()
@@ -79,10 +104,11 @@
float(max_y - min_y + 1)
]
anno['iscrowd'] = 0
- anno['area'] = float(
- PolyArea(segmentation[:, 0], segmentation[:, 1]))
+ anno['area'] = get_seg_area(segmentations)
anno['category_id'] = 1
- anno['segmentation'] = segmentation.reshape([1, -1]).tolist()
+ anno['segmentation'] = [
+ seg.reshape(-1).tolist() for seg in segmentations
+ ]
annotations.append(anno)
ann_id += 1
@@ -133,7 +159,6 @@
dataset_dir = '/data/macaque/'
-
with open(os.path.join(dataset_dir, 'annotations.csv'), 'r') as fp:
data_annotation_all = list(csv.reader(fp, delimiter=','))[1:]
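The key change is that the bounding box, area, and segmentation now cover every polygon segment of an instance instead of only `seg[0]`, with the shoelace areas summed across segments. A quick self-contained check of that summed area (toy coordinates, not MacaquePose data):
```python
import numpy as np

def get_poly_area(x, y):
    # Shoelace formula, returned as a plain float so it is JSON-serialisable
    return float(0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))))

# One instance split into two unit squares, as happens when a monkey's body
# is cut into disjoint regions: the areas are summed rather than dropped.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
segments = [square, square + [2.0, 0.0]]

total = sum(get_poly_area(s[:, 0], s[:, 1]) for s in segments)
print(total)  # 2.0
```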
|
{"golden_diff": "diff --git a/tools/dataset/parse_macaquepose_dataset.py b/tools/dataset/parse_macaquepose_dataset.py\n--- a/tools/dataset/parse_macaquepose_dataset.py\n+++ b/tools/dataset/parse_macaquepose_dataset.py\n@@ -10,14 +10,22 @@\n np.random.seed(0)\n \n \n-def PolyArea(x, y):\n+def get_poly_area(x, y):\n \"\"\"Calculate area of polygon given (x,y) coordinates (Shoelace formula)\n \n :param x: np.ndarray(N, )\n :param y: np.ndarray(N, )\n :return: area\n \"\"\"\n- return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))\n+ return float(0.5 *\n+ np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))))\n+\n+\n+def get_seg_area(segmentations):\n+ area = 0\n+ for segmentation in segmentations:\n+ area += get_poly_area(segmentation[:, 0], segmentation[:, 1])\n+ return area\n \n \n def save_coco_anno(data_annotation,\n@@ -63,9 +71,26 @@\n keypoints[ind, 1] = p['position'][1]\n keypoints[ind, 2] = 2\n \n- segmentation = np.array(seg[0]['segment'])\n- max_x, max_y = segmentation.max(0)\n- min_x, min_y = segmentation.min(0)\n+ segmentations = []\n+\n+ max_x = -1\n+ max_y = -1\n+ min_x = 999999\n+ min_y = 999999\n+ for segm in seg:\n+ if len(segm['segment']) == 0:\n+ continue\n+\n+ segmentation = np.array(segm['segment'])\n+ segmentations.append(segmentation)\n+\n+ _max_x, _max_y = segmentation.max(0)\n+ _min_x, _min_y = segmentation.min(0)\n+\n+ max_x = max(max_x, _max_x)\n+ max_y = max(max_y, _max_y)\n+ min_x = min(min_x, _min_x)\n+ min_y = min(min_y, _min_y)\n \n anno = {}\n anno['keypoints'] = keypoints.reshape(-1).tolist()\n@@ -79,10 +104,11 @@\n float(max_y - min_y + 1)\n ]\n anno['iscrowd'] = 0\n- anno['area'] = float(\n- PolyArea(segmentation[:, 0], segmentation[:, 1]))\n+ anno['area'] = get_seg_area(segmentations)\n anno['category_id'] = 1\n- anno['segmentation'] = segmentation.reshape([1, -1]).tolist()\n+ anno['segmentation'] = [\n+ seg.reshape(-1).tolist() for seg in segmentations\n+ ]\n \n annotations.append(anno)\n ann_id += 1\n@@ -133,7 +159,6 @@\n \n \n dataset_dir = '/data/macaque/'\n-\n with open(os.path.join(dataset_dir, 'annotations.csv'), 'r') as fp:\n data_annotation_all = list(csv.reader(fp, delimiter=','))[1:]\n", "issue": "False 'segmentation' labeled in 'macaque_train/test.json'\nIn general, the macaque dataset is missing parts of segmentation.\r\n\r\n\u7334\u5b50\u7684\u6570\u636e\u96c6\uff1a\u6bcf\u5f53\u8eab\u4f53\u4f4d\u7f6e\u662f\u5206\u5f00\u7684\u65f6\u5019\uff0c\u2018'segmentation\u2019 \u7684\u6807\u6ce8\u90fd\u6709\u95ee\u9898\u3002\u6211check\u4e86\u539f\u59cbcsv\u6807\u6ce8\u6570\u636e\uff0c\u662f\u6b63\u786e\u7684\uff1b\u662f\u4f60\u4eec\u5236\u4f5c\u7684\u201cmacaque_train/test.json\u201d\u51fa\u73b0\u4e86\u504f\u5dee\u3002\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport csv\nimport json\nimport os\nimport time\n\nimport cv2\nimport numpy as np\n\nnp.random.seed(0)\n\n\ndef PolyArea(x, y):\n \"\"\"Calculate area of polygon given (x,y) coordinates (Shoelace formula)\n\n :param x: np.ndarray(N, )\n :param y: np.ndarray(N, )\n :return: area\n \"\"\"\n return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))\n\n\ndef save_coco_anno(data_annotation,\n img_root,\n save_path,\n start_img_id=0,\n start_ann_id=0,\n kpt_num=17):\n \"\"\"Save annotations in coco-format.\n\n :param data_annotation: list of data annotation.\n :param img_root: the root dir to load images.\n :param save_path: the path to save transformed annotation file.\n :param start_img_id: the starting point to count the image id.\n :param start_ann_id: the starting point to count the annotation id.\n :param kpt_num: the number of keypoint.\n \"\"\"\n images = []\n annotations = []\n\n img_id = start_img_id\n ann_id = start_ann_id\n\n for i in range(0, len(data_annotation)):\n data_anno = data_annotation[i]\n image_name = data_anno[0]\n\n img = cv2.imread(os.path.join(img_root, image_name))\n\n kp_string = data_anno[1]\n kps = json.loads(kp_string)\n\n seg_string = data_anno[2]\n segs = json.loads(seg_string)\n\n for kp, seg in zip(kps, segs):\n keypoints = np.zeros([kpt_num, 3])\n for ind, p in enumerate(kp):\n if p['position'] is None:\n continue\n else:\n keypoints[ind, 0] = p['position'][0]\n keypoints[ind, 1] = p['position'][1]\n keypoints[ind, 2] = 2\n\n segmentation = np.array(seg[0]['segment'])\n max_x, max_y = segmentation.max(0)\n min_x, min_y = segmentation.min(0)\n\n anno = {}\n anno['keypoints'] = keypoints.reshape(-1).tolist()\n anno['image_id'] = img_id\n anno['id'] = ann_id\n anno['num_keypoints'] = int(sum(keypoints[:, 2] > 0))\n anno['bbox'] = [\n float(min_x),\n float(min_y),\n float(max_x - min_x + 1),\n float(max_y - min_y + 1)\n ]\n anno['iscrowd'] = 0\n anno['area'] = float(\n PolyArea(segmentation[:, 0], segmentation[:, 1]))\n anno['category_id'] = 1\n anno['segmentation'] = segmentation.reshape([1, -1]).tolist()\n\n annotations.append(anno)\n ann_id += 1\n\n image = {}\n image['id'] = img_id\n image['file_name'] = image_name\n image['height'] = img.shape[0]\n image['width'] = img.shape[1]\n\n images.append(image)\n img_id += 1\n\n cocotype = {}\n\n cocotype['info'] = {}\n cocotype['info']['description'] = 'MacaquePose Generated by MMPose Team'\n cocotype['info']['version'] = '1.0'\n cocotype['info']['year'] = time.strftime('%Y', time.localtime())\n cocotype['info']['date_created'] = time.strftime('%Y/%m/%d',\n time.localtime())\n\n cocotype['images'] = images\n cocotype['annotations'] = annotations\n cocotype['categories'] = [{\n 'supercategory':\n 'animal',\n 'id':\n 1,\n 'name':\n 'macaque',\n 'keypoints': [\n 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',\n 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',\n 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee',\n 'right_knee', 'left_ankle', 'right_ankle'\n ],\n 'skeleton': [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12],\n [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], [9, 11], [2, 3],\n [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]]\n }]\n\n os.makedirs(os.path.dirname(save_path), exist_ok=True)\n json.dump(cocotype, open(save_path, 'w'), indent=4)\n print('number of images:', img_id)\n print('number of annotations:', ann_id)\n print(f'done {save_path}')\n\n\ndataset_dir = '/data/macaque/'\n\nwith open(os.path.join(dataset_dir, 
'annotations.csv'), 'r') as fp:\n data_annotation_all = list(csv.reader(fp, delimiter=','))[1:]\n\nnp.random.shuffle(data_annotation_all)\n\ndata_annotation_train = data_annotation_all[0:12500]\ndata_annotation_val = data_annotation_all[12500:]\n\nimg_root = os.path.join(dataset_dir, 'images')\nsave_coco_anno(\n data_annotation_train,\n img_root,\n os.path.join(dataset_dir, 'annotations', 'macaque_train.json'),\n kpt_num=17)\nsave_coco_anno(\n data_annotation_val,\n img_root,\n os.path.join(dataset_dir, 'annotations', 'macaque_test.json'),\n start_img_id=12500,\n start_ann_id=15672,\n kpt_num=17)\n", "path": "tools/dataset/parse_macaquepose_dataset.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport csv\nimport json\nimport os\nimport time\n\nimport cv2\nimport numpy as np\n\nnp.random.seed(0)\n\n\ndef get_poly_area(x, y):\n \"\"\"Calculate area of polygon given (x,y) coordinates (Shoelace formula)\n\n :param x: np.ndarray(N, )\n :param y: np.ndarray(N, )\n :return: area\n \"\"\"\n return float(0.5 *\n np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))))\n\n\ndef get_seg_area(segmentations):\n area = 0\n for segmentation in segmentations:\n area += get_poly_area(segmentation[:, 0], segmentation[:, 1])\n return area\n\n\ndef save_coco_anno(data_annotation,\n img_root,\n save_path,\n start_img_id=0,\n start_ann_id=0,\n kpt_num=17):\n \"\"\"Save annotations in coco-format.\n\n :param data_annotation: list of data annotation.\n :param img_root: the root dir to load images.\n :param save_path: the path to save transformed annotation file.\n :param start_img_id: the starting point to count the image id.\n :param start_ann_id: the starting point to count the annotation id.\n :param kpt_num: the number of keypoint.\n \"\"\"\n images = []\n annotations = []\n\n img_id = start_img_id\n ann_id = start_ann_id\n\n for i in range(0, len(data_annotation)):\n data_anno = data_annotation[i]\n image_name = data_anno[0]\n\n img = cv2.imread(os.path.join(img_root, image_name))\n\n kp_string = data_anno[1]\n kps = json.loads(kp_string)\n\n seg_string = data_anno[2]\n segs = json.loads(seg_string)\n\n for kp, seg in zip(kps, segs):\n keypoints = np.zeros([kpt_num, 3])\n for ind, p in enumerate(kp):\n if p['position'] is None:\n continue\n else:\n keypoints[ind, 0] = p['position'][0]\n keypoints[ind, 1] = p['position'][1]\n keypoints[ind, 2] = 2\n\n segmentations = []\n\n max_x = -1\n max_y = -1\n min_x = 999999\n min_y = 999999\n for segm in seg:\n if len(segm['segment']) == 0:\n continue\n\n segmentation = np.array(segm['segment'])\n segmentations.append(segmentation)\n\n _max_x, _max_y = segmentation.max(0)\n _min_x, _min_y = segmentation.min(0)\n\n max_x = max(max_x, _max_x)\n max_y = max(max_y, _max_y)\n min_x = min(min_x, _min_x)\n min_y = min(min_y, _min_y)\n\n anno = {}\n anno['keypoints'] = keypoints.reshape(-1).tolist()\n anno['image_id'] = img_id\n anno['id'] = ann_id\n anno['num_keypoints'] = int(sum(keypoints[:, 2] > 0))\n anno['bbox'] = [\n float(min_x),\n float(min_y),\n float(max_x - min_x + 1),\n float(max_y - min_y + 1)\n ]\n anno['iscrowd'] = 0\n anno['area'] = get_seg_area(segmentations)\n anno['category_id'] = 1\n anno['segmentation'] = [\n seg.reshape(-1).tolist() for seg in segmentations\n ]\n\n annotations.append(anno)\n ann_id += 1\n\n image = {}\n image['id'] = img_id\n image['file_name'] = image_name\n image['height'] = img.shape[0]\n image['width'] = img.shape[1]\n\n images.append(image)\n img_id += 1\n\n cocotype = {}\n\n 
cocotype['info'] = {}\n cocotype['info']['description'] = 'MacaquePose Generated by MMPose Team'\n cocotype['info']['version'] = '1.0'\n cocotype['info']['year'] = time.strftime('%Y', time.localtime())\n cocotype['info']['date_created'] = time.strftime('%Y/%m/%d',\n time.localtime())\n\n cocotype['images'] = images\n cocotype['annotations'] = annotations\n cocotype['categories'] = [{\n 'supercategory':\n 'animal',\n 'id':\n 1,\n 'name':\n 'macaque',\n 'keypoints': [\n 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',\n 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',\n 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee',\n 'right_knee', 'left_ankle', 'right_ankle'\n ],\n 'skeleton': [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12],\n [7, 13], [6, 7], [6, 8], [7, 9], [8, 10], [9, 11], [2, 3],\n [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]]\n }]\n\n os.makedirs(os.path.dirname(save_path), exist_ok=True)\n json.dump(cocotype, open(save_path, 'w'), indent=4)\n print('number of images:', img_id)\n print('number of annotations:', ann_id)\n print(f'done {save_path}')\n\n\ndataset_dir = '/data/macaque/'\nwith open(os.path.join(dataset_dir, 'annotations.csv'), 'r') as fp:\n data_annotation_all = list(csv.reader(fp, delimiter=','))[1:]\n\nnp.random.shuffle(data_annotation_all)\n\ndata_annotation_train = data_annotation_all[0:12500]\ndata_annotation_val = data_annotation_all[12500:]\n\nimg_root = os.path.join(dataset_dir, 'images')\nsave_coco_anno(\n data_annotation_train,\n img_root,\n os.path.join(dataset_dir, 'annotations', 'macaque_train.json'),\n kpt_num=17)\nsave_coco_anno(\n data_annotation_val,\n img_root,\n os.path.join(dataset_dir, 'annotations', 'macaque_test.json'),\n start_img_id=12500,\n start_ann_id=15672,\n kpt_num=17)\n", "path": "tools/dataset/parse_macaquepose_dataset.py"}]}
| 2,143 | 745 |
gh_patches_debug_11264
|
rasdani/github-patches
|
git_diff
|
ckan__ckan-5758
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception when deleting resource if datastore table should exist but does not
**2.8.4**
**Describe the bug**
If, for whatever reason, you end up with a resource for which datastore_active is set in the resource extras, but the datastore table does not actually exist, an exception is thrown when trying to delete this resource.
**Steps to reproduce**
1. Create a resource and make sure data is uploaded to the datastore
2. Manually delete the database table of this resource from the database
3. Try to delete this resource via the ckan UI
4. An exception is thrown
**Expected behavior**
Before deleting, check whether the datastore table actually exists. If it doesn't exist, just skip the delete step. Better than throwing an exception.
**Additional details**
Not sure how I managed to get into this inconsistent state. Might not even be CKAN's fault since we had some issues with our persistence infrastructure/volumes.
Stack trace here:
```
File '/srv/app/src/ckan/ckan/controllers/package.py', line 1175 in resource_delete
get_action('resource_delete')(context, {'id': resource_id})
File '/srv/app/src/ckan/ckan/logic/__init__.py', line 466 in wrapped
result = _action(context, data_dict, **kw)
File '/srv/app/src/ckan/ckan/logic/action/delete.py', line 204 in resource_delete
plugin.after_delete(context, pkg_dict.get('resources', []))
File '/srv/app/src/ckan/ckanext/datastore/plugin.py', line 161 in after_delete
'resource_id': res.id,
File '/srv/app/src/ckan/ckanext/datastore/backend/postgres.py', line 1720 in delete
data_dict['resource_id'])
File '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 939 in execute
return self._execute_text(object, multiparams, params)
File '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 1097 in _execute_text
statement, parameters
File '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 1189 in _execute_context
context)
File '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 1402 in _handle_dbapi_exception
exc_info
File '/usr/lib/python2.7/site-packages/sqlalchemy/util/compat.py', line 203 in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 1182 in _execute_context
context)
File '/usr/lib/python2.7/site-packages/sqlalchemy/engine/default.py', line 470 in do_execute
cursor.execute(statement, parameters)
ProgrammingError: (psycopg2.ProgrammingError) table "f03c4532-bc47-4ca0-bf73-f96e42082f49" does not exist
[SQL: 'DROP TABLE "f03c4532-bc47-4ca0-bf73-f96e42082f49" CASCADE']
```
I will provide a pull request.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext/datastore/plugin.py`
Content:
```
1 # encoding: utf-8
2
3 import logging
4
5 from six import string_types
6
7 import ckan.plugins as p
8 import ckan.logic as logic
9 import ckan.model as model
10 from ckan.model.core import State
11
12 import ckanext.datastore.helpers as datastore_helpers
13 import ckanext.datastore.logic.action as action
14 import ckanext.datastore.logic.auth as auth
15 import ckanext.datastore.interfaces as interfaces
16 from ckanext.datastore.backend import (
17 DatastoreException,
18 _parse_sort_clause,
19 DatastoreBackend
20 )
21 from ckanext.datastore.backend.postgres import DatastorePostgresqlBackend
22 import ckanext.datastore.blueprint as view
23
24 log = logging.getLogger(__name__)
25 _get_or_bust = logic.get_or_bust
26
27 DEFAULT_FORMATS = []
28
29 ValidationError = p.toolkit.ValidationError
30
31
32 class DatastorePlugin(p.SingletonPlugin):
33 p.implements(p.IConfigurable, inherit=True)
34 p.implements(p.IConfigurer)
35 p.implements(p.IActions)
36 p.implements(p.IAuthFunctions)
37 p.implements(p.IRoutes, inherit=True)
38 p.implements(p.IResourceController, inherit=True)
39 p.implements(p.ITemplateHelpers)
40 p.implements(p.IForkObserver, inherit=True)
41 p.implements(interfaces.IDatastore, inherit=True)
42 p.implements(interfaces.IDatastoreBackend, inherit=True)
43 p.implements(p.IBlueprint)
44
45 resource_show_action = None
46
47 def __new__(cls, *args, **kwargs):
48 idatastore_extensions = p.PluginImplementations(interfaces.IDatastore)
49 idatastore_extensions = idatastore_extensions.extensions()
50
51 if idatastore_extensions and idatastore_extensions[0].__class__ != cls:
52 msg = ('The "datastore" plugin must be the first IDatastore '
53 'plugin loaded. Change the order it is loaded in '
54 '"ckan.plugins" in your CKAN .ini file and try again.')
55 raise DatastoreException(msg)
56
57 return super(cls, cls).__new__(cls, *args, **kwargs)
58
59 # IDatastoreBackend
60
61 def register_backends(self):
62 return {
63 'postgresql': DatastorePostgresqlBackend,
64 'postgres': DatastorePostgresqlBackend,
65 }
66
67 # IConfigurer
68
69 def update_config(self, config):
70 DatastoreBackend.register_backends()
71 DatastoreBackend.set_active_backend(config)
72
73 templates_base = config.get('ckan.base_templates_folder')
74
75 p.toolkit.add_template_directory(config, templates_base)
76 self.backend = DatastoreBackend.get_active_backend()
77
78 # IConfigurable
79
80 def configure(self, config):
81 self.config = config
82 self.backend.configure(config)
83
84 # IActions
85
86 def get_actions(self):
87 actions = {
88 'datastore_create': action.datastore_create,
89 'datastore_upsert': action.datastore_upsert,
90 'datastore_delete': action.datastore_delete,
91 'datastore_search': action.datastore_search,
92 'datastore_info': action.datastore_info,
93 'datastore_function_create': action.datastore_function_create,
94 'datastore_function_delete': action.datastore_function_delete,
95 'datastore_run_triggers': action.datastore_run_triggers,
96 }
97 if getattr(self.backend, 'enable_sql_search', False):
98 # Only enable search_sql if the config/backend does not disable it
99 actions.update({
100 'datastore_search_sql': action.datastore_search_sql,
101 })
102 return actions
103
104 # IAuthFunctions
105
106 def get_auth_functions(self):
107 return {
108 'datastore_create': auth.datastore_create,
109 'datastore_upsert': auth.datastore_upsert,
110 'datastore_delete': auth.datastore_delete,
111 'datastore_info': auth.datastore_info,
112 'datastore_search': auth.datastore_search,
113 'datastore_search_sql': auth.datastore_search_sql,
114 'datastore_change_permissions': auth.datastore_change_permissions,
115 'datastore_function_create': auth.datastore_function_create,
116 'datastore_function_delete': auth.datastore_function_delete,
117 'datastore_run_triggers': auth.datastore_run_triggers,
118 }
119
120 # IResourceController
121
122 def before_show(self, resource_dict):
123 # Modify the resource url of datastore resources so that
124 # they link to the datastore dumps.
125 if resource_dict.get('url_type') == 'datastore':
126 resource_dict['url'] = p.toolkit.url_for(
127 'datastore.dump', resource_id=resource_dict['id'],
128 qualified=True)
129
130 if 'datastore_active' not in resource_dict:
131 resource_dict[u'datastore_active'] = False
132
133 return resource_dict
134
135 def after_delete(self, context, resources):
136 model = context['model']
137 pkg = context['package']
138 res_query = model.Session.query(model.Resource)
139 query = res_query.filter(
140 model.Resource.package_id == pkg.id,
141 model.Resource.state == State.DELETED
142 )
143 deleted = [
144 res for res in query.all()
145 if res.extras.get('datastore_active') is True]
146
147 for res in deleted:
148 self.backend.delete(context, {
149 'resource_id': res.id,
150 })
151 res.extras['datastore_active'] = False
152 res_query.filter_by(id=res.id).update(
153 {'extras': res.extras}, synchronize_session=False)
154
155 # IDatastore
156
157 def datastore_validate(self, context, data_dict, fields_types):
158 column_names = list(fields_types.keys())
159
160 filters = data_dict.get('filters', {})
161 for key in list(filters.keys()):
162 if key in fields_types:
163 del filters[key]
164
165 q = data_dict.get('q')
166 if q:
167 if isinstance(q, string_types):
168 del data_dict['q']
169 column_names.append(u'rank')
170 elif isinstance(q, dict):
171 for key in list(q.keys()):
172 if key in fields_types and isinstance(q[key],
173 string_types):
174 column_names.append(u'rank ' + key)
175 del q[key]
176
177 fields = data_dict.get('fields')
178 if fields:
179 data_dict['fields'] = list(set(fields) - set(column_names))
180
181 language = data_dict.get('language')
182 if language:
183 if isinstance(language, string_types):
184 del data_dict['language']
185
186 plain = data_dict.get('plain')
187 if plain:
188 if isinstance(plain, bool):
189 del data_dict['plain']
190
191 distinct = data_dict.get('distinct')
192 if distinct:
193 if isinstance(distinct, bool):
194 del data_dict['distinct']
195
196 sort_clauses = data_dict.get('sort')
197 if sort_clauses:
198 invalid_clauses = [
199 c for c in sort_clauses
200 if not _parse_sort_clause(
201 c, fields_types
202 )
203 ]
204 data_dict['sort'] = invalid_clauses
205
206 limit = data_dict.get('limit')
207 if limit:
208 is_positive_int = datastore_helpers.validate_int(limit,
209 non_negative=True)
210 is_all = isinstance(limit, string_types) and limit.lower() == 'all'
211 if is_positive_int or is_all:
212 del data_dict['limit']
213
214 offset = data_dict.get('offset')
215 if offset:
216 is_positive_int = datastore_helpers.validate_int(offset,
217 non_negative=True)
218 if is_positive_int:
219 del data_dict['offset']
220
221 return data_dict
222
223 def datastore_delete(self, context, data_dict, fields_types, query_dict):
224 hook = getattr(self.backend, 'datastore_delete', None)
225 if hook:
226 query_dict = hook(context, data_dict, fields_types, query_dict)
227 return query_dict
228
229 def datastore_search(self, context, data_dict, fields_types, query_dict):
230 hook = getattr(self.backend, 'datastore_search', None)
231 if hook:
232 query_dict = hook(context, data_dict, fields_types, query_dict)
233 return query_dict
234
235 def get_helpers(self):
236 return {
237 'datastore_dictionary': datastore_helpers.datastore_dictionary}
238
239 # IForkObserver
240
241 def before_fork(self):
242 try:
243 before_fork = self.backend.before_fork
244 except AttributeError:
245 pass
246 else:
247 before_fork()
248
249 # IBlueprint
250
251 def get_blueprint(self):
252 u'''Return a Flask Blueprint object to be registered by the app.'''
253
254 return view.datastore
255
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext/datastore/plugin.py b/ckanext/datastore/plugin.py
--- a/ckanext/datastore/plugin.py
+++ b/ckanext/datastore/plugin.py
@@ -145,9 +145,10 @@
if res.extras.get('datastore_active') is True]
for res in deleted:
- self.backend.delete(context, {
- 'resource_id': res.id,
- })
+ if self.backend.resource_exists(res.id):
+ self.backend.delete(context, {
+ 'resource_id': res.id,
+ })
res.extras['datastore_active'] = False
res_query.filter_by(id=res.id).update(
{'extras': res.extras}, synchronize_session=False)
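The guard keeps resource deletion idempotent when the extras claim a datastore table exists but it has already been dropped; `resource_exists` is the backend method the patch relies on. A complementary option at the SQL level is `DROP TABLE IF EXISTS`, which turns the failing statement from the traceback into a no-op. A minimal sketch of that idea — the DSN is a placeholder, not CKAN configuration:
```python
from sqlalchemy import create_engine, text

# Illustrative only: make the drop itself tolerant of a missing table.
engine = create_engine('postgresql:///datastore_default')  # assumed DSN
resource_id = 'f03c4532-bc47-4ca0-bf73-f96e42082f49'

with engine.begin() as conn:
    # IF EXISTS skips missing tables instead of raising ProgrammingError
    conn.execute(text('DROP TABLE IF EXISTS "{}" CASCADE'.format(resource_id)))
```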
|
{"golden_diff": "diff --git a/ckanext/datastore/plugin.py b/ckanext/datastore/plugin.py\n--- a/ckanext/datastore/plugin.py\n+++ b/ckanext/datastore/plugin.py\n@@ -145,9 +145,10 @@\n if res.extras.get('datastore_active') is True]\n \n for res in deleted:\n- self.backend.delete(context, {\n- 'resource_id': res.id,\n- })\n+ if self.backend.resource_exists(res.id):\n+ self.backend.delete(context, {\n+ 'resource_id': res.id,\n+ })\n res.extras['datastore_active'] = False\n res_query.filter_by(id=res.id).update(\n {'extras': res.extras}, synchronize_session=False)\n", "issue": "Exception when deleting resource if datastore table should exist but does not\n**2.8.4**\r\n\r\n**Describe the bug**\r\nIf for whatever reason, you end up with a resource for which datastore_active is set in the resource extras, but the datastore table does not actually exist, an exception is thown when trying to delete this resource.\r\n\r\n**Steps to reproduce**\r\n1. Create a resource and make sure data is uploaded to the datastore\r\n2. Manually delete the database table of this resource from the database\r\n3. Try to delete this resource via the ckan UI\r\n4. An exception is thrown\r\n\r\n**Expected behavior**\r\nBefore deleting, check whether the datastore table actually exists. If it doesn't exist, just skip the delete step. Better than throwing an exception.\r\n\r\n**Additional details**\r\nNot sure how I managed to get into this inconsistent state. Might not even be CKAN's fault since we had some issues with our persistence infrastructure/volumes.\r\n\r\nStack trace here:\r\n```\r\nFile '/srv/app/src/ckan/ckan/controllers/package.py', line 1175 in resource_delete\r\n get_action('resource_delete')(context, {'id': resource_id})\r\nFile '/srv/app/src/ckan/ckan/logic/__init__.py', line 466 in wrapped\r\n result = _action(context, data_dict, **kw)\r\nFile '/srv/app/src/ckan/ckan/logic/action/delete.py', line 204 in resource_delete\r\n plugin.after_delete(context, pkg_dict.get('resources', []))\r\nFile '/srv/app/src/ckan/ckanext/datastore/plugin.py', line 161 in after_delete\r\n 'resource_id': res.id,\r\nFile '/srv/app/src/ckan/ckanext/datastore/backend/postgres.py', line 1720 in delete\r\n data_dict['resource_id'])\r\nFile '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 939 in execute\r\n return self._execute_text(object, multiparams, params)\r\nFile '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 1097 in _execute_text\r\n statement, parameters\r\nFile '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 1189 in _execute_context\r\n context)\r\nFile '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 1402 in _handle_dbapi_exception\r\n exc_info\r\nFile '/usr/lib/python2.7/site-packages/sqlalchemy/util/compat.py', line 203 in raise_from_cause\r\n reraise(type(exception), exception, tb=exc_tb, cause=cause)\r\nFile '/usr/lib/python2.7/site-packages/sqlalchemy/engine/base.py', line 1182 in _execute_context\r\n context)\r\nFile '/usr/lib/python2.7/site-packages/sqlalchemy/engine/default.py', line 470 in do_execute\r\n cursor.execute(statement, parameters)\r\nProgrammingError: (psycopg2.ProgrammingError) table \"f03c4532-bc47-4ca0-bf73-f96e42082f49\" does not exist\r\n [SQL: 'DROP TABLE \"f03c4532-bc47-4ca0-bf73-f96e42082f49\" CASCADE']\r\n```\r\n\r\nI will provide a pull request.\n", "before_files": [{"content": "# encoding: utf-8\n\nimport logging\n\nfrom six import string_types\n\nimport ckan.plugins as p\nimport ckan.logic as logic\nimport ckan.model as 
model\nfrom ckan.model.core import State\n\nimport ckanext.datastore.helpers as datastore_helpers\nimport ckanext.datastore.logic.action as action\nimport ckanext.datastore.logic.auth as auth\nimport ckanext.datastore.interfaces as interfaces\nfrom ckanext.datastore.backend import (\n DatastoreException,\n _parse_sort_clause,\n DatastoreBackend\n)\nfrom ckanext.datastore.backend.postgres import DatastorePostgresqlBackend\nimport ckanext.datastore.blueprint as view\n\nlog = logging.getLogger(__name__)\n_get_or_bust = logic.get_or_bust\n\nDEFAULT_FORMATS = []\n\nValidationError = p.toolkit.ValidationError\n\n\nclass DatastorePlugin(p.SingletonPlugin):\n p.implements(p.IConfigurable, inherit=True)\n p.implements(p.IConfigurer)\n p.implements(p.IActions)\n p.implements(p.IAuthFunctions)\n p.implements(p.IRoutes, inherit=True)\n p.implements(p.IResourceController, inherit=True)\n p.implements(p.ITemplateHelpers)\n p.implements(p.IForkObserver, inherit=True)\n p.implements(interfaces.IDatastore, inherit=True)\n p.implements(interfaces.IDatastoreBackend, inherit=True)\n p.implements(p.IBlueprint)\n\n resource_show_action = None\n\n def __new__(cls, *args, **kwargs):\n idatastore_extensions = p.PluginImplementations(interfaces.IDatastore)\n idatastore_extensions = idatastore_extensions.extensions()\n\n if idatastore_extensions and idatastore_extensions[0].__class__ != cls:\n msg = ('The \"datastore\" plugin must be the first IDatastore '\n 'plugin loaded. Change the order it is loaded in '\n '\"ckan.plugins\" in your CKAN .ini file and try again.')\n raise DatastoreException(msg)\n\n return super(cls, cls).__new__(cls, *args, **kwargs)\n\n # IDatastoreBackend\n\n def register_backends(self):\n return {\n 'postgresql': DatastorePostgresqlBackend,\n 'postgres': DatastorePostgresqlBackend,\n }\n\n # IConfigurer\n\n def update_config(self, config):\n DatastoreBackend.register_backends()\n DatastoreBackend.set_active_backend(config)\n\n templates_base = config.get('ckan.base_templates_folder')\n\n p.toolkit.add_template_directory(config, templates_base)\n self.backend = DatastoreBackend.get_active_backend()\n\n # IConfigurable\n\n def configure(self, config):\n self.config = config\n self.backend.configure(config)\n\n # IActions\n\n def get_actions(self):\n actions = {\n 'datastore_create': action.datastore_create,\n 'datastore_upsert': action.datastore_upsert,\n 'datastore_delete': action.datastore_delete,\n 'datastore_search': action.datastore_search,\n 'datastore_info': action.datastore_info,\n 'datastore_function_create': action.datastore_function_create,\n 'datastore_function_delete': action.datastore_function_delete,\n 'datastore_run_triggers': action.datastore_run_triggers,\n }\n if getattr(self.backend, 'enable_sql_search', False):\n # Only enable search_sql if the config/backend does not disable it\n actions.update({\n 'datastore_search_sql': action.datastore_search_sql,\n })\n return actions\n\n # IAuthFunctions\n\n def get_auth_functions(self):\n return {\n 'datastore_create': auth.datastore_create,\n 'datastore_upsert': auth.datastore_upsert,\n 'datastore_delete': auth.datastore_delete,\n 'datastore_info': auth.datastore_info,\n 'datastore_search': auth.datastore_search,\n 'datastore_search_sql': auth.datastore_search_sql,\n 'datastore_change_permissions': auth.datastore_change_permissions,\n 'datastore_function_create': auth.datastore_function_create,\n 'datastore_function_delete': auth.datastore_function_delete,\n 'datastore_run_triggers': auth.datastore_run_triggers,\n }\n\n # 
IResourceController\n\n def before_show(self, resource_dict):\n # Modify the resource url of datastore resources so that\n # they link to the datastore dumps.\n if resource_dict.get('url_type') == 'datastore':\n resource_dict['url'] = p.toolkit.url_for(\n 'datastore.dump', resource_id=resource_dict['id'],\n qualified=True)\n\n if 'datastore_active' not in resource_dict:\n resource_dict[u'datastore_active'] = False\n\n return resource_dict\n\n def after_delete(self, context, resources):\n model = context['model']\n pkg = context['package']\n res_query = model.Session.query(model.Resource)\n query = res_query.filter(\n model.Resource.package_id == pkg.id,\n model.Resource.state == State.DELETED\n )\n deleted = [\n res for res in query.all()\n if res.extras.get('datastore_active') is True]\n\n for res in deleted:\n self.backend.delete(context, {\n 'resource_id': res.id,\n })\n res.extras['datastore_active'] = False\n res_query.filter_by(id=res.id).update(\n {'extras': res.extras}, synchronize_session=False)\n\n # IDatastore\n\n def datastore_validate(self, context, data_dict, fields_types):\n column_names = list(fields_types.keys())\n\n filters = data_dict.get('filters', {})\n for key in list(filters.keys()):\n if key in fields_types:\n del filters[key]\n\n q = data_dict.get('q')\n if q:\n if isinstance(q, string_types):\n del data_dict['q']\n column_names.append(u'rank')\n elif isinstance(q, dict):\n for key in list(q.keys()):\n if key in fields_types and isinstance(q[key],\n string_types):\n column_names.append(u'rank ' + key)\n del q[key]\n\n fields = data_dict.get('fields')\n if fields:\n data_dict['fields'] = list(set(fields) - set(column_names))\n\n language = data_dict.get('language')\n if language:\n if isinstance(language, string_types):\n del data_dict['language']\n\n plain = data_dict.get('plain')\n if plain:\n if isinstance(plain, bool):\n del data_dict['plain']\n\n distinct = data_dict.get('distinct')\n if distinct:\n if isinstance(distinct, bool):\n del data_dict['distinct']\n\n sort_clauses = data_dict.get('sort')\n if sort_clauses:\n invalid_clauses = [\n c for c in sort_clauses\n if not _parse_sort_clause(\n c, fields_types\n )\n ]\n data_dict['sort'] = invalid_clauses\n\n limit = data_dict.get('limit')\n if limit:\n is_positive_int = datastore_helpers.validate_int(limit,\n non_negative=True)\n is_all = isinstance(limit, string_types) and limit.lower() == 'all'\n if is_positive_int or is_all:\n del data_dict['limit']\n\n offset = data_dict.get('offset')\n if offset:\n is_positive_int = datastore_helpers.validate_int(offset,\n non_negative=True)\n if is_positive_int:\n del data_dict['offset']\n\n return data_dict\n\n def datastore_delete(self, context, data_dict, fields_types, query_dict):\n hook = getattr(self.backend, 'datastore_delete', None)\n if hook:\n query_dict = hook(context, data_dict, fields_types, query_dict)\n return query_dict\n\n def datastore_search(self, context, data_dict, fields_types, query_dict):\n hook = getattr(self.backend, 'datastore_search', None)\n if hook:\n query_dict = hook(context, data_dict, fields_types, query_dict)\n return query_dict\n\n def get_helpers(self):\n return {\n 'datastore_dictionary': datastore_helpers.datastore_dictionary}\n\n # IForkObserver\n\n def before_fork(self):\n try:\n before_fork = self.backend.before_fork\n except AttributeError:\n pass\n else:\n before_fork()\n\n # IBlueprint\n\n def get_blueprint(self):\n u'''Return a Flask Blueprint object to be registered by the app.'''\n\n return view.datastore\n", "path": 
"ckanext/datastore/plugin.py"}], "after_files": [{"content": "# encoding: utf-8\n\nimport logging\n\nfrom six import string_types\n\nimport ckan.plugins as p\nimport ckan.logic as logic\nimport ckan.model as model\nfrom ckan.model.core import State\n\nimport ckanext.datastore.helpers as datastore_helpers\nimport ckanext.datastore.logic.action as action\nimport ckanext.datastore.logic.auth as auth\nimport ckanext.datastore.interfaces as interfaces\nfrom ckanext.datastore.backend import (\n DatastoreException,\n _parse_sort_clause,\n DatastoreBackend\n)\nfrom ckanext.datastore.backend.postgres import DatastorePostgresqlBackend\nimport ckanext.datastore.blueprint as view\n\nlog = logging.getLogger(__name__)\n_get_or_bust = logic.get_or_bust\n\nDEFAULT_FORMATS = []\n\nValidationError = p.toolkit.ValidationError\n\n\nclass DatastorePlugin(p.SingletonPlugin):\n p.implements(p.IConfigurable, inherit=True)\n p.implements(p.IConfigurer)\n p.implements(p.IActions)\n p.implements(p.IAuthFunctions)\n p.implements(p.IRoutes, inherit=True)\n p.implements(p.IResourceController, inherit=True)\n p.implements(p.ITemplateHelpers)\n p.implements(p.IForkObserver, inherit=True)\n p.implements(interfaces.IDatastore, inherit=True)\n p.implements(interfaces.IDatastoreBackend, inherit=True)\n p.implements(p.IBlueprint)\n\n resource_show_action = None\n\n def __new__(cls, *args, **kwargs):\n idatastore_extensions = p.PluginImplementations(interfaces.IDatastore)\n idatastore_extensions = idatastore_extensions.extensions()\n\n if idatastore_extensions and idatastore_extensions[0].__class__ != cls:\n msg = ('The \"datastore\" plugin must be the first IDatastore '\n 'plugin loaded. Change the order it is loaded in '\n '\"ckan.plugins\" in your CKAN .ini file and try again.')\n raise DatastoreException(msg)\n\n return super(cls, cls).__new__(cls, *args, **kwargs)\n\n # IDatastoreBackend\n\n def register_backends(self):\n return {\n 'postgresql': DatastorePostgresqlBackend,\n 'postgres': DatastorePostgresqlBackend,\n }\n\n # IConfigurer\n\n def update_config(self, config):\n DatastoreBackend.register_backends()\n DatastoreBackend.set_active_backend(config)\n\n templates_base = config.get('ckan.base_templates_folder')\n\n p.toolkit.add_template_directory(config, templates_base)\n self.backend = DatastoreBackend.get_active_backend()\n\n # IConfigurable\n\n def configure(self, config):\n self.config = config\n self.backend.configure(config)\n\n # IActions\n\n def get_actions(self):\n actions = {\n 'datastore_create': action.datastore_create,\n 'datastore_upsert': action.datastore_upsert,\n 'datastore_delete': action.datastore_delete,\n 'datastore_search': action.datastore_search,\n 'datastore_info': action.datastore_info,\n 'datastore_function_create': action.datastore_function_create,\n 'datastore_function_delete': action.datastore_function_delete,\n 'datastore_run_triggers': action.datastore_run_triggers,\n }\n if getattr(self.backend, 'enable_sql_search', False):\n # Only enable search_sql if the config/backend does not disable it\n actions.update({\n 'datastore_search_sql': action.datastore_search_sql,\n })\n return actions\n\n # IAuthFunctions\n\n def get_auth_functions(self):\n return {\n 'datastore_create': auth.datastore_create,\n 'datastore_upsert': auth.datastore_upsert,\n 'datastore_delete': auth.datastore_delete,\n 'datastore_info': auth.datastore_info,\n 'datastore_search': auth.datastore_search,\n 'datastore_search_sql': auth.datastore_search_sql,\n 'datastore_change_permissions': 
auth.datastore_change_permissions,\n 'datastore_function_create': auth.datastore_function_create,\n 'datastore_function_delete': auth.datastore_function_delete,\n 'datastore_run_triggers': auth.datastore_run_triggers,\n }\n\n # IResourceController\n\n def before_show(self, resource_dict):\n # Modify the resource url of datastore resources so that\n # they link to the datastore dumps.\n if resource_dict.get('url_type') == 'datastore':\n resource_dict['url'] = p.toolkit.url_for(\n 'datastore.dump', resource_id=resource_dict['id'],\n qualified=True)\n\n if 'datastore_active' not in resource_dict:\n resource_dict[u'datastore_active'] = False\n\n return resource_dict\n\n def after_delete(self, context, resources):\n model = context['model']\n pkg = context['package']\n res_query = model.Session.query(model.Resource)\n query = res_query.filter(\n model.Resource.package_id == pkg.id,\n model.Resource.state == State.DELETED\n )\n deleted = [\n res for res in query.all()\n if res.extras.get('datastore_active') is True]\n\n for res in deleted:\n if self.backend.resource_exists(res.id):\n self.backend.delete(context, {\n 'resource_id': res.id,\n })\n res.extras['datastore_active'] = False\n res_query.filter_by(id=res.id).update(\n {'extras': res.extras}, synchronize_session=False)\n\n # IDatastore\n\n def datastore_validate(self, context, data_dict, fields_types):\n column_names = list(fields_types.keys())\n\n filters = data_dict.get('filters', {})\n for key in list(filters.keys()):\n if key in fields_types:\n del filters[key]\n\n q = data_dict.get('q')\n if q:\n if isinstance(q, string_types):\n del data_dict['q']\n column_names.append(u'rank')\n elif isinstance(q, dict):\n for key in list(q.keys()):\n if key in fields_types and isinstance(q[key],\n string_types):\n column_names.append(u'rank ' + key)\n del q[key]\n\n fields = data_dict.get('fields')\n if fields:\n data_dict['fields'] = list(set(fields) - set(column_names))\n\n language = data_dict.get('language')\n if language:\n if isinstance(language, string_types):\n del data_dict['language']\n\n plain = data_dict.get('plain')\n if plain:\n if isinstance(plain, bool):\n del data_dict['plain']\n\n distinct = data_dict.get('distinct')\n if distinct:\n if isinstance(distinct, bool):\n del data_dict['distinct']\n\n sort_clauses = data_dict.get('sort')\n if sort_clauses:\n invalid_clauses = [\n c for c in sort_clauses\n if not _parse_sort_clause(\n c, fields_types\n )\n ]\n data_dict['sort'] = invalid_clauses\n\n limit = data_dict.get('limit')\n if limit:\n is_positive_int = datastore_helpers.validate_int(limit,\n non_negative=True)\n is_all = isinstance(limit, string_types) and limit.lower() == 'all'\n if is_positive_int or is_all:\n del data_dict['limit']\n\n offset = data_dict.get('offset')\n if offset:\n is_positive_int = datastore_helpers.validate_int(offset,\n non_negative=True)\n if is_positive_int:\n del data_dict['offset']\n\n return data_dict\n\n def datastore_delete(self, context, data_dict, fields_types, query_dict):\n hook = getattr(self.backend, 'datastore_delete', None)\n if hook:\n query_dict = hook(context, data_dict, fields_types, query_dict)\n return query_dict\n\n def datastore_search(self, context, data_dict, fields_types, query_dict):\n hook = getattr(self.backend, 'datastore_search', None)\n if hook:\n query_dict = hook(context, data_dict, fields_types, query_dict)\n return query_dict\n\n def get_helpers(self):\n return {\n 'datastore_dictionary': datastore_helpers.datastore_dictionary}\n\n # IForkObserver\n\n def 
before_fork(self):\n try:\n before_fork = self.backend.before_fork\n except AttributeError:\n pass\n else:\n before_fork()\n\n # IBlueprint\n\n def get_blueprint(self):\n u'''Return a Flask Blueprint object to be registered by the app.'''\n\n return view.datastore\n", "path": "ckanext/datastore/plugin.py"}]}
| 3,457 | 165 |
gh_patches_debug_38291
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-1674
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`cupy.linalg.solve` returns wrong output for complex inputs
```
>>> a
array([[ 1.03310883-0.87701077j, -0.35775642-0.94918977j,
2.63837642+0.29920745j],
[-0.88041189+0.97322198j, 0.26771522-0.56788898j,
-0.47020123+0.63015159j],
[-0.05746688-0.16238003j, -0.18490837-0.00778188j,
1.61550173+1.13397892j]])
>>> b
array([ 0.92632861+1.45552766j, -0.54307601-0.62108012j,
-0.23769241+2.91680425j])
>>> cupy.linalg.solve(a, b)
array([ 0.07332535+0.27916782j, -1.61809337-0.62108012j,
-0.23769241+2.91680425j])
>>> numpy.linalg.solve(cupy.asnumpy(a), cupy.asnumpy(b))
array([0.04332182-0.34957548j, 1.6260605 -0.2383031j ,
0.88753974+1.15498982j])
>>> cupy.__version__
'5.0.0b4'
```
#1524 would fix the issue.
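The failure mode is that `_solve` only has `float32`/`float64` branches, so a complex system is pushed through the real double-precision QR routines and, in particular, applies `Q^T` where a complex solve needs the conjugate transpose `Q^H`. A minimal NumPy sketch of that distinction (illustrative only; CuPy's code path calls cuSOLVER/cuBLAS):
```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
b = rng.standard_normal(3) + 1j * rng.standard_normal(3)

q, r = np.linalg.qr(a)                      # a = q @ r with q unitary

x_ok = np.linalg.solve(r, q.conj().T @ b)   # R x = Q^H b  -> correct
x_bad = np.linalg.solve(r, q.T @ b)         # R x = Q^T b  -> wrong for complex a

print(np.allclose(a @ x_ok, b))   # True
print(np.allclose(a @ x_bad, b))  # False
```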
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/linalg/solve.py`
Content:
```
1 import numpy
2 from numpy import linalg
3 import six
4
5 import cupy
6 from cupy.core import core
7 from cupy import cuda
8 from cupy.cuda import cublas
9 from cupy.cuda import device
10 from cupy.linalg import decomposition
11 from cupy.linalg import util
12
13 if cuda.cusolver_enabled:
14 from cupy.cuda import cusolver
15
16
17 def solve(a, b):
18 """Solves a linear matrix equation.
19
20 It computes the exact solution of ``x`` in ``ax = b``,
21 where ``a`` is a square and full rank matrix.
22
23 Args:
24 a (cupy.ndarray): The matrix with dimension ``(..., M, M)``.
25 b (cupy.ndarray): The matrix with dimension ``(...,M)`` or
26 ``(..., M, K)``.
27
28 Returns:
29 cupy.ndarray:
30 The matrix with dimension ``(..., M)`` or ``(..., M, K)``.
31
32 .. seealso:: :func:`numpy.linalg.solve`
33 """
34 # NOTE: Since cusolver in CUDA 8.0 does not support gesv,
35 # we manually solve a linear system with QR decomposition.
36 # For details, please see the following:
37 # https://docs.nvidia.com/cuda/cusolver/index.html#qr_examples
38 if not cuda.cusolver_enabled:
39 raise RuntimeError('Current cupy only supports cusolver in CUDA 8.0')
40
41 util._assert_cupy_array(a, b)
42 util._assert_nd_squareness(a)
43
44 if not ((a.ndim == b.ndim or a.ndim == b.ndim + 1) and
45 a.shape[:-1] == b.shape[:a.ndim - 1]):
46 raise ValueError(
47 'a must have (..., M, M) shape and b must have (..., M) '
48 'or (..., M, K)')
49
50 # Cast to float32 or float64
51 if a.dtype.char == 'f' or a.dtype.char == 'd':
52 dtype = a.dtype
53 else:
54 dtype = numpy.find_common_type((a.dtype.char, 'f'), ())
55
56 a = a.astype(dtype)
57 b = b.astype(dtype)
58 if a.ndim == 2:
59 return _solve(a, b)
60 x = cupy.empty_like(b)
61 shape = a.shape[:-2]
62 for i in six.moves.range(numpy.prod(shape)):
63 index = numpy.unravel_index(i, shape)
64 x[index] = _solve(a[index], b[index])
65 return x
66
67
68 def _solve(a, b):
69 a = cupy.asfortranarray(a)
70 b = cupy.asfortranarray(b)
71 dtype = a.dtype
72 m, k = (b.size, 1) if b.ndim == 1 else b.shape
73 cusolver_handle = device.get_cusolver_handle()
74 cublas_handle = device.get_cublas_handle()
75 dev_info = cupy.empty(1, dtype=numpy.int32)
76
77 if dtype == 'f':
78 geqrf = cusolver.sgeqrf
79 geqrf_bufferSize = cusolver.sgeqrf_bufferSize
80 ormqr = cusolver.sormqr
81 trsm = cublas.strsm
82 else: # dtype == 'd'
83 geqrf = cusolver.dgeqrf
84 geqrf_bufferSize = cusolver.dgeqrf_bufferSize
85 ormqr = cusolver.dormqr
86 trsm = cublas.dtrsm
87
88 # 1. QR decomposition (A = Q * R)
89 buffersize = geqrf_bufferSize(cusolver_handle, m, m, a.data.ptr, m)
90 workspace = cupy.empty(buffersize, dtype=dtype)
91 tau = cupy.empty(m, dtype=dtype)
92 geqrf(
93 cusolver_handle, m, m, a.data.ptr, m,
94 tau.data.ptr, workspace.data.ptr, buffersize, dev_info.data.ptr)
95 _check_status(dev_info)
96 # 2. ormqr (Q^T * B)
97 ormqr(
98 cusolver_handle, cublas.CUBLAS_SIDE_LEFT, cublas.CUBLAS_OP_T,
99 m, k, m, a.data.ptr, m, tau.data.ptr, b.data.ptr, m,
100 workspace.data.ptr, buffersize, dev_info.data.ptr)
101 _check_status(dev_info)
102 # 3. trsm (X = R^{-1} * (Q^T * B))
103 trsm(
104 cublas_handle, cublas.CUBLAS_SIDE_LEFT, cublas.CUBLAS_FILL_MODE_UPPER,
105 cublas.CUBLAS_OP_N, cublas.CUBLAS_DIAG_NON_UNIT,
106 m, k, 1, a.data.ptr, m, b.data.ptr, m)
107 return b
108
109
110 def _check_status(dev_info):
111 status = int(dev_info)
112 if status < 0:
113 raise linalg.LinAlgError(
114 'Parameter error (maybe caused by a bug in cupy.linalg?)')
115
116
117 def tensorsolve(a, b, axes=None):
118 """Solves tensor equations denoted by ``ax = b``.
119
120 Suppose that ``b`` is equivalent to ``cupy.tensordot(a, x)``.
121 This function computes tensor ``x`` from ``a`` and ``b``.
122
123 Args:
124 a (cupy.ndarray): The tensor with ``len(shape) >= 1``
125 b (cupy.ndarray): The tensor with ``len(shape) >= 1``
126 axes (tuple of ints): Axes in ``a`` to reorder to the right
127 before inversion.
128
129 Returns:
130 cupy.ndarray:
131 The tensor with shape ``Q`` such that ``b.shape + Q == a.shape``.
132
133 .. seealso:: :func:`numpy.linalg.tensorsolve`
134 """
135 if axes is not None:
136 allaxes = list(six.moves.range(a.ndim))
137 for k in axes:
138 allaxes.remove(k)
139 allaxes.insert(a.ndim, k)
140 a = a.transpose(allaxes)
141
142 oldshape = a.shape[-(a.ndim - b.ndim):]
143 prod = cupy.internal.prod(oldshape)
144
145 a = a.reshape(-1, prod)
146 b = b.ravel()
147 result = solve(a, b)
148 return result.reshape(oldshape)
149
150
151 # TODO(okuta): Implement lstsq
152
153
154 def inv(a):
155 """Computes the inverse of a matrix.
156
157 This function computes matrix ``a_inv`` from n-dimensional regular matrix
158 ``a`` such that ``dot(a, a_inv) == eye(n)``.
159
160 Args:
161 a (cupy.ndarray): The regular matrix
162
163 Returns:
164 cupy.ndarray: The inverse of a matrix.
165
166 .. seealso:: :func:`numpy.linalg.inv`
167 """
168 if not cuda.cusolver_enabled:
169 raise RuntimeError('Current cupy only supports cusolver in CUDA 8.0')
170
171 # to prevent `a` to be overwritten
172 a = a.copy()
173
174 util._assert_cupy_array(a)
175 util._assert_rank2(a)
176 util._assert_nd_squareness(a)
177
178 if a.dtype.char == 'f' or a.dtype.char == 'd':
179 dtype = a.dtype.char
180 else:
181 dtype = numpy.find_common_type((a.dtype.char, 'f'), ()).char
182
183 cusolver_handle = device.get_cusolver_handle()
184 dev_info = cupy.empty(1, dtype=dtype)
185
186 ipiv = cupy.empty((a.shape[0], 1), dtype=dtype)
187
188 if dtype == 'f':
189 getrf = cusolver.sgetrf
190 getrf_bufferSize = cusolver.sgetrf_bufferSize
191 getrs = cusolver.sgetrs
192 else: # dtype == 'd'
193 getrf = cusolver.dgetrf
194 getrf_bufferSize = cusolver.dgetrf_bufferSize
195 getrs = cusolver.dgetrs
196
197 m = a.shape[0]
198
199 buffersize = getrf_bufferSize(cusolver_handle, m, m, a.data.ptr, m)
200 workspace = cupy.empty(buffersize, dtype=dtype)
201
202 # LU factorization
203 getrf(cusolver_handle, m, m, a.data.ptr, m, workspace.data.ptr,
204 ipiv.data.ptr, dev_info.data.ptr)
205
206 b = cupy.eye(m, dtype=dtype)
207
208 # solve for the inverse
209 getrs(cusolver_handle, 0, m, m, a.data.ptr, m, ipiv.data.ptr, b.data.ptr,
210 m, dev_info.data.ptr)
211
212 return b
213
214
215 def pinv(a, rcond=1e-15):
216 """Compute the Moore-Penrose pseudoinverse of a matrix.
217
218 It computes a pseudoinverse of a matrix ``a``, which is a generalization
219 of the inverse matrix with Singular Value Decomposition (SVD).
220 Note that it automatically removes small singular values for stability.
221
222 Args:
223 a (cupy.ndarray): The matrix with dimension ``(M, N)``
224 rcond (float): Cutoff parameter for small singular values.
225 For stability it computes the largest singular value denoted by
226 ``s``, and sets all singular values smaller than ``s`` to zero.
227
228 Returns:
229 cupy.ndarray: The pseudoinverse of ``a`` with dimension ``(N, M)``.
230
231 .. seealso:: :func:`numpy.linalg.pinv`
232 """
233 u, s, vt = decomposition.svd(a, full_matrices=False)
234 cutoff = rcond * s.max()
235 s1 = 1 / s
236 s1[s <= cutoff] = 0
237 return core.dot(vt.T, s1[:, None] * u.T)
238
239
240 def tensorinv(a, ind=2):
241 """Computes the inverse of a tensor.
242
243 This function computes tensor ``a_inv`` from tensor ``a`` such that
244 ``tensordot(a_inv, a, ind) == I``, where ``I`` denotes the identity tensor.
245
246 Args:
247 a (cupy.ndarray):
248 The tensor such that
249 ``prod(a.shape[:ind]) == prod(a.shape[ind:])``.
250 ind (int):
251 The positive number used in ``axes`` option of ``tensordot``.
252
253 Returns:
254 cupy.ndarray:
255 The inverse of a tensor whose shape is equivalent to
256 ``a.shape[ind:] + a.shape[:ind]``.
257
258 .. seealso:: :func:`numpy.linalg.tensorinv`
259 """
260 util._assert_cupy_array(a)
261
262 if ind <= 0:
263 raise ValueError('Invalid ind argument')
264 oldshape = a.shape
265 invshape = oldshape[ind:] + oldshape[:ind]
266 prod = cupy.internal.prod(oldshape[ind:])
267 a = a.reshape(prod, -1)
268 a_inv = inv(a)
269 return a_inv.reshape(*invshape)
270
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cupy/linalg/solve.py b/cupy/linalg/solve.py
--- a/cupy/linalg/solve.py
+++ b/cupy/linalg/solve.py
@@ -53,37 +53,55 @@
else:
dtype = numpy.find_common_type((a.dtype.char, 'f'), ())
+ cublas_handle = device.get_cublas_handle()
+ cusolver_handle = device.get_cusolver_handle()
+
a = a.astype(dtype)
b = b.astype(dtype)
if a.ndim == 2:
- return _solve(a, b)
+ return _solve(a, b, cublas_handle, cusolver_handle)
+
x = cupy.empty_like(b)
shape = a.shape[:-2]
for i in six.moves.range(numpy.prod(shape)):
index = numpy.unravel_index(i, shape)
- x[index] = _solve(a[index], b[index])
+ x[index] = _solve(a[index], b[index], cublas_handle, cusolver_handle)
return x
-def _solve(a, b):
+def _solve(a, b, cublas_handle, cusolver_handle):
a = cupy.asfortranarray(a)
b = cupy.asfortranarray(b)
dtype = a.dtype
m, k = (b.size, 1) if b.ndim == 1 else b.shape
- cusolver_handle = device.get_cusolver_handle()
- cublas_handle = device.get_cublas_handle()
dev_info = cupy.empty(1, dtype=numpy.int32)
if dtype == 'f':
geqrf = cusolver.sgeqrf
geqrf_bufferSize = cusolver.sgeqrf_bufferSize
ormqr = cusolver.sormqr
+ trans = cublas.CUBLAS_OP_T
trsm = cublas.strsm
- else: # dtype == 'd'
+ elif dtype == 'd':
geqrf = cusolver.dgeqrf
geqrf_bufferSize = cusolver.dgeqrf_bufferSize
ormqr = cusolver.dormqr
+ trans = cublas.CUBLAS_OP_T
trsm = cublas.dtrsm
+ elif dtype == 'F':
+ geqrf = cusolver.cgeqrf
+ geqrf_bufferSize = cusolver.cgeqrf_bufferSize
+ ormqr = cusolver.cormqr
+ trans = cublas.CUBLAS_OP_C
+ trsm = cublas.ctrsm
+ elif dtype == 'D':
+ geqrf = cusolver.zgeqrf
+ geqrf_bufferSize = cusolver.zgeqrf_bufferSize
+ ormqr = cusolver.zormqr
+ trans = cublas.CUBLAS_OP_C
+ trsm = cublas.ztrsm
+ else:
+ raise NotImplementedError(dtype)
# 1. QR decomposition (A = Q * R)
buffersize = geqrf_bufferSize(cusolver_handle, m, m, a.data.ptr, m)
@@ -95,7 +113,7 @@
_check_status(dev_info)
# 2. ormqr (Q^T * B)
ormqr(
- cusolver_handle, cublas.CUBLAS_SIDE_LEFT, cublas.CUBLAS_OP_T,
+ cusolver_handle, cublas.CUBLAS_SIDE_LEFT, trans,
m, k, m, a.data.ptr, m, tau.data.ptr, b.data.ptr, m,
workspace.data.ptr, buffersize, dev_info.data.ptr)
_check_status(dev_info)
|
{"golden_diff": "diff --git a/cupy/linalg/solve.py b/cupy/linalg/solve.py\n--- a/cupy/linalg/solve.py\n+++ b/cupy/linalg/solve.py\n@@ -53,37 +53,55 @@\n else:\n dtype = numpy.find_common_type((a.dtype.char, 'f'), ())\n \n+ cublas_handle = device.get_cublas_handle()\n+ cusolver_handle = device.get_cusolver_handle()\n+\n a = a.astype(dtype)\n b = b.astype(dtype)\n if a.ndim == 2:\n- return _solve(a, b)\n+ return _solve(a, b, cublas_handle, cusolver_handle)\n+\n x = cupy.empty_like(b)\n shape = a.shape[:-2]\n for i in six.moves.range(numpy.prod(shape)):\n index = numpy.unravel_index(i, shape)\n- x[index] = _solve(a[index], b[index])\n+ x[index] = _solve(a[index], b[index], cublas_handle, cusolver_handle)\n return x\n \n \n-def _solve(a, b):\n+def _solve(a, b, cublas_handle, cusolver_handle):\n a = cupy.asfortranarray(a)\n b = cupy.asfortranarray(b)\n dtype = a.dtype\n m, k = (b.size, 1) if b.ndim == 1 else b.shape\n- cusolver_handle = device.get_cusolver_handle()\n- cublas_handle = device.get_cublas_handle()\n dev_info = cupy.empty(1, dtype=numpy.int32)\n \n if dtype == 'f':\n geqrf = cusolver.sgeqrf\n geqrf_bufferSize = cusolver.sgeqrf_bufferSize\n ormqr = cusolver.sormqr\n+ trans = cublas.CUBLAS_OP_T\n trsm = cublas.strsm\n- else: # dtype == 'd'\n+ elif dtype == 'd':\n geqrf = cusolver.dgeqrf\n geqrf_bufferSize = cusolver.dgeqrf_bufferSize\n ormqr = cusolver.dormqr\n+ trans = cublas.CUBLAS_OP_T\n trsm = cublas.dtrsm\n+ elif dtype == 'F':\n+ geqrf = cusolver.cgeqrf\n+ geqrf_bufferSize = cusolver.cgeqrf_bufferSize\n+ ormqr = cusolver.cormqr\n+ trans = cublas.CUBLAS_OP_C\n+ trsm = cublas.ctrsm\n+ elif dtype == 'D':\n+ geqrf = cusolver.zgeqrf\n+ geqrf_bufferSize = cusolver.zgeqrf_bufferSize\n+ ormqr = cusolver.zormqr\n+ trans = cublas.CUBLAS_OP_C\n+ trsm = cublas.ztrsm\n+ else:\n+ raise NotImplementedError(dtype)\n \n # 1. QR decomposition (A = Q * R)\n buffersize = geqrf_bufferSize(cusolver_handle, m, m, a.data.ptr, m)\n@@ -95,7 +113,7 @@\n _check_status(dev_info)\n # 2. 
ormqr (Q^T * B)\n ormqr(\n- cusolver_handle, cublas.CUBLAS_SIDE_LEFT, cublas.CUBLAS_OP_T,\n+ cusolver_handle, cublas.CUBLAS_SIDE_LEFT, trans,\n m, k, m, a.data.ptr, m, tau.data.ptr, b.data.ptr, m,\n workspace.data.ptr, buffersize, dev_info.data.ptr)\n _check_status(dev_info)\n", "issue": "`cupy.linalg.solve` returns wrong output for complex inputs\n```\r\n>>> a\r\narray([[ 1.03310883-0.87701077j, -0.35775642-0.94918977j,\r\n 2.63837642+0.29920745j],\r\n [-0.88041189+0.97322198j, 0.26771522-0.56788898j,\r\n -0.47020123+0.63015159j],\r\n [-0.05746688-0.16238003j, -0.18490837-0.00778188j,\r\n 1.61550173+1.13397892j]])\r\n>>> b\r\narray([ 0.92632861+1.45552766j, -0.54307601-0.62108012j,\r\n -0.23769241+2.91680425j])\r\n>>> cupy.linalg.solve(a, b)\r\narray([ 0.07332535+0.27916782j, -1.61809337-0.62108012j,\r\n -0.23769241+2.91680425j])\r\n>>> numpy.linalg.solve(cupy.asnumpy(a), cupy.asnumpy(b))\r\narray([0.04332182-0.34957548j, 1.6260605 -0.2383031j ,\r\n 0.88753974+1.15498982j])\r\n>>> cupy.__version__\r\n'5.0.0b4'\r\n```\r\n\r\n#1524 would fix the issue.\n", "before_files": [{"content": "import numpy\nfrom numpy import linalg\nimport six\n\nimport cupy\nfrom cupy.core import core\nfrom cupy import cuda\nfrom cupy.cuda import cublas\nfrom cupy.cuda import device\nfrom cupy.linalg import decomposition\nfrom cupy.linalg import util\n\nif cuda.cusolver_enabled:\n from cupy.cuda import cusolver\n\n\ndef solve(a, b):\n \"\"\"Solves a linear matrix equation.\n\n It computes the exact solution of ``x`` in ``ax = b``,\n where ``a`` is a square and full rank matrix.\n\n Args:\n a (cupy.ndarray): The matrix with dimension ``(..., M, M)``.\n b (cupy.ndarray): The matrix with dimension ``(...,M)`` or\n ``(..., M, K)``.\n\n Returns:\n cupy.ndarray:\n The matrix with dimension ``(..., M)`` or ``(..., M, K)``.\n\n .. seealso:: :func:`numpy.linalg.solve`\n \"\"\"\n # NOTE: Since cusolver in CUDA 8.0 does not support gesv,\n # we manually solve a linear system with QR decomposition.\n # For details, please see the following:\n # https://docs.nvidia.com/cuda/cusolver/index.html#qr_examples\n if not cuda.cusolver_enabled:\n raise RuntimeError('Current cupy only supports cusolver in CUDA 8.0')\n\n util._assert_cupy_array(a, b)\n util._assert_nd_squareness(a)\n\n if not ((a.ndim == b.ndim or a.ndim == b.ndim + 1) and\n a.shape[:-1] == b.shape[:a.ndim - 1]):\n raise ValueError(\n 'a must have (..., M, M) shape and b must have (..., M) '\n 'or (..., M, K)')\n\n # Cast to float32 or float64\n if a.dtype.char == 'f' or a.dtype.char == 'd':\n dtype = a.dtype\n else:\n dtype = numpy.find_common_type((a.dtype.char, 'f'), ())\n\n a = a.astype(dtype)\n b = b.astype(dtype)\n if a.ndim == 2:\n return _solve(a, b)\n x = cupy.empty_like(b)\n shape = a.shape[:-2]\n for i in six.moves.range(numpy.prod(shape)):\n index = numpy.unravel_index(i, shape)\n x[index] = _solve(a[index], b[index])\n return x\n\n\ndef _solve(a, b):\n a = cupy.asfortranarray(a)\n b = cupy.asfortranarray(b)\n dtype = a.dtype\n m, k = (b.size, 1) if b.ndim == 1 else b.shape\n cusolver_handle = device.get_cusolver_handle()\n cublas_handle = device.get_cublas_handle()\n dev_info = cupy.empty(1, dtype=numpy.int32)\n\n if dtype == 'f':\n geqrf = cusolver.sgeqrf\n geqrf_bufferSize = cusolver.sgeqrf_bufferSize\n ormqr = cusolver.sormqr\n trsm = cublas.strsm\n else: # dtype == 'd'\n geqrf = cusolver.dgeqrf\n geqrf_bufferSize = cusolver.dgeqrf_bufferSize\n ormqr = cusolver.dormqr\n trsm = cublas.dtrsm\n\n # 1. 
QR decomposition (A = Q * R)\n buffersize = geqrf_bufferSize(cusolver_handle, m, m, a.data.ptr, m)\n workspace = cupy.empty(buffersize, dtype=dtype)\n tau = cupy.empty(m, dtype=dtype)\n geqrf(\n cusolver_handle, m, m, a.data.ptr, m,\n tau.data.ptr, workspace.data.ptr, buffersize, dev_info.data.ptr)\n _check_status(dev_info)\n # 2. ormqr (Q^T * B)\n ormqr(\n cusolver_handle, cublas.CUBLAS_SIDE_LEFT, cublas.CUBLAS_OP_T,\n m, k, m, a.data.ptr, m, tau.data.ptr, b.data.ptr, m,\n workspace.data.ptr, buffersize, dev_info.data.ptr)\n _check_status(dev_info)\n # 3. trsm (X = R^{-1} * (Q^T * B))\n trsm(\n cublas_handle, cublas.CUBLAS_SIDE_LEFT, cublas.CUBLAS_FILL_MODE_UPPER,\n cublas.CUBLAS_OP_N, cublas.CUBLAS_DIAG_NON_UNIT,\n m, k, 1, a.data.ptr, m, b.data.ptr, m)\n return b\n\n\ndef _check_status(dev_info):\n status = int(dev_info)\n if status < 0:\n raise linalg.LinAlgError(\n 'Parameter error (maybe caused by a bug in cupy.linalg?)')\n\n\ndef tensorsolve(a, b, axes=None):\n \"\"\"Solves tensor equations denoted by ``ax = b``.\n\n Suppose that ``b`` is equivalent to ``cupy.tensordot(a, x)``.\n This function computes tensor ``x`` from ``a`` and ``b``.\n\n Args:\n a (cupy.ndarray): The tensor with ``len(shape) >= 1``\n b (cupy.ndarray): The tensor with ``len(shape) >= 1``\n axes (tuple of ints): Axes in ``a`` to reorder to the right\n before inversion.\n\n Returns:\n cupy.ndarray:\n The tensor with shape ``Q`` such that ``b.shape + Q == a.shape``.\n\n .. seealso:: :func:`numpy.linalg.tensorsolve`\n \"\"\"\n if axes is not None:\n allaxes = list(six.moves.range(a.ndim))\n for k in axes:\n allaxes.remove(k)\n allaxes.insert(a.ndim, k)\n a = a.transpose(allaxes)\n\n oldshape = a.shape[-(a.ndim - b.ndim):]\n prod = cupy.internal.prod(oldshape)\n\n a = a.reshape(-1, prod)\n b = b.ravel()\n result = solve(a, b)\n return result.reshape(oldshape)\n\n\n# TODO(okuta): Implement lstsq\n\n\ndef inv(a):\n \"\"\"Computes the inverse of a matrix.\n\n This function computes matrix ``a_inv`` from n-dimensional regular matrix\n ``a`` such that ``dot(a, a_inv) == eye(n)``.\n\n Args:\n a (cupy.ndarray): The regular matrix\n\n Returns:\n cupy.ndarray: The inverse of a matrix.\n\n .. 
seealso:: :func:`numpy.linalg.inv`\n \"\"\"\n if not cuda.cusolver_enabled:\n raise RuntimeError('Current cupy only supports cusolver in CUDA 8.0')\n\n # to prevent `a` to be overwritten\n a = a.copy()\n\n util._assert_cupy_array(a)\n util._assert_rank2(a)\n util._assert_nd_squareness(a)\n\n if a.dtype.char == 'f' or a.dtype.char == 'd':\n dtype = a.dtype.char\n else:\n dtype = numpy.find_common_type((a.dtype.char, 'f'), ()).char\n\n cusolver_handle = device.get_cusolver_handle()\n dev_info = cupy.empty(1, dtype=dtype)\n\n ipiv = cupy.empty((a.shape[0], 1), dtype=dtype)\n\n if dtype == 'f':\n getrf = cusolver.sgetrf\n getrf_bufferSize = cusolver.sgetrf_bufferSize\n getrs = cusolver.sgetrs\n else: # dtype == 'd'\n getrf = cusolver.dgetrf\n getrf_bufferSize = cusolver.dgetrf_bufferSize\n getrs = cusolver.dgetrs\n\n m = a.shape[0]\n\n buffersize = getrf_bufferSize(cusolver_handle, m, m, a.data.ptr, m)\n workspace = cupy.empty(buffersize, dtype=dtype)\n\n # LU factorization\n getrf(cusolver_handle, m, m, a.data.ptr, m, workspace.data.ptr,\n ipiv.data.ptr, dev_info.data.ptr)\n\n b = cupy.eye(m, dtype=dtype)\n\n # solve for the inverse\n getrs(cusolver_handle, 0, m, m, a.data.ptr, m, ipiv.data.ptr, b.data.ptr,\n m, dev_info.data.ptr)\n\n return b\n\n\ndef pinv(a, rcond=1e-15):\n \"\"\"Compute the Moore-Penrose pseudoinverse of a matrix.\n\n It computes a pseudoinverse of a matrix ``a``, which is a generalization\n of the inverse matrix with Singular Value Decomposition (SVD).\n Note that it automatically removes small singular values for stability.\n\n Args:\n a (cupy.ndarray): The matrix with dimension ``(M, N)``\n rcond (float): Cutoff parameter for small singular values.\n For stability it computes the largest singular value denoted by\n ``s``, and sets all singular values smaller than ``s`` to zero.\n\n Returns:\n cupy.ndarray: The pseudoinverse of ``a`` with dimension ``(N, M)``.\n\n .. seealso:: :func:`numpy.linalg.pinv`\n \"\"\"\n u, s, vt = decomposition.svd(a, full_matrices=False)\n cutoff = rcond * s.max()\n s1 = 1 / s\n s1[s <= cutoff] = 0\n return core.dot(vt.T, s1[:, None] * u.T)\n\n\ndef tensorinv(a, ind=2):\n \"\"\"Computes the inverse of a tensor.\n\n This function computes tensor ``a_inv`` from tensor ``a`` such that\n ``tensordot(a_inv, a, ind) == I``, where ``I`` denotes the identity tensor.\n\n Args:\n a (cupy.ndarray):\n The tensor such that\n ``prod(a.shape[:ind]) == prod(a.shape[ind:])``.\n ind (int):\n The positive number used in ``axes`` option of ``tensordot``.\n\n Returns:\n cupy.ndarray:\n The inverse of a tensor whose shape is equivalent to\n ``a.shape[ind:] + a.shape[:ind]``.\n\n .. 
seealso:: :func:`numpy.linalg.tensorinv`\n \"\"\"\n util._assert_cupy_array(a)\n\n if ind <= 0:\n raise ValueError('Invalid ind argument')\n oldshape = a.shape\n invshape = oldshape[ind:] + oldshape[:ind]\n prod = cupy.internal.prod(oldshape[ind:])\n a = a.reshape(prod, -1)\n a_inv = inv(a)\n return a_inv.reshape(*invshape)\n", "path": "cupy/linalg/solve.py"}], "after_files": [{"content": "import numpy\nfrom numpy import linalg\nimport six\n\nimport cupy\nfrom cupy.core import core\nfrom cupy import cuda\nfrom cupy.cuda import cublas\nfrom cupy.cuda import device\nfrom cupy.linalg import decomposition\nfrom cupy.linalg import util\n\nif cuda.cusolver_enabled:\n from cupy.cuda import cusolver\n\n\ndef solve(a, b):\n \"\"\"Solves a linear matrix equation.\n\n It computes the exact solution of ``x`` in ``ax = b``,\n where ``a`` is a square and full rank matrix.\n\n Args:\n a (cupy.ndarray): The matrix with dimension ``(..., M, M)``.\n b (cupy.ndarray): The matrix with dimension ``(...,M)`` or\n ``(..., M, K)``.\n\n Returns:\n cupy.ndarray:\n The matrix with dimension ``(..., M)`` or ``(..., M, K)``.\n\n .. seealso:: :func:`numpy.linalg.solve`\n \"\"\"\n # NOTE: Since cusolver in CUDA 8.0 does not support gesv,\n # we manually solve a linear system with QR decomposition.\n # For details, please see the following:\n # https://docs.nvidia.com/cuda/cusolver/index.html#qr_examples\n if not cuda.cusolver_enabled:\n raise RuntimeError('Current cupy only supports cusolver in CUDA 8.0')\n\n util._assert_cupy_array(a, b)\n util._assert_nd_squareness(a)\n\n if not ((a.ndim == b.ndim or a.ndim == b.ndim + 1) and\n a.shape[:-1] == b.shape[:a.ndim - 1]):\n raise ValueError(\n 'a must have (..., M, M) shape and b must have (..., M) '\n 'or (..., M, K)')\n\n # Cast to float32 or float64\n if a.dtype.char == 'f' or a.dtype.char == 'd':\n dtype = a.dtype\n else:\n dtype = numpy.find_common_type((a.dtype.char, 'f'), ())\n\n cublas_handle = device.get_cublas_handle()\n cusolver_handle = device.get_cusolver_handle()\n\n a = a.astype(dtype)\n b = b.astype(dtype)\n if a.ndim == 2:\n return _solve(a, b, cublas_handle, cusolver_handle)\n\n x = cupy.empty_like(b)\n shape = a.shape[:-2]\n for i in six.moves.range(numpy.prod(shape)):\n index = numpy.unravel_index(i, shape)\n x[index] = _solve(a[index], b[index], cublas_handle, cusolver_handle)\n return x\n\n\ndef _solve(a, b, cublas_handle, cusolver_handle):\n a = cupy.asfortranarray(a)\n b = cupy.asfortranarray(b)\n dtype = a.dtype\n m, k = (b.size, 1) if b.ndim == 1 else b.shape\n dev_info = cupy.empty(1, dtype=numpy.int32)\n\n if dtype == 'f':\n geqrf = cusolver.sgeqrf\n geqrf_bufferSize = cusolver.sgeqrf_bufferSize\n ormqr = cusolver.sormqr\n trans = cublas.CUBLAS_OP_T\n trsm = cublas.strsm\n elif dtype == 'd':\n geqrf = cusolver.dgeqrf\n geqrf_bufferSize = cusolver.dgeqrf_bufferSize\n ormqr = cusolver.dormqr\n trans = cublas.CUBLAS_OP_T\n trsm = cublas.dtrsm\n elif dtype == 'F':\n geqrf = cusolver.cgeqrf\n geqrf_bufferSize = cusolver.cgeqrf_bufferSize\n ormqr = cusolver.cormqr\n trans = cublas.CUBLAS_OP_C\n trsm = cublas.ctrsm\n elif dtype == 'D':\n geqrf = cusolver.zgeqrf\n geqrf_bufferSize = cusolver.zgeqrf_bufferSize\n ormqr = cusolver.zormqr\n trans = cublas.CUBLAS_OP_C\n trsm = cublas.ztrsm\n else:\n raise NotImplementedError(dtype)\n\n # 1. 
QR decomposition (A = Q * R)\n buffersize = geqrf_bufferSize(cusolver_handle, m, m, a.data.ptr, m)\n workspace = cupy.empty(buffersize, dtype=dtype)\n tau = cupy.empty(m, dtype=dtype)\n geqrf(\n cusolver_handle, m, m, a.data.ptr, m,\n tau.data.ptr, workspace.data.ptr, buffersize, dev_info.data.ptr)\n _check_status(dev_info)\n # 2. ormqr (Q^T * B)\n ormqr(\n cusolver_handle, cublas.CUBLAS_SIDE_LEFT, trans,\n m, k, m, a.data.ptr, m, tau.data.ptr, b.data.ptr, m,\n workspace.data.ptr, buffersize, dev_info.data.ptr)\n _check_status(dev_info)\n # 3. trsm (X = R^{-1} * (Q^T * B))\n trsm(\n cublas_handle, cublas.CUBLAS_SIDE_LEFT, cublas.CUBLAS_FILL_MODE_UPPER,\n cublas.CUBLAS_OP_N, cublas.CUBLAS_DIAG_NON_UNIT,\n m, k, 1, a.data.ptr, m, b.data.ptr, m)\n return b\n\n\ndef _check_status(dev_info):\n status = int(dev_info)\n if status < 0:\n raise linalg.LinAlgError(\n 'Parameter error (maybe caused by a bug in cupy.linalg?)')\n\n\ndef tensorsolve(a, b, axes=None):\n \"\"\"Solves tensor equations denoted by ``ax = b``.\n\n Suppose that ``b`` is equivalent to ``cupy.tensordot(a, x)``.\n This function computes tensor ``x`` from ``a`` and ``b``.\n\n Args:\n a (cupy.ndarray): The tensor with ``len(shape) >= 1``\n b (cupy.ndarray): The tensor with ``len(shape) >= 1``\n axes (tuple of ints): Axes in ``a`` to reorder to the right\n before inversion.\n\n Returns:\n cupy.ndarray:\n The tensor with shape ``Q`` such that ``b.shape + Q == a.shape``.\n\n .. seealso:: :func:`numpy.linalg.tensorsolve`\n \"\"\"\n if axes is not None:\n allaxes = list(six.moves.range(a.ndim))\n for k in axes:\n allaxes.remove(k)\n allaxes.insert(a.ndim, k)\n a = a.transpose(allaxes)\n\n oldshape = a.shape[-(a.ndim - b.ndim):]\n prod = cupy.internal.prod(oldshape)\n\n a = a.reshape(-1, prod)\n b = b.ravel()\n result = solve(a, b)\n return result.reshape(oldshape)\n\n\n# TODO(okuta): Implement lstsq\n\n\ndef inv(a):\n \"\"\"Computes the inverse of a matrix.\n\n This function computes matrix ``a_inv`` from n-dimensional regular matrix\n ``a`` such that ``dot(a, a_inv) == eye(n)``.\n\n Args:\n a (cupy.ndarray): The regular matrix\n\n Returns:\n cupy.ndarray: The inverse of a matrix.\n\n .. 
seealso:: :func:`numpy.linalg.inv`\n \"\"\"\n if not cuda.cusolver_enabled:\n raise RuntimeError('Current cupy only supports cusolver in CUDA 8.0')\n\n # to prevent `a` to be overwritten\n a = a.copy()\n\n util._assert_cupy_array(a)\n util._assert_rank2(a)\n util._assert_nd_squareness(a)\n\n if a.dtype.char == 'f' or a.dtype.char == 'd':\n dtype = a.dtype.char\n else:\n dtype = numpy.find_common_type((a.dtype.char, 'f'), ()).char\n\n cusolver_handle = device.get_cusolver_handle()\n dev_info = cupy.empty(1, dtype=dtype)\n\n ipiv = cupy.empty((a.shape[0], 1), dtype=dtype)\n\n if dtype == 'f':\n getrf = cusolver.sgetrf\n getrf_bufferSize = cusolver.sgetrf_bufferSize\n getrs = cusolver.sgetrs\n else: # dtype == 'd'\n getrf = cusolver.dgetrf\n getrf_bufferSize = cusolver.dgetrf_bufferSize\n getrs = cusolver.dgetrs\n\n m = a.shape[0]\n\n buffersize = getrf_bufferSize(cusolver_handle, m, m, a.data.ptr, m)\n workspace = cupy.empty(buffersize, dtype=dtype)\n\n # LU factorization\n getrf(cusolver_handle, m, m, a.data.ptr, m, workspace.data.ptr,\n ipiv.data.ptr, dev_info.data.ptr)\n\n b = cupy.eye(m, dtype=dtype)\n\n # solve for the inverse\n getrs(cusolver_handle, 0, m, m, a.data.ptr, m, ipiv.data.ptr, b.data.ptr,\n m, dev_info.data.ptr)\n\n return b\n\n\ndef pinv(a, rcond=1e-15):\n \"\"\"Compute the Moore-Penrose pseudoinverse of a matrix.\n\n It computes a pseudoinverse of a matrix ``a``, which is a generalization\n of the inverse matrix with Singular Value Decomposition (SVD).\n Note that it automatically removes small singular values for stability.\n\n Args:\n a (cupy.ndarray): The matrix with dimension ``(M, N)``\n rcond (float): Cutoff parameter for small singular values.\n For stability it computes the largest singular value denoted by\n ``s``, and sets all singular values smaller than ``s`` to zero.\n\n Returns:\n cupy.ndarray: The pseudoinverse of ``a`` with dimension ``(N, M)``.\n\n .. seealso:: :func:`numpy.linalg.pinv`\n \"\"\"\n u, s, vt = decomposition.svd(a, full_matrices=False)\n cutoff = rcond * s.max()\n s1 = 1 / s\n s1[s <= cutoff] = 0\n return core.dot(vt.T, s1[:, None] * u.T)\n\n\ndef tensorinv(a, ind=2):\n \"\"\"Computes the inverse of a tensor.\n\n This function computes tensor ``a_inv`` from tensor ``a`` such that\n ``tensordot(a_inv, a, ind) == I``, where ``I`` denotes the identity tensor.\n\n Args:\n a (cupy.ndarray):\n The tensor such that\n ``prod(a.shape[:ind]) == prod(a.shape[ind:])``.\n ind (int):\n The positive number used in ``axes`` option of ``tensordot``.\n\n Returns:\n cupy.ndarray:\n The inverse of a tensor whose shape is equivalent to\n ``a.shape[ind:] + a.shape[:ind]``.\n\n .. seealso:: :func:`numpy.linalg.tensorinv`\n \"\"\"\n util._assert_cupy_array(a)\n\n if ind <= 0:\n raise ValueError('Invalid ind argument')\n oldshape = a.shape\n invshape = oldshape[ind:] + oldshape[:ind]\n prod = cupy.internal.prod(oldshape[ind:])\n a = a.reshape(prod, -1)\n a_inv = inv(a)\n return a_inv.reshape(*invshape)\n", "path": "cupy/linalg/solve.py"}]}
| 3,865 | 805 |
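The cupy diff above routes the complex dtypes ('F', 'D') to the cgeqrf/zgeqrf, cormqr/zormqr and ctrsm/ztrsm paths and switches the ormqr transpose flag to CUBLAS_OP_C, since the QR-based solve needs Q^H * b rather than Q^T * b for complex matrices. A minimal cross-check of that behaviour, assuming a CUDA device and a cupy build that already includes the fix, might look like this sketch:

```python
# Cross-check cupy.linalg.solve against numpy.linalg.solve on a random
# complex system; assumes cupy is installed with a working CUDA device.
import numpy
import cupy


def check_complex_solve(n=3, seed=0):
    rng = numpy.random.RandomState(seed)
    # Build a random complex system Ax = b on the host.
    a = rng.randn(n, n) + 1j * rng.randn(n, n)
    b = rng.randn(n) + 1j * rng.randn(n)
    expected = numpy.linalg.solve(a, b)
    # Solve the same system on the GPU and compare against the CPU result.
    got = cupy.linalg.solve(cupy.asarray(a), cupy.asarray(b))
    return numpy.allclose(cupy.asnumpy(got), expected)


if __name__ == "__main__":
    # Prints True once complex dtypes are dispatched correctly.
    print(check_complex_solve())
```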
gh_patches_debug_36976
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-18977
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
chore: override item removal methods in tracking
Based on the TODO comments in keras/keras/utils/tracking.py
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras/utils/tracking.py`
Content:
```
1 from functools import wraps
2
3 from keras.backend.common.global_state import get_global_attribute
4 from keras.backend.common.global_state import set_global_attribute
5 from keras.utils import python_utils
6
7
8 class DotNotTrackScope:
9 def __enter__(self):
10 self.original_value = is_tracking_enabled()
11 set_global_attribute("tracking_on", False)
12
13 def __exit__(self, *args, **kwargs):
14 set_global_attribute("tracking_on", self.original_value)
15
16
17 def is_tracking_enabled():
18 return get_global_attribute("tracking_on", True)
19
20
21 def no_automatic_dependency_tracking(fn):
22 @wraps(fn)
23 def wrapper(*args, **kwargs):
24 with DotNotTrackScope():
25 return fn(*args, **kwargs)
26
27 return wrapper
28
29
30 class Tracker:
31 """Attribute tracker, used for e.g. Variable tracking.
32
33 Monitors certain attribute types
34 and put them in appropriate lists in case of a match.
35
36 Also passively tracks certain mutable collections
37 (dict, list) so that items added to them later
38 still get tracked. This is done by wrapping these
39 collections into an equivalent, tracking-aware object.
40
41 Usage:
42
43 ```python
44 def __init__(self):
45 self.tracker = Tracker(
46 # Format: `name: (test_fn, store)`
47 {
48 "variables":
49 (lambda x: isinstance(x, Variable), self._variables),
50 "metrics": (lambda x: isinstance(x, Metric), self._metrics),
51 "layers": (lambda x: isinstance(x, Layer), self._layers),
52 }
53 )
54
55 def __setattr__(self, name, value):
56 if hasattr(self, "_tracker"):
57 value = self._tracker.track(value)
58 return super().__setattr__(name, value)
59 ```
60 """
61
62 def __init__(self, config):
63 self.config = config
64 self.stored_ids = {name: set() for name in self.config.keys()}
65 self.locked = False
66 self._lock_violation_msg = None
67
68 def track(self, attr):
69 if not is_tracking_enabled():
70 return attr
71
72 for store_name, (is_attr_type, _) in self.config.items():
73 if is_attr_type(attr):
74 if id(attr) not in self.stored_ids[store_name]:
75 self.add_to_store(store_name, attr)
76 return attr
77 if isinstance(attr, tuple):
78 wrapped_attr = []
79 for e in attr:
80 wrapped_attr.append(self.track(e))
81 # This should cover tuples and nametuples
82 return attr.__class__(wrapped_attr)
83 elif isinstance(attr, list):
84 return TrackedList(attr, self)
85 elif isinstance(attr, dict):
86 # TODO: OrderedDict?
87 return TrackedDict(attr, self)
88 elif isinstance(attr, set):
89 return TrackedSet(attr, self)
90 return attr
91
92 def untrack(self, value):
93 for store_name in self.stored_ids.keys():
94 if id(value) in self.stored_ids[store_name]:
95 self.stored_ids[store_name].remove(id(value))
96 python_utils.remove_by_id(self.config[store_name][1], value)
97
98 def lock(self, msg):
99 self.locked = True
100 self._lock_violation_msg = msg
101
102 def add_to_store(self, store_name, value):
103 if self.locked:
104 raise ValueError(self._lock_violation_msg)
105 self.config[store_name][1].append(value)
106 self.stored_ids[store_name].add(id(value))
107
108
109 class TrackedList(list):
110 # TODO: override item removal methods?
111 def __init__(self, values=None, tracker=None):
112 self.tracker = tracker
113 if tracker and values:
114 values = [tracker.track(v) for v in values]
115 super().__init__(values or [])
116
117 def append(self, value):
118 if self.tracker:
119 self.tracker.track(value)
120 super().append(value)
121
122 def insert(self, value):
123 if self.tracker:
124 self.tracker.track(value)
125 super().insert(value)
126
127 def extend(self, values):
128 if self.tracker:
129 values = [self.tracker.track(v) for v in values]
130 super().extend(values)
131
132 def remove(self, value):
133 if self.tracker:
134 self.tracker.untrack(value)
135 try:
136 super().remove(value)
137 except ValueError:
138 python_utils.remove_by_id(self, value)
139
140
141 class TrackedDict(dict):
142 # TODO: override item removal methods?
143 def __init__(self, values=None, tracker=None):
144 self.tracker = tracker
145 if tracker and values:
146 values = {k: tracker.track(v) for k, v in values.items()}
147 super().__init__(values or [])
148
149 def __setitem__(self, key, value):
150 if self.tracker:
151 self.tracker.track(value)
152 super().__setitem__(key, value)
153
154 def update(self, mapping):
155 if self.tracker:
156 mapping = {k: self.tracker.track(v) for k, v in mapping.items()}
157 super().update(mapping)
158
159
160 class TrackedSet(set):
161 # TODO: override item removal methods?
162 def __init__(self, values=None, tracker=None):
163 self.tracker = tracker
164 if tracker and values:
165 values = {tracker.track(v) for v in values}
166 super().__init__(values or [])
167
168 def add(self, value):
169 if self.tracker:
170 self.tracker.track(value)
171 super().add(value)
172
173 def update(self, values):
174 if self.tracker:
175 values = [self.tracker.track(v) for v in values]
176 super().update(values)
177
178 def remove(self, value):
179 if self.tracker:
180 self.tracker.untrack(value)
181 super().remove(value)
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/keras/utils/tracking.py b/keras/utils/tracking.py
--- a/keras/utils/tracking.py
+++ b/keras/utils/tracking.py
@@ -107,7 +107,6 @@
class TrackedList(list):
- # TODO: override item removal methods?
def __init__(self, values=None, tracker=None):
self.tracker = tracker
if tracker and values:
@@ -137,9 +136,28 @@
except ValueError:
python_utils.remove_by_id(self, value)
+ def pop(self, index=-1):
+ if self.tracker:
+ value = self[index]
+ self.tracker.untrack(value)
+ return super().pop(index)
+ else:
+ return super().pop(index)
+
+ def clear(self):
+ if self.tracker:
+ for value in self:
+ self.tracker.untrack(value)
+ super().clear()
+
+ def __delitem__(self, index):
+ value = self[index] # Get value before removing
+ super().__delitem__(index)
+ if self.tracker:
+ self.tracker.untrack(value)
+
class TrackedDict(dict):
- # TODO: override item removal methods?
def __init__(self, values=None, tracker=None):
self.tracker = tracker
if tracker and values:
@@ -156,9 +174,29 @@
mapping = {k: self.tracker.track(v) for k, v in mapping.items()}
super().update(mapping)
+ def pop(self, key, default=None):
+ if self.tracker:
+ value = super().pop(key, default)
+ if value is not default:
+ self.tracker.untrack(value)
+ return value
+ else:
+ return super().pop(key, default)
+
+ def popitem(self):
+ key, value = super().popitem()
+ if self.tracker:
+ self.tracker.untrack(value)
+ return key, value
+
+ def clear(self):
+ if self.tracker:
+ for value in self.values():
+ self.tracker.untrack(value)
+ super().clear()
+
class TrackedSet(set):
- # TODO: override item removal methods?
def __init__(self, values=None, tracker=None):
self.tracker = tracker
if tracker and values:
@@ -179,3 +217,15 @@
if self.tracker:
self.tracker.untrack(value)
super().remove(value)
+
+ def pop(self):
+ value = super().pop()
+ if self.tracker:
+ self.tracker.untrack(value)
+ return value
+
+ def clear(self):
+ if self.tracker:
+ for value in self:
+ self.tracker.untrack(value)
+ super().clear()
|
{"golden_diff": "diff --git a/keras/utils/tracking.py b/keras/utils/tracking.py\n--- a/keras/utils/tracking.py\n+++ b/keras/utils/tracking.py\n@@ -107,7 +107,6 @@\n \n \n class TrackedList(list):\n- # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n@@ -137,9 +136,28 @@\n except ValueError:\n python_utils.remove_by_id(self, value)\n \n+ def pop(self, index=-1):\n+ if self.tracker:\n+ value = self[index]\n+ self.tracker.untrack(value)\n+ return super().pop(index)\n+ else:\n+ return super().pop(index)\n+\n+ def clear(self):\n+ if self.tracker:\n+ for value in self:\n+ self.tracker.untrack(value)\n+ super().clear()\n+\n+ def __delitem__(self, index):\n+ value = self[index] # Get value before removing\n+ super().__delitem__(index)\n+ if self.tracker:\n+ self.tracker.untrack(value)\n+\n \n class TrackedDict(dict):\n- # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n@@ -156,9 +174,29 @@\n mapping = {k: self.tracker.track(v) for k, v in mapping.items()}\n super().update(mapping)\n \n+ def pop(self, key, default=None):\n+ if self.tracker:\n+ value = super().pop(key, default)\n+ if value is not default:\n+ self.tracker.untrack(value)\n+ return value\n+ else:\n+ return super().pop(key, default)\n+\n+ def popitem(self):\n+ key, value = super().popitem()\n+ if self.tracker:\n+ self.tracker.untrack(value)\n+ return key, value\n+\n+ def clear(self):\n+ if self.tracker:\n+ for value in self.values():\n+ self.tracker.untrack(value)\n+ super().clear()\n+\n \n class TrackedSet(set):\n- # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n@@ -179,3 +217,15 @@\n if self.tracker:\n self.tracker.untrack(value)\n super().remove(value)\n+\n+ def pop(self):\n+ value = super().pop()\n+ if self.tracker:\n+ self.tracker.untrack(value)\n+ return value\n+\n+ def clear(self):\n+ if self.tracker:\n+ for value in self:\n+ self.tracker.untrack(value)\n+ super().clear()\n", "issue": "chore: override item removal methods in tracking\nBased on the TODO comments in keras/keras/utils/tracking.py\n", "before_files": [{"content": "from functools import wraps\n\nfrom keras.backend.common.global_state import get_global_attribute\nfrom keras.backend.common.global_state import set_global_attribute\nfrom keras.utils import python_utils\n\n\nclass DotNotTrackScope:\n def __enter__(self):\n self.original_value = is_tracking_enabled()\n set_global_attribute(\"tracking_on\", False)\n\n def __exit__(self, *args, **kwargs):\n set_global_attribute(\"tracking_on\", self.original_value)\n\n\ndef is_tracking_enabled():\n return get_global_attribute(\"tracking_on\", True)\n\n\ndef no_automatic_dependency_tracking(fn):\n @wraps(fn)\n def wrapper(*args, **kwargs):\n with DotNotTrackScope():\n return fn(*args, **kwargs)\n\n return wrapper\n\n\nclass Tracker:\n \"\"\"Attribute tracker, used for e.g. Variable tracking.\n\n Monitors certain attribute types\n and put them in appropriate lists in case of a match.\n\n Also passively tracks certain mutable collections\n (dict, list) so that items added to them later\n still get tracked. 
This is done by wrapping these\n collections into an equivalent, tracking-aware object.\n\n Usage:\n\n ```python\n def __init__(self):\n self.tracker = Tracker(\n # Format: `name: (test_fn, store)`\n {\n \"variables\":\n (lambda x: isinstance(x, Variable), self._variables),\n \"metrics\": (lambda x: isinstance(x, Metric), self._metrics),\n \"layers\": (lambda x: isinstance(x, Layer), self._layers),\n }\n )\n\n def __setattr__(self, name, value):\n if hasattr(self, \"_tracker\"):\n value = self._tracker.track(value)\n return super().__setattr__(name, value)\n ```\n \"\"\"\n\n def __init__(self, config):\n self.config = config\n self.stored_ids = {name: set() for name in self.config.keys()}\n self.locked = False\n self._lock_violation_msg = None\n\n def track(self, attr):\n if not is_tracking_enabled():\n return attr\n\n for store_name, (is_attr_type, _) in self.config.items():\n if is_attr_type(attr):\n if id(attr) not in self.stored_ids[store_name]:\n self.add_to_store(store_name, attr)\n return attr\n if isinstance(attr, tuple):\n wrapped_attr = []\n for e in attr:\n wrapped_attr.append(self.track(e))\n # This should cover tuples and nametuples\n return attr.__class__(wrapped_attr)\n elif isinstance(attr, list):\n return TrackedList(attr, self)\n elif isinstance(attr, dict):\n # TODO: OrderedDict?\n return TrackedDict(attr, self)\n elif isinstance(attr, set):\n return TrackedSet(attr, self)\n return attr\n\n def untrack(self, value):\n for store_name in self.stored_ids.keys():\n if id(value) in self.stored_ids[store_name]:\n self.stored_ids[store_name].remove(id(value))\n python_utils.remove_by_id(self.config[store_name][1], value)\n\n def lock(self, msg):\n self.locked = True\n self._lock_violation_msg = msg\n\n def add_to_store(self, store_name, value):\n if self.locked:\n raise ValueError(self._lock_violation_msg)\n self.config[store_name][1].append(value)\n self.stored_ids[store_name].add(id(value))\n\n\nclass TrackedList(list):\n # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = [tracker.track(v) for v in values]\n super().__init__(values or [])\n\n def append(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().append(value)\n\n def insert(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().insert(value)\n\n def extend(self, values):\n if self.tracker:\n values = [self.tracker.track(v) for v in values]\n super().extend(values)\n\n def remove(self, value):\n if self.tracker:\n self.tracker.untrack(value)\n try:\n super().remove(value)\n except ValueError:\n python_utils.remove_by_id(self, value)\n\n\nclass TrackedDict(dict):\n # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = {k: tracker.track(v) for k, v in values.items()}\n super().__init__(values or [])\n\n def __setitem__(self, key, value):\n if self.tracker:\n self.tracker.track(value)\n super().__setitem__(key, value)\n\n def update(self, mapping):\n if self.tracker:\n mapping = {k: self.tracker.track(v) for k, v in mapping.items()}\n super().update(mapping)\n\n\nclass TrackedSet(set):\n # TODO: override item removal methods?\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = {tracker.track(v) for v in values}\n super().__init__(values or [])\n\n def add(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().add(value)\n\n 
def update(self, values):\n if self.tracker:\n values = [self.tracker.track(v) for v in values]\n super().update(values)\n\n def remove(self, value):\n if self.tracker:\n self.tracker.untrack(value)\n super().remove(value)\n", "path": "keras/utils/tracking.py"}], "after_files": [{"content": "from functools import wraps\n\nfrom keras.backend.common.global_state import get_global_attribute\nfrom keras.backend.common.global_state import set_global_attribute\nfrom keras.utils import python_utils\n\n\nclass DotNotTrackScope:\n def __enter__(self):\n self.original_value = is_tracking_enabled()\n set_global_attribute(\"tracking_on\", False)\n\n def __exit__(self, *args, **kwargs):\n set_global_attribute(\"tracking_on\", self.original_value)\n\n\ndef is_tracking_enabled():\n return get_global_attribute(\"tracking_on\", True)\n\n\ndef no_automatic_dependency_tracking(fn):\n @wraps(fn)\n def wrapper(*args, **kwargs):\n with DotNotTrackScope():\n return fn(*args, **kwargs)\n\n return wrapper\n\n\nclass Tracker:\n \"\"\"Attribute tracker, used for e.g. Variable tracking.\n\n Monitors certain attribute types\n and put them in appropriate lists in case of a match.\n\n Also passively tracks certain mutable collections\n (dict, list) so that items added to them later\n still get tracked. This is done by wrapping these\n collections into an equivalent, tracking-aware object.\n\n Usage:\n\n ```python\n def __init__(self):\n self.tracker = Tracker(\n # Format: `name: (test_fn, store)`\n {\n \"variables\":\n (lambda x: isinstance(x, Variable), self._variables),\n \"metrics\": (lambda x: isinstance(x, Metric), self._metrics),\n \"layers\": (lambda x: isinstance(x, Layer), self._layers),\n }\n )\n\n def __setattr__(self, name, value):\n if hasattr(self, \"_tracker\"):\n value = self._tracker.track(value)\n return super().__setattr__(name, value)\n ```\n \"\"\"\n\n def __init__(self, config):\n self.config = config\n self.stored_ids = {name: set() for name in self.config.keys()}\n self.locked = False\n self._lock_violation_msg = None\n\n def track(self, attr):\n if not is_tracking_enabled():\n return attr\n\n for store_name, (is_attr_type, _) in self.config.items():\n if is_attr_type(attr):\n if id(attr) not in self.stored_ids[store_name]:\n self.add_to_store(store_name, attr)\n return attr\n if isinstance(attr, tuple):\n wrapped_attr = []\n for e in attr:\n wrapped_attr.append(self.track(e))\n # This should cover tuples and nametuples\n return attr.__class__(wrapped_attr)\n elif isinstance(attr, list):\n return TrackedList(attr, self)\n elif isinstance(attr, dict):\n # TODO: OrderedDict?\n return TrackedDict(attr, self)\n elif isinstance(attr, set):\n return TrackedSet(attr, self)\n return attr\n\n def untrack(self, value):\n for store_name in self.stored_ids.keys():\n if id(value) in self.stored_ids[store_name]:\n self.stored_ids[store_name].remove(id(value))\n python_utils.remove_by_id(self.config[store_name][1], value)\n\n def lock(self, msg):\n self.locked = True\n self._lock_violation_msg = msg\n\n def add_to_store(self, store_name, value):\n if self.locked:\n raise ValueError(self._lock_violation_msg)\n self.config[store_name][1].append(value)\n self.stored_ids[store_name].add(id(value))\n\n\nclass TrackedList(list):\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = [tracker.track(v) for v in values]\n super().__init__(values or [])\n\n def append(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().append(value)\n\n def 
insert(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().insert(value)\n\n def extend(self, values):\n if self.tracker:\n values = [self.tracker.track(v) for v in values]\n super().extend(values)\n\n def remove(self, value):\n if self.tracker:\n self.tracker.untrack(value)\n try:\n super().remove(value)\n except ValueError:\n python_utils.remove_by_id(self, value)\n\n def pop(self, index=-1):\n if self.tracker:\n value = self[index]\n self.tracker.untrack(value)\n return super().pop(index)\n else:\n return super().pop(index)\n\n def clear(self):\n if self.tracker:\n for value in self:\n self.tracker.untrack(value)\n super().clear()\n\n def __delitem__(self, index):\n value = self[index] # Get value before removing\n super().__delitem__(index)\n if self.tracker:\n self.tracker.untrack(value)\n\n\nclass TrackedDict(dict):\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = {k: tracker.track(v) for k, v in values.items()}\n super().__init__(values or [])\n\n def __setitem__(self, key, value):\n if self.tracker:\n self.tracker.track(value)\n super().__setitem__(key, value)\n\n def update(self, mapping):\n if self.tracker:\n mapping = {k: self.tracker.track(v) for k, v in mapping.items()}\n super().update(mapping)\n\n def pop(self, key, default=None):\n if self.tracker:\n value = super().pop(key, default)\n if value is not default:\n self.tracker.untrack(value)\n return value\n else:\n return super().pop(key, default)\n\n def popitem(self):\n key, value = super().popitem()\n if self.tracker:\n self.tracker.untrack(value)\n return key, value\n\n def clear(self):\n if self.tracker:\n for value in self.values():\n self.tracker.untrack(value)\n super().clear()\n\n\nclass TrackedSet(set):\n def __init__(self, values=None, tracker=None):\n self.tracker = tracker\n if tracker and values:\n values = {tracker.track(v) for v in values}\n super().__init__(values or [])\n\n def add(self, value):\n if self.tracker:\n self.tracker.track(value)\n super().add(value)\n\n def update(self, values):\n if self.tracker:\n values = [self.tracker.track(v) for v in values]\n super().update(values)\n\n def remove(self, value):\n if self.tracker:\n self.tracker.untrack(value)\n super().remove(value)\n\n def pop(self):\n value = super().pop()\n if self.tracker:\n self.tracker.untrack(value)\n return value\n\n def clear(self):\n if self.tracker:\n for value in self:\n self.tracker.untrack(value)\n super().clear()\n", "path": "keras/utils/tracking.py"}]}
| 1,973 | 648 |
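The keras diff above resolves the TODO comments by overriding the removal methods (pop, clear, __delitem__, popitem) so that items leaving a tracked collection are also untracked from the corresponding store. A toy stand-in, deliberately not the real keras classes, showing why removal must call untrack:

```python
# Minimal illustration of the tracking/untracking contract: without a
# pop override that calls untrack, the store would keep a stale entry.
class ToyTracker:
    def __init__(self):
        self.store = []           # mirrors one (test_fn, store) entry
        self.stored_ids = set()

    def track(self, value):
        if id(value) not in self.stored_ids:
            self.stored_ids.add(id(value))
            self.store.append(value)
        return value

    def untrack(self, value):
        if id(value) in self.stored_ids:
            self.stored_ids.remove(id(value))
            self.store.remove(value)


class ToyTrackedList(list):
    def __init__(self, values=(), tracker=None):
        self.tracker = tracker
        super().__init__(tracker.track(v) for v in values)

    def pop(self, index=-1):
        value = self[index]
        self.tracker.untrack(value)   # keep the store in sync on removal
        return super().pop(index)


lst = ToyTrackedList(["w1", "w2"], tracker=ToyTracker())
lst.pop()                             # removes "w2" from list and store
assert lst.tracker.store == ["w1"]
```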
gh_patches_debug_34076
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-575
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Openlibrary connector not loading isbn sometimes, when it appears to be available
an example: https://openlibrary.org/books/OL27222321M.json
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/connectors/openlibrary.py`
Content:
```
1 ''' openlibrary data connector '''
2 import re
3
4 from bookwyrm import models
5 from .abstract_connector import AbstractConnector, SearchResult, Mapping
6 from .abstract_connector import get_data
7 from .connector_manager import ConnectorException
8 from .openlibrary_languages import languages
9
10
11 class Connector(AbstractConnector):
12 ''' instantiate a connector for OL '''
13 def __init__(self, identifier):
14 super().__init__(identifier)
15
16 get_first = lambda a: a[0]
17 get_remote_id = lambda a: self.base_url + a
18 self.book_mappings = [
19 Mapping('title'),
20 Mapping('id', remote_field='key', formatter=get_remote_id),
21 Mapping(
22 'cover', remote_field='covers', formatter=self.get_cover_url),
23 Mapping('sortTitle', remote_field='sort_title'),
24 Mapping('subtitle'),
25 Mapping('description', formatter=get_description),
26 Mapping('languages', formatter=get_languages),
27 Mapping('series', formatter=get_first),
28 Mapping('seriesNumber', remote_field='series_number'),
29 Mapping('subjects'),
30 Mapping('subjectPlaces'),
31 Mapping('isbn13', formatter=get_first),
32 Mapping('isbn10', formatter=get_first),
33 Mapping('lccn', formatter=get_first),
34 Mapping(
35 'oclcNumber', remote_field='oclc_numbers',
36 formatter=get_first
37 ),
38 Mapping(
39 'openlibraryKey', remote_field='key',
40 formatter=get_openlibrary_key
41 ),
42 Mapping('goodreadsKey', remote_field='goodreads_key'),
43 Mapping('asin'),
44 Mapping(
45 'firstPublishedDate', remote_field='first_publish_date',
46 ),
47 Mapping('publishedDate', remote_field='publish_date'),
48 Mapping('pages', remote_field='number_of_pages'),
49 Mapping('physicalFormat', remote_field='physical_format'),
50 Mapping('publishers'),
51 ]
52
53 self.author_mappings = [
54 Mapping('id', remote_field='key', formatter=get_remote_id),
55 Mapping('name'),
56 Mapping(
57 'openlibraryKey', remote_field='key',
58 formatter=get_openlibrary_key
59 ),
60 Mapping('born', remote_field='birth_date'),
61 Mapping('died', remote_field='death_date'),
62 Mapping('bio', formatter=get_description),
63 ]
64
65
66 def get_remote_id_from_data(self, data):
67 ''' format a url from an openlibrary id field '''
68 try:
69 key = data['key']
70 except KeyError:
71 raise ConnectorException('Invalid book data')
72 return '%s%s' % (self.books_url, key)
73
74
75 def is_work_data(self, data):
76 return bool(re.match(r'^[\/\w]+OL\d+W$', data['key']))
77
78
79 def get_edition_from_work_data(self, data):
80 try:
81 key = data['key']
82 except KeyError:
83 raise ConnectorException('Invalid book data')
84 url = '%s%s/editions' % (self.books_url, key)
85 data = get_data(url)
86 return pick_default_edition(data['entries'])
87
88
89 def get_work_from_edition_data(self, data):
90 try:
91 key = data['works'][0]['key']
92 except (IndexError, KeyError):
93 raise ConnectorException('No work found for edition')
94 url = '%s%s' % (self.books_url, key)
95 return get_data(url)
96
97
98 def get_authors_from_data(self, data):
99 ''' parse author json and load or create authors '''
100 for author_blob in data.get('authors', []):
101 author_blob = author_blob.get('author', author_blob)
102 # this id is "/authors/OL1234567A"
103 author_id = author_blob['key']
104 url = '%s%s' % (self.base_url, author_id)
105 yield self.get_or_create_author(url)
106
107
108 def get_cover_url(self, cover_blob):
109 ''' ask openlibrary for the cover '''
110 cover_id = cover_blob[0]
111 image_name = '%s-L.jpg' % cover_id
112 return '%s/b/id/%s' % (self.covers_url, image_name)
113
114
115 def parse_search_data(self, data):
116 return data.get('docs')
117
118
119 def format_search_result(self, search_result):
120 # build the remote id from the openlibrary key
121 key = self.books_url + search_result['key']
122 author = search_result.get('author_name') or ['Unknown']
123 return SearchResult(
124 title=search_result.get('title'),
125 key=key,
126 author=', '.join(author),
127 connector=self,
128 year=search_result.get('first_publish_year'),
129 )
130
131
132 def load_edition_data(self, olkey):
133 ''' query openlibrary for editions of a work '''
134 url = '%s/works/%s/editions' % (self.books_url, olkey)
135 return get_data(url)
136
137
138 def expand_book_data(self, book):
139 work = book
140 # go from the edition to the work, if necessary
141 if isinstance(book, models.Edition):
142 work = book.parent_work
143
144 # we can mass download edition data from OL to avoid repeatedly querying
145 edition_options = self.load_edition_data(work.openlibrary_key)
146 for edition_data in edition_options.get('entries'):
147 self.create_edition_from_data(work, edition_data)
148
149
150 def get_description(description_blob):
151 ''' descriptions can be a string or a dict '''
152 if isinstance(description_blob, dict):
153 return description_blob.get('value')
154 return description_blob
155
156
157 def get_openlibrary_key(key):
158 ''' convert /books/OL27320736M into OL27320736M '''
159 return key.split('/')[-1]
160
161
162 def get_languages(language_blob):
163 ''' /language/eng -> English '''
164 langs = []
165 for lang in language_blob:
166 langs.append(
167 languages.get(lang.get('key', ''), None)
168 )
169 return langs
170
171
172 def pick_default_edition(options):
173 ''' favor physical copies with covers in english '''
174 if not options:
175 return None
176 if len(options) == 1:
177 return options[0]
178
179 options = [e for e in options if e.get('covers')] or options
180 options = [e for e in options if \
181 '/languages/eng' in str(e.get('languages'))] or options
182 formats = ['paperback', 'hardcover', 'mass market paperback']
183 options = [e for e in options if \
184 str(e.get('physical_format')).lower() in formats] or options
185 options = [e for e in options if e.get('isbn_13')] or options
186 options = [e for e in options if e.get('ocaid')] or options
187 return options[0]
188
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/connectors/openlibrary.py b/bookwyrm/connectors/openlibrary.py
--- a/bookwyrm/connectors/openlibrary.py
+++ b/bookwyrm/connectors/openlibrary.py
@@ -27,9 +27,9 @@
Mapping('series', formatter=get_first),
Mapping('seriesNumber', remote_field='series_number'),
Mapping('subjects'),
- Mapping('subjectPlaces'),
- Mapping('isbn13', formatter=get_first),
- Mapping('isbn10', formatter=get_first),
+ Mapping('subjectPlaces', remote_field='subject_places'),
+ Mapping('isbn13', remote_field='isbn_13', formatter=get_first),
+ Mapping('isbn10', remote_field='isbn_10', formatter=get_first),
Mapping('lccn', formatter=get_first),
Mapping(
'oclcNumber', remote_field='oclc_numbers',
@@ -144,9 +144,34 @@
# we can mass download edition data from OL to avoid repeatedly querying
edition_options = self.load_edition_data(work.openlibrary_key)
for edition_data in edition_options.get('entries'):
+ # does this edition have ANY interesting data?
+ if ignore_edition(edition_data):
+ continue
self.create_edition_from_data(work, edition_data)
+def ignore_edition(edition_data):
+ ''' don't load a million editions that have no metadata '''
+ # an isbn, we love to see it
+ if edition_data.get('isbn_13') or edition_data.get('isbn_10'):
+ print(edition_data.get('isbn_10'))
+ return False
+ # grudgingly, oclc can stay
+ if edition_data.get('oclc_numbers'):
+ print(edition_data.get('oclc_numbers'))
+ return False
+ # if it has a cover it can stay
+ if edition_data.get('covers'):
+ print(edition_data.get('covers'))
+ return False
+ # keep non-english editions
+ if edition_data.get('languages') and \
+ 'languages/eng' not in str(edition_data.get('languages')):
+ print(edition_data.get('languages'))
+ return False
+ return True
+
+
def get_description(description_blob):
''' descriptions can be a string or a dict '''
if isinstance(description_blob, dict):
|
{"golden_diff": "diff --git a/bookwyrm/connectors/openlibrary.py b/bookwyrm/connectors/openlibrary.py\n--- a/bookwyrm/connectors/openlibrary.py\n+++ b/bookwyrm/connectors/openlibrary.py\n@@ -27,9 +27,9 @@\n Mapping('series', formatter=get_first),\n Mapping('seriesNumber', remote_field='series_number'),\n Mapping('subjects'),\n- Mapping('subjectPlaces'),\n- Mapping('isbn13', formatter=get_first),\n- Mapping('isbn10', formatter=get_first),\n+ Mapping('subjectPlaces', remote_field='subject_places'),\n+ Mapping('isbn13', remote_field='isbn_13', formatter=get_first),\n+ Mapping('isbn10', remote_field='isbn_10', formatter=get_first),\n Mapping('lccn', formatter=get_first),\n Mapping(\n 'oclcNumber', remote_field='oclc_numbers',\n@@ -144,9 +144,34 @@\n # we can mass download edition data from OL to avoid repeatedly querying\n edition_options = self.load_edition_data(work.openlibrary_key)\n for edition_data in edition_options.get('entries'):\n+ # does this edition have ANY interesting data?\n+ if ignore_edition(edition_data):\n+ continue\n self.create_edition_from_data(work, edition_data)\n \n \n+def ignore_edition(edition_data):\n+ ''' don't load a million editions that have no metadata '''\n+ # an isbn, we love to see it\n+ if edition_data.get('isbn_13') or edition_data.get('isbn_10'):\n+ print(edition_data.get('isbn_10'))\n+ return False\n+ # grudgingly, oclc can stay\n+ if edition_data.get('oclc_numbers'):\n+ print(edition_data.get('oclc_numbers'))\n+ return False\n+ # if it has a cover it can stay\n+ if edition_data.get('covers'):\n+ print(edition_data.get('covers'))\n+ return False\n+ # keep non-english editions\n+ if edition_data.get('languages') and \\\n+ 'languages/eng' not in str(edition_data.get('languages')):\n+ print(edition_data.get('languages'))\n+ return False\n+ return True\n+\n+\n def get_description(description_blob):\n ''' descriptions can be a string or a dict '''\n if isinstance(description_blob, dict):\n", "issue": "Openlibrary connector not loading isbn sometimes, when it appears to be available\nan example: https://openlibrary.org/books/OL27222321M.json\n", "before_files": [{"content": "''' openlibrary data connector '''\nimport re\n\nfrom bookwyrm import models\nfrom .abstract_connector import AbstractConnector, SearchResult, Mapping\nfrom .abstract_connector import get_data\nfrom .connector_manager import ConnectorException\nfrom .openlibrary_languages import languages\n\n\nclass Connector(AbstractConnector):\n ''' instantiate a connector for OL '''\n def __init__(self, identifier):\n super().__init__(identifier)\n\n get_first = lambda a: a[0]\n get_remote_id = lambda a: self.base_url + a\n self.book_mappings = [\n Mapping('title'),\n Mapping('id', remote_field='key', formatter=get_remote_id),\n Mapping(\n 'cover', remote_field='covers', formatter=self.get_cover_url),\n Mapping('sortTitle', remote_field='sort_title'),\n Mapping('subtitle'),\n Mapping('description', formatter=get_description),\n Mapping('languages', formatter=get_languages),\n Mapping('series', formatter=get_first),\n Mapping('seriesNumber', remote_field='series_number'),\n Mapping('subjects'),\n Mapping('subjectPlaces'),\n Mapping('isbn13', formatter=get_first),\n Mapping('isbn10', formatter=get_first),\n Mapping('lccn', formatter=get_first),\n Mapping(\n 'oclcNumber', remote_field='oclc_numbers',\n formatter=get_first\n ),\n Mapping(\n 'openlibraryKey', remote_field='key',\n formatter=get_openlibrary_key\n ),\n Mapping('goodreadsKey', remote_field='goodreads_key'),\n Mapping('asin'),\n Mapping(\n 
'firstPublishedDate', remote_field='first_publish_date',\n ),\n Mapping('publishedDate', remote_field='publish_date'),\n Mapping('pages', remote_field='number_of_pages'),\n Mapping('physicalFormat', remote_field='physical_format'),\n Mapping('publishers'),\n ]\n\n self.author_mappings = [\n Mapping('id', remote_field='key', formatter=get_remote_id),\n Mapping('name'),\n Mapping(\n 'openlibraryKey', remote_field='key',\n formatter=get_openlibrary_key\n ),\n Mapping('born', remote_field='birth_date'),\n Mapping('died', remote_field='death_date'),\n Mapping('bio', formatter=get_description),\n ]\n\n\n def get_remote_id_from_data(self, data):\n ''' format a url from an openlibrary id field '''\n try:\n key = data['key']\n except KeyError:\n raise ConnectorException('Invalid book data')\n return '%s%s' % (self.books_url, key)\n\n\n def is_work_data(self, data):\n return bool(re.match(r'^[\\/\\w]+OL\\d+W$', data['key']))\n\n\n def get_edition_from_work_data(self, data):\n try:\n key = data['key']\n except KeyError:\n raise ConnectorException('Invalid book data')\n url = '%s%s/editions' % (self.books_url, key)\n data = get_data(url)\n return pick_default_edition(data['entries'])\n\n\n def get_work_from_edition_data(self, data):\n try:\n key = data['works'][0]['key']\n except (IndexError, KeyError):\n raise ConnectorException('No work found for edition')\n url = '%s%s' % (self.books_url, key)\n return get_data(url)\n\n\n def get_authors_from_data(self, data):\n ''' parse author json and load or create authors '''\n for author_blob in data.get('authors', []):\n author_blob = author_blob.get('author', author_blob)\n # this id is \"/authors/OL1234567A\"\n author_id = author_blob['key']\n url = '%s%s' % (self.base_url, author_id)\n yield self.get_or_create_author(url)\n\n\n def get_cover_url(self, cover_blob):\n ''' ask openlibrary for the cover '''\n cover_id = cover_blob[0]\n image_name = '%s-L.jpg' % cover_id\n return '%s/b/id/%s' % (self.covers_url, image_name)\n\n\n def parse_search_data(self, data):\n return data.get('docs')\n\n\n def format_search_result(self, search_result):\n # build the remote id from the openlibrary key\n key = self.books_url + search_result['key']\n author = search_result.get('author_name') or ['Unknown']\n return SearchResult(\n title=search_result.get('title'),\n key=key,\n author=', '.join(author),\n connector=self,\n year=search_result.get('first_publish_year'),\n )\n\n\n def load_edition_data(self, olkey):\n ''' query openlibrary for editions of a work '''\n url = '%s/works/%s/editions' % (self.books_url, olkey)\n return get_data(url)\n\n\n def expand_book_data(self, book):\n work = book\n # go from the edition to the work, if necessary\n if isinstance(book, models.Edition):\n work = book.parent_work\n\n # we can mass download edition data from OL to avoid repeatedly querying\n edition_options = self.load_edition_data(work.openlibrary_key)\n for edition_data in edition_options.get('entries'):\n self.create_edition_from_data(work, edition_data)\n\n\ndef get_description(description_blob):\n ''' descriptions can be a string or a dict '''\n if isinstance(description_blob, dict):\n return description_blob.get('value')\n return description_blob\n\n\ndef get_openlibrary_key(key):\n ''' convert /books/OL27320736M into OL27320736M '''\n return key.split('/')[-1]\n\n\ndef get_languages(language_blob):\n ''' /language/eng -> English '''\n langs = []\n for lang in language_blob:\n langs.append(\n languages.get(lang.get('key', ''), None)\n )\n return langs\n\n\ndef 
pick_default_edition(options):\n ''' favor physical copies with covers in english '''\n if not options:\n return None\n if len(options) == 1:\n return options[0]\n\n options = [e for e in options if e.get('covers')] or options\n options = [e for e in options if \\\n '/languages/eng' in str(e.get('languages'))] or options\n formats = ['paperback', 'hardcover', 'mass market paperback']\n options = [e for e in options if \\\n str(e.get('physical_format')).lower() in formats] or options\n options = [e for e in options if e.get('isbn_13')] or options\n options = [e for e in options if e.get('ocaid')] or options\n return options[0]\n", "path": "bookwyrm/connectors/openlibrary.py"}], "after_files": [{"content": "''' openlibrary data connector '''\nimport re\n\nfrom bookwyrm import models\nfrom .abstract_connector import AbstractConnector, SearchResult, Mapping\nfrom .abstract_connector import get_data\nfrom .connector_manager import ConnectorException\nfrom .openlibrary_languages import languages\n\n\nclass Connector(AbstractConnector):\n ''' instantiate a connector for OL '''\n def __init__(self, identifier):\n super().__init__(identifier)\n\n get_first = lambda a: a[0]\n get_remote_id = lambda a: self.base_url + a\n self.book_mappings = [\n Mapping('title'),\n Mapping('id', remote_field='key', formatter=get_remote_id),\n Mapping(\n 'cover', remote_field='covers', formatter=self.get_cover_url),\n Mapping('sortTitle', remote_field='sort_title'),\n Mapping('subtitle'),\n Mapping('description', formatter=get_description),\n Mapping('languages', formatter=get_languages),\n Mapping('series', formatter=get_first),\n Mapping('seriesNumber', remote_field='series_number'),\n Mapping('subjects'),\n Mapping('subjectPlaces', remote_field='subject_places'),\n Mapping('isbn13', remote_field='isbn_13', formatter=get_first),\n Mapping('isbn10', remote_field='isbn_10', formatter=get_first),\n Mapping('lccn', formatter=get_first),\n Mapping(\n 'oclcNumber', remote_field='oclc_numbers',\n formatter=get_first\n ),\n Mapping(\n 'openlibraryKey', remote_field='key',\n formatter=get_openlibrary_key\n ),\n Mapping('goodreadsKey', remote_field='goodreads_key'),\n Mapping('asin'),\n Mapping(\n 'firstPublishedDate', remote_field='first_publish_date',\n ),\n Mapping('publishedDate', remote_field='publish_date'),\n Mapping('pages', remote_field='number_of_pages'),\n Mapping('physicalFormat', remote_field='physical_format'),\n Mapping('publishers'),\n ]\n\n self.author_mappings = [\n Mapping('id', remote_field='key', formatter=get_remote_id),\n Mapping('name'),\n Mapping(\n 'openlibraryKey', remote_field='key',\n formatter=get_openlibrary_key\n ),\n Mapping('born', remote_field='birth_date'),\n Mapping('died', remote_field='death_date'),\n Mapping('bio', formatter=get_description),\n ]\n\n\n def get_remote_id_from_data(self, data):\n ''' format a url from an openlibrary id field '''\n try:\n key = data['key']\n except KeyError:\n raise ConnectorException('Invalid book data')\n return '%s%s' % (self.books_url, key)\n\n\n def is_work_data(self, data):\n return bool(re.match(r'^[\\/\\w]+OL\\d+W$', data['key']))\n\n\n def get_edition_from_work_data(self, data):\n try:\n key = data['key']\n except KeyError:\n raise ConnectorException('Invalid book data')\n url = '%s%s/editions' % (self.books_url, key)\n data = get_data(url)\n return pick_default_edition(data['entries'])\n\n\n def get_work_from_edition_data(self, data):\n try:\n key = data['works'][0]['key']\n except (IndexError, KeyError):\n raise ConnectorException('No work 
found for edition')\n url = '%s%s' % (self.books_url, key)\n return get_data(url)\n\n\n def get_authors_from_data(self, data):\n ''' parse author json and load or create authors '''\n for author_blob in data.get('authors', []):\n author_blob = author_blob.get('author', author_blob)\n # this id is \"/authors/OL1234567A\"\n author_id = author_blob['key']\n url = '%s%s' % (self.base_url, author_id)\n yield self.get_or_create_author(url)\n\n\n def get_cover_url(self, cover_blob):\n ''' ask openlibrary for the cover '''\n cover_id = cover_blob[0]\n image_name = '%s-L.jpg' % cover_id\n return '%s/b/id/%s' % (self.covers_url, image_name)\n\n\n def parse_search_data(self, data):\n return data.get('docs')\n\n\n def format_search_result(self, search_result):\n # build the remote id from the openlibrary key\n key = self.books_url + search_result['key']\n author = search_result.get('author_name') or ['Unknown']\n return SearchResult(\n title=search_result.get('title'),\n key=key,\n author=', '.join(author),\n connector=self,\n year=search_result.get('first_publish_year'),\n )\n\n\n def load_edition_data(self, olkey):\n ''' query openlibrary for editions of a work '''\n url = '%s/works/%s/editions' % (self.books_url, olkey)\n return get_data(url)\n\n\n def expand_book_data(self, book):\n work = book\n # go from the edition to the work, if necessary\n if isinstance(book, models.Edition):\n work = book.parent_work\n\n # we can mass download edition data from OL to avoid repeatedly querying\n edition_options = self.load_edition_data(work.openlibrary_key)\n for edition_data in edition_options.get('entries'):\n # does this edition have ANY interesting data?\n if ignore_edition(edition_data):\n continue\n self.create_edition_from_data(work, edition_data)\n\n\ndef ignore_edition(edition_data):\n ''' don't load a million editions that have no metadata '''\n # an isbn, we love to see it\n if edition_data.get('isbn_13') or edition_data.get('isbn_10'):\n print(edition_data.get('isbn_10'))\n return False\n # grudgingly, oclc can stay\n if edition_data.get('oclc_numbers'):\n print(edition_data.get('oclc_numbers'))\n return False\n # if it has a cover it can stay\n if edition_data.get('covers'):\n print(edition_data.get('covers'))\n return False\n # keep non-english editions\n if edition_data.get('languages') and \\\n 'languages/eng' not in str(edition_data.get('languages')):\n print(edition_data.get('languages'))\n return False\n return True\n\n\ndef get_description(description_blob):\n ''' descriptions can be a string or a dict '''\n if isinstance(description_blob, dict):\n return description_blob.get('value')\n return description_blob\n\n\ndef get_openlibrary_key(key):\n ''' convert /books/OL27320736M into OL27320736M '''\n return key.split('/')[-1]\n\n\ndef get_languages(language_blob):\n ''' /language/eng -> English '''\n langs = []\n for lang in language_blob:\n langs.append(\n languages.get(lang.get('key', ''), None)\n )\n return langs\n\n\ndef pick_default_edition(options):\n ''' favor physical copies with covers in english '''\n if not options:\n return None\n if len(options) == 1:\n return options[0]\n\n options = [e for e in options if e.get('covers')] or options\n options = [e for e in options if \\\n '/languages/eng' in str(e.get('languages'))] or options\n formats = ['paperback', 'hardcover', 'mass market paperback']\n options = [e for e in options if \\\n str(e.get('physical_format')).lower() in formats] or options\n options = [e for e in options if e.get('isbn_13')] or options\n options = [e for e in 
options if e.get('ocaid')] or options\n return options[0]\n", "path": "bookwyrm/connectors/openlibrary.py"}]}
| 2,185 | 521 |
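The record above hinges on two details of the OpenLibrary edition payload: the JSON uses snake_case keys (`isbn_13`, `isbn_10`, `subject_places`, `oclc_numbers`, `covers`, `languages`), and editions that carry none of them are skipped during import. Below is a standalone restatement of the filter added in that diff, shown as a minimal sketch; the edition dicts are made-up examples, not real OpenLibrary responses.

```python
def ignore_edition(edition_data: dict) -> bool:
    """Return True when an edition carries no metadata worth importing."""
    has_isbn = edition_data.get("isbn_13") or edition_data.get("isbn_10")
    has_oclc = edition_data.get("oclc_numbers")
    has_cover = edition_data.get("covers")
    # non-English editions are kept even if they have nothing else
    langs = str(edition_data.get("languages") or "")
    non_english = edition_data.get("languages") and "languages/eng" not in langs
    return not (has_isbn or has_oclc or has_cover or non_english)


# hypothetical edition payloads for illustration only
editions = [
    {"isbn_13": ["9780316212366"]},              # kept: has an ISBN
    {"languages": [{"key": "/languages/fra"}]},  # kept: non-English
    {"physical_format": "paperback"},            # skipped: nothing useful
]
print([ignore_edition(e) for e in editions])  # [False, False, True]
```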
gh_patches_debug_37011
|
rasdani/github-patches
|
git_diff
|
huggingface__dataset-viewer-1084
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No specific error when dataset tries to import a non-installed module
When a dataset script tries to import a module/library that is not installed, there is no informative error message.
See:
- #1067
- #1068
Related to:
- #976
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `services/worker/src/worker/job_runners/config_names.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2022 The HuggingFace Authors.
3
4 import logging
5 from http import HTTPStatus
6 from typing import Any, List, Literal, Mapping, Optional, TypedDict, Union
7
8 from datasets import get_dataset_config_names
9 from datasets.data_files import EmptyDatasetError as _EmptyDatasetError
10 from libcommon.constants import PROCESSING_STEP_CONFIG_NAMES_VERSION
11 from libcommon.simple_cache import SplitFullName
12
13 from worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError
14 from worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner
15
16 ConfigNamesJobRunnerErrorCode = Literal["EmptyDatasetError", "ConfigNamesError"]
17
18
19 class ConfigNamesJobRunnerError(JobRunnerError):
20 """Base class for job runner exceptions."""
21
22 def __init__(
23 self,
24 message: str,
25 status_code: HTTPStatus,
26 code: ConfigNamesJobRunnerErrorCode,
27 cause: Optional[BaseException] = None,
28 disclose_cause: bool = False,
29 ):
30 super().__init__(
31 message=message, status_code=status_code, code=code, cause=cause, disclose_cause=disclose_cause
32 )
33
34
35 class EmptyDatasetError(ConfigNamesJobRunnerError):
36 """Raised when the dataset has no data."""
37
38 def __init__(self, message: str, cause: Optional[BaseException] = None):
39 super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, "EmptyDatasetError", cause, True)
40
41
42 class ConfigNamesError(ConfigNamesJobRunnerError):
43 """Raised when the config names could not be fetched."""
44
45 def __init__(self, message: str, cause: Optional[BaseException] = None):
46 super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, "ConfigNamesError", cause, True)
47
48
49 class ConfigNameItem(TypedDict):
50 dataset: str
51 config: str
52
53
54 class ConfigNamesResponse(TypedDict):
55 config_names: List[ConfigNameItem]
56
57
58 def compute_config_names_response(
59 dataset: str,
60 hf_token: Optional[str] = None,
61 ) -> ConfigNamesResponse:
62 """
63 Get the response of /config-names for one specific dataset on huggingface.co.
64 Dataset can be private or gated if you pass an acceptable token.
65
66 It is assumed that the dataset exists and can be accessed using the token.
67
68 Args:
69 dataset (`str`):
70 A namespace (user or an organization) and a repo name separated
71 by a `/`.
72 hf_token (`str`, *optional*):
73 An authentication token (See https://huggingface.co/settings/token)
74 Returns:
75 `ConfigNamesResponse`: An object with the list of config names.
76 <Tip>
77 Raises the following errors:
78 - [`~job_runners.config_names.EmptyDatasetError`]
79 The dataset is empty.
80 - [`~job_runners.config_names.ConfigNamesError`]
81 If the list of configs could not be obtained using the datasets library.
82 </Tip>
83 """
84 logging.info(f"get config names for dataset={dataset}")
85 use_auth_token: Union[bool, str, None] = hf_token if hf_token is not None else False
86 # get the list of splits in streaming mode
87 try:
88 config_name_items: List[ConfigNameItem] = [
89 {"dataset": dataset, "config": str(config)}
90 for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
91 ]
92 except _EmptyDatasetError as err:
93 raise EmptyDatasetError("The dataset is empty.", cause=err) from err
94 except Exception as err:
95 raise ConfigNamesError("Cannot get the config names for the dataset.", cause=err) from err
96 return ConfigNamesResponse(config_names=config_name_items)
97
98
99 class ConfigNamesJobRunner(DatasetsBasedJobRunner):
100 @staticmethod
101 def get_job_type() -> str:
102 return "/config-names"
103
104 @staticmethod
105 def get_job_runner_version() -> int:
106 return PROCESSING_STEP_CONFIG_NAMES_VERSION
107
108 def compute(self) -> CompleteJobResult:
109 if self.dataset is None:
110 raise ParameterMissingError("'dataset' parameter is required")
111 return CompleteJobResult(
112 compute_config_names_response(dataset=self.dataset, hf_token=self.common_config.hf_token)
113 )
114
115 def get_new_splits(self, content: Mapping[str, Any]) -> set[SplitFullName]:
116 """Get the set of new splits, from the content created by the compute."""
117 return {SplitFullName(dataset=s["dataset"], config=s["config"], split=None) for s in content["config_names"]}
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/services/worker/src/worker/job_runners/config_names.py b/services/worker/src/worker/job_runners/config_names.py
--- a/services/worker/src/worker/job_runners/config_names.py
+++ b/services/worker/src/worker/job_runners/config_names.py
@@ -13,7 +13,7 @@
from worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError
from worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner
-ConfigNamesJobRunnerErrorCode = Literal["EmptyDatasetError", "ConfigNamesError"]
+ConfigNamesJobRunnerErrorCode = Literal["EmptyDatasetError", "DatasetModuleNotInstalledError", "ConfigNamesError"]
class ConfigNamesJobRunnerError(JobRunnerError):
@@ -39,6 +39,13 @@
super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, "EmptyDatasetError", cause, True)
+class DatasetModuleNotInstalledError(ConfigNamesJobRunnerError):
+ """Raised when the dataset tries to import a module that is not installed."""
+
+ def __init__(self, message: str, cause: Optional[BaseException] = None):
+ super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, "DatasetModuleNotInstalledError", cause, True)
+
+
class ConfigNamesError(ConfigNamesJobRunnerError):
"""Raised when the config names could not be fetched."""
@@ -77,6 +84,8 @@
Raises the following errors:
- [`~job_runners.config_names.EmptyDatasetError`]
The dataset is empty.
+ - [`~job_runners.config_names.DatasetModuleNotInstalledError`]
+ The dataset tries to import a module that is not installed.
- [`~job_runners.config_names.ConfigNamesError`]
If the list of configs could not be obtained using the datasets library.
</Tip>
@@ -91,6 +100,10 @@
]
except _EmptyDatasetError as err:
raise EmptyDatasetError("The dataset is empty.", cause=err) from err
+ except ImportError as err:
+ raise DatasetModuleNotInstalledError(
+ "The dataset tries to import a module that is not installed.", cause=err
+ ) from err
except Exception as err:
raise ConfigNamesError("Cannot get the config names for the dataset.", cause=err) from err
return ConfigNamesResponse(config_names=config_name_items)
|
{"golden_diff": "diff --git a/services/worker/src/worker/job_runners/config_names.py b/services/worker/src/worker/job_runners/config_names.py\n--- a/services/worker/src/worker/job_runners/config_names.py\n+++ b/services/worker/src/worker/job_runners/config_names.py\n@@ -13,7 +13,7 @@\n from worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError\n from worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner\n \n-ConfigNamesJobRunnerErrorCode = Literal[\"EmptyDatasetError\", \"ConfigNamesError\"]\n+ConfigNamesJobRunnerErrorCode = Literal[\"EmptyDatasetError\", \"DatasetModuleNotInstalledError\", \"ConfigNamesError\"]\n \n \n class ConfigNamesJobRunnerError(JobRunnerError):\n@@ -39,6 +39,13 @@\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"EmptyDatasetError\", cause, True)\n \n \n+class DatasetModuleNotInstalledError(ConfigNamesJobRunnerError):\n+ \"\"\"Raised when the dataset tries to import a module that is not installed.\"\"\"\n+\n+ def __init__(self, message: str, cause: Optional[BaseException] = None):\n+ super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"DatasetModuleNotInstalledError\", cause, True)\n+\n+\n class ConfigNamesError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the config names could not be fetched.\"\"\"\n \n@@ -77,6 +84,8 @@\n Raises the following errors:\n - [`~job_runners.config_names.EmptyDatasetError`]\n The dataset is empty.\n+ - [`~job_runners.config_names.DatasetModuleNotInstalledError`]\n+ The dataset tries to import a module that is not installed.\n - [`~job_runners.config_names.ConfigNamesError`]\n If the list of configs could not be obtained using the datasets library.\n </Tip>\n@@ -91,6 +100,10 @@\n ]\n except _EmptyDatasetError as err:\n raise EmptyDatasetError(\"The dataset is empty.\", cause=err) from err\n+ except ImportError as err:\n+ raise DatasetModuleNotInstalledError(\n+ \"The dataset tries to import a module that is not installed.\", cause=err\n+ ) from err\n except Exception as err:\n raise ConfigNamesError(\"Cannot get the config names for the dataset.\", cause=err) from err\n return ConfigNamesResponse(config_names=config_name_items)\n", "issue": "No specific error when dataset tries to import a non-installed module\nWhen a dataset script tries to import a module/library that is not installed, there is no informative error message.\r\n\r\nSee:\r\n- #1067 \r\n- #1068\r\n\r\nRelated to:\r\n- #976\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport logging\nfrom http import HTTPStatus\nfrom typing import Any, List, Literal, Mapping, Optional, TypedDict, Union\n\nfrom datasets import get_dataset_config_names\nfrom datasets.data_files import EmptyDatasetError as _EmptyDatasetError\nfrom libcommon.constants import PROCESSING_STEP_CONFIG_NAMES_VERSION\nfrom libcommon.simple_cache import SplitFullName\n\nfrom worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError\nfrom worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner\n\nConfigNamesJobRunnerErrorCode = Literal[\"EmptyDatasetError\", \"ConfigNamesError\"]\n\n\nclass ConfigNamesJobRunnerError(JobRunnerError):\n \"\"\"Base class for job runner exceptions.\"\"\"\n\n def __init__(\n self,\n message: str,\n status_code: HTTPStatus,\n code: ConfigNamesJobRunnerErrorCode,\n cause: Optional[BaseException] = None,\n disclose_cause: bool = False,\n ):\n super().__init__(\n message=message, status_code=status_code, 
code=code, cause=cause, disclose_cause=disclose_cause\n )\n\n\nclass EmptyDatasetError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the dataset has no data.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"EmptyDatasetError\", cause, True)\n\n\nclass ConfigNamesError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the config names could not be fetched.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"ConfigNamesError\", cause, True)\n\n\nclass ConfigNameItem(TypedDict):\n dataset: str\n config: str\n\n\nclass ConfigNamesResponse(TypedDict):\n config_names: List[ConfigNameItem]\n\n\ndef compute_config_names_response(\n dataset: str,\n hf_token: Optional[str] = None,\n) -> ConfigNamesResponse:\n \"\"\"\n Get the response of /config-names for one specific dataset on huggingface.co.\n Dataset can be private or gated if you pass an acceptable token.\n\n It is assumed that the dataset exists and can be accessed using the token.\n\n Args:\n dataset (`str`):\n A namespace (user or an organization) and a repo name separated\n by a `/`.\n hf_token (`str`, *optional*):\n An authentication token (See https://huggingface.co/settings/token)\n Returns:\n `ConfigNamesResponse`: An object with the list of config names.\n <Tip>\n Raises the following errors:\n - [`~job_runners.config_names.EmptyDatasetError`]\n The dataset is empty.\n - [`~job_runners.config_names.ConfigNamesError`]\n If the list of configs could not be obtained using the datasets library.\n </Tip>\n \"\"\"\n logging.info(f\"get config names for dataset={dataset}\")\n use_auth_token: Union[bool, str, None] = hf_token if hf_token is not None else False\n # get the list of splits in streaming mode\n try:\n config_name_items: List[ConfigNameItem] = [\n {\"dataset\": dataset, \"config\": str(config)}\n for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))\n ]\n except _EmptyDatasetError as err:\n raise EmptyDatasetError(\"The dataset is empty.\", cause=err) from err\n except Exception as err:\n raise ConfigNamesError(\"Cannot get the config names for the dataset.\", cause=err) from err\n return ConfigNamesResponse(config_names=config_name_items)\n\n\nclass ConfigNamesJobRunner(DatasetsBasedJobRunner):\n @staticmethod\n def get_job_type() -> str:\n return \"/config-names\"\n\n @staticmethod\n def get_job_runner_version() -> int:\n return PROCESSING_STEP_CONFIG_NAMES_VERSION\n\n def compute(self) -> CompleteJobResult:\n if self.dataset is None:\n raise ParameterMissingError(\"'dataset' parameter is required\")\n return CompleteJobResult(\n compute_config_names_response(dataset=self.dataset, hf_token=self.common_config.hf_token)\n )\n\n def get_new_splits(self, content: Mapping[str, Any]) -> set[SplitFullName]:\n \"\"\"Get the set of new splits, from the content created by the compute.\"\"\"\n return {SplitFullName(dataset=s[\"dataset\"], config=s[\"config\"], split=None) for s in content[\"config_names\"]}\n", "path": "services/worker/src/worker/job_runners/config_names.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport logging\nfrom http import HTTPStatus\nfrom typing import Any, List, Literal, Mapping, Optional, TypedDict, Union\n\nfrom datasets import get_dataset_config_names\nfrom datasets.data_files import EmptyDatasetError as 
_EmptyDatasetError\nfrom libcommon.constants import PROCESSING_STEP_CONFIG_NAMES_VERSION\nfrom libcommon.simple_cache import SplitFullName\n\nfrom worker.job_runner import CompleteJobResult, JobRunnerError, ParameterMissingError\nfrom worker.job_runners._datasets_based_job_runner import DatasetsBasedJobRunner\n\nConfigNamesJobRunnerErrorCode = Literal[\"EmptyDatasetError\", \"DatasetModuleNotInstalledError\", \"ConfigNamesError\"]\n\n\nclass ConfigNamesJobRunnerError(JobRunnerError):\n \"\"\"Base class for job runner exceptions.\"\"\"\n\n def __init__(\n self,\n message: str,\n status_code: HTTPStatus,\n code: ConfigNamesJobRunnerErrorCode,\n cause: Optional[BaseException] = None,\n disclose_cause: bool = False,\n ):\n super().__init__(\n message=message, status_code=status_code, code=code, cause=cause, disclose_cause=disclose_cause\n )\n\n\nclass EmptyDatasetError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the dataset has no data.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"EmptyDatasetError\", cause, True)\n\n\nclass DatasetModuleNotInstalledError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the dataset tries to import a module that is not installed.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"DatasetModuleNotInstalledError\", cause, True)\n\n\nclass ConfigNamesError(ConfigNamesJobRunnerError):\n \"\"\"Raised when the config names could not be fetched.\"\"\"\n\n def __init__(self, message: str, cause: Optional[BaseException] = None):\n super().__init__(message, HTTPStatus.INTERNAL_SERVER_ERROR, \"ConfigNamesError\", cause, True)\n\n\nclass ConfigNameItem(TypedDict):\n dataset: str\n config: str\n\n\nclass ConfigNamesResponse(TypedDict):\n config_names: List[ConfigNameItem]\n\n\ndef compute_config_names_response(\n dataset: str,\n hf_token: Optional[str] = None,\n) -> ConfigNamesResponse:\n \"\"\"\n Get the response of /config-names for one specific dataset on huggingface.co.\n Dataset can be private or gated if you pass an acceptable token.\n\n It is assumed that the dataset exists and can be accessed using the token.\n\n Args:\n dataset (`str`):\n A namespace (user or an organization) and a repo name separated\n by a `/`.\n hf_token (`str`, *optional*):\n An authentication token (See https://huggingface.co/settings/token)\n Returns:\n `ConfigNamesResponse`: An object with the list of config names.\n <Tip>\n Raises the following errors:\n - [`~job_runners.config_names.EmptyDatasetError`]\n The dataset is empty.\n - [`~job_runners.config_names.DatasetModuleNotInstalledError`]\n The dataset tries to import a module that is not installed.\n - [`~job_runners.config_names.ConfigNamesError`]\n If the list of configs could not be obtained using the datasets library.\n </Tip>\n \"\"\"\n logging.info(f\"get config names for dataset={dataset}\")\n use_auth_token: Union[bool, str, None] = hf_token if hf_token is not None else False\n # get the list of splits in streaming mode\n try:\n config_name_items: List[ConfigNameItem] = [\n {\"dataset\": dataset, \"config\": str(config)}\n for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))\n ]\n except _EmptyDatasetError as err:\n raise EmptyDatasetError(\"The dataset is empty.\", cause=err) from err\n except ImportError as err:\n raise DatasetModuleNotInstalledError(\n \"The dataset tries to import a module 
that is not installed.\", cause=err\n ) from err\n except Exception as err:\n raise ConfigNamesError(\"Cannot get the config names for the dataset.\", cause=err) from err\n return ConfigNamesResponse(config_names=config_name_items)\n\n\nclass ConfigNamesJobRunner(DatasetsBasedJobRunner):\n @staticmethod\n def get_job_type() -> str:\n return \"/config-names\"\n\n @staticmethod\n def get_job_runner_version() -> int:\n return PROCESSING_STEP_CONFIG_NAMES_VERSION\n\n def compute(self) -> CompleteJobResult:\n if self.dataset is None:\n raise ParameterMissingError(\"'dataset' parameter is required\")\n return CompleteJobResult(\n compute_config_names_response(dataset=self.dataset, hf_token=self.common_config.hf_token)\n )\n\n def get_new_splits(self, content: Mapping[str, Any]) -> set[SplitFullName]:\n \"\"\"Get the set of new splits, from the content created by the compute.\"\"\"\n return {SplitFullName(dataset=s[\"dataset\"], config=s[\"config\"], split=None) for s in content[\"config_names\"]}\n", "path": "services/worker/src/worker/job_runners/config_names.py"}]}
| 1,562 | 524 |
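The fix in this record is a narrow exception-translation pattern: catch `ImportError` before the generic `Exception` handler so a missing dependency surfaces under its own error code instead of the catch-all `ConfigNamesError`. The sketch below shows only that ordering; the error classes are simplified stand-ins (the real ones carry an HTTP status and error code), and the loader is a fake in place of `datasets.get_dataset_config_names`.

```python
class DatasetModuleNotInstalledError(RuntimeError):
    """Raised when a dataset script imports a module that is not installed."""


class ConfigNamesError(RuntimeError):
    """Raised when the config names could not be fetched for any other reason."""


def load_config_names(loader, dataset: str) -> list:
    # the order of the except clauses matters: ImportError must be handled
    # before the generic Exception fallback swallows it
    try:
        return sorted(loader(dataset))
    except ImportError as err:
        raise DatasetModuleNotInstalledError(
            "The dataset tries to import a module that is not installed."
        ) from err
    except Exception as err:
        raise ConfigNamesError("Cannot get the config names for the dataset.") from err


def fake_loader(dataset: str) -> list:
    # stand-in that simulates a dataset script importing a missing library
    raise ImportError("No module named 'some_missing_dependency'")


try:
    load_config_names(fake_loader, "user/dataset")
except DatasetModuleNotInstalledError as err:
    print(type(err).__name__, "caused by", type(err.__cause__).__name__)
    # DatasetModuleNotInstalledError caused by ImportError
```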
gh_patches_debug_19065
|
rasdani/github-patches
|
git_diff
|
Azure__azure-cli-extensions-2850
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Is it possible to query Log Analytics via the az cli with a saved query?
I can’t tell from the documentation, is it possible to run a saved Log Analytics Query from this CLI command?
If not, a useful enhancement would be to enable the use a saved query in addition to the ability to execute queries in-line. The queries get long and cumbersome to maintain outside of Log Analytics.
If it is, however, possible to run a saved query, would you mind updating the documentation here? Thanks.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f0fd6a58-ac1a-fa45-8d96-579b4af36499
* Version Independent ID: 4098ca97-1b85-eb29-18e9-e6f0495fd030
* Content: [az monitor log-analytics](https://docs.microsoft.com/en-us/cli/azure/ext/log-analytics/monitor/log-analytics?view=azure-cli-latest)
* Content Source: [latest/docs-ref-autogen/ext/log-analytics/monitor/log-analytics.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/latest/docs-ref-autogen/ext/log-analytics/monitor/log-analytics.yml)
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/log-analytics/setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # --------------------------------------------------------------------------------------------
4 # Copyright (c) Microsoft Corporation. All rights reserved.
5 # Licensed under the MIT License. See License.txt in the project root for license information.
6 # --------------------------------------------------------------------------------------------
7
8 from codecs import open
9 from setuptools import setup, find_packages
10
11 VERSION = "0.2.1"
12
13 CLASSIFIERS = [
14 'Development Status :: 4 - Beta',
15 'Intended Audience :: Developers',
16 'Intended Audience :: System Administrators',
17 'Programming Language :: Python',
18 'Programming Language :: Python :: 2',
19 'Programming Language :: Python :: 2.7',
20 'Programming Language :: Python :: 3',
21 'Programming Language :: Python :: 3.4',
22 'Programming Language :: Python :: 3.5',
23 'Programming Language :: Python :: 3.6',
24 'License :: OSI Approved :: MIT License',
25 ]
26
27 DEPENDENCIES = []
28
29 with open('README.rst', 'r', encoding='utf-8') as f:
30 README = f.read()
31 with open('HISTORY.rst', 'r', encoding='utf-8') as f:
32 HISTORY = f.read()
33
34 setup(
35 name='log-analytics',
36 version=VERSION,
37 description='Support for Azure Log Analytics query capabilities.',
38 long_description=README + '\n\n' + HISTORY,
39 license='MIT',
40 author='Ace Eldeib',
41 author_email='[email protected]',
42 url='https://github.com/Azure/azure-cli-extensions/tree/master/src/log-analytics',
43 classifiers=CLASSIFIERS,
44 packages=find_packages(exclude=["tests"]),
45 package_data={'azext_loganalytics': ['azext_metadata.json']},
46 install_requires=DEPENDENCIES
47 )
48
```
Path: `src/log-analytics/azext_loganalytics/_help.py`
Content:
```
1 # --------------------------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for license information.
4 # --------------------------------------------------------------------------------------------
5
6 from knack.help_files import helps
7
8 # pylint: disable=line-too-long
9
10 helps['monitor log-analytics'] = """
11 type: group
12 short-summary: Commands for querying data in Log Analytics workspaces.
13 """
14
15 helps['monitor log-analytics query'] = """
16 type: command
17 short-summary: Query a Log Analytics workspace.
18 examples:
19 - name: Execute a simple query over past 3.5 days.
20 text: |
21 az monitor log-analytics query -w b8317023-66e4-4edc-8a5b-7c002b22f92f --analytics-query "AzureActivity | summarize count() by bin(timestamp, 1h)" -t P3DT12H
22 """
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/log-analytics/azext_loganalytics/_help.py b/src/log-analytics/azext_loganalytics/_help.py
--- a/src/log-analytics/azext_loganalytics/_help.py
+++ b/src/log-analytics/azext_loganalytics/_help.py
@@ -18,5 +18,9 @@
examples:
- name: Execute a simple query over past 3.5 days.
text: |
- az monitor log-analytics query -w b8317023-66e4-4edc-8a5b-7c002b22f92f --analytics-query "AzureActivity | summarize count() by bin(timestamp, 1h)" -t P3DT12H
+ az monitor log-analytics query -w workspace-customId --analytics-query "AzureActivity | summarize count() by bin(timestamp, 1h)" -t P3DT12H
+ - name: Execute a saved query in workspace
+ text: |
+ QUERY=$(az monitor log-analytics workspace saved-search show -g resource-group --workspace-name workspace-name -n query-name --query query --output tsv)
+ az monitor log-analytics query -w workspace-customId --analytics-query "$QUERY"
"""
diff --git a/src/log-analytics/setup.py b/src/log-analytics/setup.py
--- a/src/log-analytics/setup.py
+++ b/src/log-analytics/setup.py
@@ -8,7 +8,7 @@
from codecs import open
from setuptools import setup, find_packages
-VERSION = "0.2.1"
+VERSION = "0.2.2"
CLASSIFIERS = [
'Development Status :: 4 - Beta',
|
{"golden_diff": "diff --git a/src/log-analytics/azext_loganalytics/_help.py b/src/log-analytics/azext_loganalytics/_help.py\n--- a/src/log-analytics/azext_loganalytics/_help.py\n+++ b/src/log-analytics/azext_loganalytics/_help.py\n@@ -18,5 +18,9 @@\n examples:\n - name: Execute a simple query over past 3.5 days.\n text: |\n- az monitor log-analytics query -w b8317023-66e4-4edc-8a5b-7c002b22f92f --analytics-query \"AzureActivity | summarize count() by bin(timestamp, 1h)\" -t P3DT12H\n+ az monitor log-analytics query -w workspace-customId --analytics-query \"AzureActivity | summarize count() by bin(timestamp, 1h)\" -t P3DT12H\n+ - name: Execute a saved query in workspace\n+ text: |\n+ QUERY=$(az monitor log-analytics workspace saved-search show -g resource-group --workspace-name workspace-name -n query-name --query query --output tsv)\n+ az monitor log-analytics query -w workspace-customId --analytics-query \"$QUERY\"\n \"\"\"\ndiff --git a/src/log-analytics/setup.py b/src/log-analytics/setup.py\n--- a/src/log-analytics/setup.py\n+++ b/src/log-analytics/setup.py\n@@ -8,7 +8,7 @@\n from codecs import open\n from setuptools import setup, find_packages\n \n-VERSION = \"0.2.1\"\n+VERSION = \"0.2.2\"\n \n CLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n", "issue": "Is it possible to query Log Analytics via the az cli with a saved query?\n\r\nI can\u2019t tell from the documentation, is it possible to run a saved Log Analytics Query from this CLI command? \r\n\r\nIf not, a useful enhancement would be to enable the use a saved query in addition to the ability to execute queries in-line. The queries get long and cumbersome to maintain outside of Log Analytics.\r\n\r\nIf it is, however, possible to run a saved query, would you mind updating the documentation here? Thanks.\r\n\r\n\r\n---\r\n#### Document Details\r\n\r\n\u26a0 *Do not edit this section. It is required for docs.microsoft.com \u279f GitHub issue linking.*\r\n\r\n* ID: f0fd6a58-ac1a-fa45-8d96-579b4af36499\r\n* Version Independent ID: 4098ca97-1b85-eb29-18e9-e6f0495fd030\r\n* Content: [az monitor log-analytics](https://docs.microsoft.com/en-us/cli/azure/ext/log-analytics/monitor/log-analytics?view=azure-cli-latest)\r\n* Content Source: [latest/docs-ref-autogen/ext/log-analytics/monitor/log-analytics.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/latest/docs-ref-autogen/ext/log-analytics/monitor/log-analytics.yml)\r\n* GitHub Login: @rloutlaw\r\n* Microsoft Alias: **routlaw**\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\n\nVERSION = \"0.2.1\"\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'License :: OSI Approved :: MIT License',\n]\n\nDEPENDENCIES = []\n\nwith open('README.rst', 'r', encoding='utf-8') as f:\n README = f.read()\nwith open('HISTORY.rst', 'r', encoding='utf-8') as f:\n HISTORY = f.read()\n\nsetup(\n name='log-analytics',\n version=VERSION,\n description='Support for Azure Log Analytics query capabilities.',\n long_description=README + '\\n\\n' + HISTORY,\n license='MIT',\n author='Ace Eldeib',\n author_email='[email protected]',\n url='https://github.com/Azure/azure-cli-extensions/tree/master/src/log-analytics',\n classifiers=CLASSIFIERS,\n packages=find_packages(exclude=[\"tests\"]),\n package_data={'azext_loganalytics': ['azext_metadata.json']},\n install_requires=DEPENDENCIES\n)\n", "path": "src/log-analytics/setup.py"}, {"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom knack.help_files import helps\n\n# pylint: disable=line-too-long\n\nhelps['monitor log-analytics'] = \"\"\"\n type: group\n short-summary: Commands for querying data in Log Analytics workspaces.\n\"\"\"\n\nhelps['monitor log-analytics query'] = \"\"\"\n type: command\n short-summary: Query a Log Analytics workspace.\n examples:\n - name: Execute a simple query over past 3.5 days.\n text: |\n az monitor log-analytics query -w b8317023-66e4-4edc-8a5b-7c002b22f92f --analytics-query \"AzureActivity | summarize count() by bin(timestamp, 1h)\" -t P3DT12H\n\"\"\"\n", "path": "src/log-analytics/azext_loganalytics/_help.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. 
See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom codecs import open\nfrom setuptools import setup, find_packages\n\nVERSION = \"0.2.2\"\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'License :: OSI Approved :: MIT License',\n]\n\nDEPENDENCIES = []\n\nwith open('README.rst', 'r', encoding='utf-8') as f:\n README = f.read()\nwith open('HISTORY.rst', 'r', encoding='utf-8') as f:\n HISTORY = f.read()\n\nsetup(\n name='log-analytics',\n version=VERSION,\n description='Support for Azure Log Analytics query capabilities.',\n long_description=README + '\\n\\n' + HISTORY,\n license='MIT',\n author='Ace Eldeib',\n author_email='[email protected]',\n url='https://github.com/Azure/azure-cli-extensions/tree/master/src/log-analytics',\n classifiers=CLASSIFIERS,\n packages=find_packages(exclude=[\"tests\"]),\n package_data={'azext_loganalytics': ['azext_metadata.json']},\n install_requires=DEPENDENCIES\n)\n", "path": "src/log-analytics/setup.py"}, {"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom knack.help_files import helps\n\n# pylint: disable=line-too-long\n\nhelps['monitor log-analytics'] = \"\"\"\n type: group\n short-summary: Commands for querying data in Log Analytics workspaces.\n\"\"\"\n\nhelps['monitor log-analytics query'] = \"\"\"\n type: command\n short-summary: Query a Log Analytics workspace.\n examples:\n - name: Execute a simple query over past 3.5 days.\n text: |\n az monitor log-analytics query -w workspace-customId --analytics-query \"AzureActivity | summarize count() by bin(timestamp, 1h)\" -t P3DT12H\n - name: Execute a saved query in workspace\n text: |\n QUERY=$(az monitor log-analytics workspace saved-search show -g resource-group --workspace-name workspace-name -n query-name --query query --output tsv)\n az monitor log-analytics query -w workspace-customId --analytics-query \"$QUERY\"\n\"\"\"\n", "path": "src/log-analytics/azext_loganalytics/_help.py"}]}
| 1,284 | 361 |
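The help example added in this record answers the original question in two steps: read the saved query text with `az monitor log-analytics workspace saved-search show ... --query query --output tsv`, then pass it to `az monitor log-analytics query --analytics-query`. One way to script that round trip is sketched below; the two az commands mirror the help example, while the resource group, workspace and saved-search names are placeholders.

```python
import subprocess

# placeholder names; substitute your own values
RESOURCE_GROUP = "my-resource-group"
WORKSPACE_NAME = "my-workspace"
WORKSPACE_CUSTOMER_ID = "workspace-customId"
SAVED_SEARCH_NAME = "query-name"

# step 1: read the saved query text out of the workspace
saved_query = subprocess.run(
    [
        "az", "monitor", "log-analytics", "workspace", "saved-search", "show",
        "-g", RESOURCE_GROUP,
        "--workspace-name", WORKSPACE_NAME,
        "-n", SAVED_SEARCH_NAME,
        "--query", "query",
        "--output", "tsv",
    ],
    capture_output=True, text=True, check=True,
).stdout.strip()

# step 2: execute the saved query against the workspace customer id
result = subprocess.run(
    [
        "az", "monitor", "log-analytics", "query",
        "-w", WORKSPACE_CUSTOMER_ID,
        "--analytics-query", saved_query,
    ],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```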
gh_patches_debug_36638
|
rasdani/github-patches
|
git_diff
|
zigpy__zha-device-handlers-1479
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Aqara plug (lumi.plug.maeu01) generates errors post-2022.2
**Describe the bug**
I use this plug with HA 2022.2.3, where it's been updated to use the quirk for lumi.plug.mmeu01 after [this pull](https://github.com/zigpy/zha-device-handlers/pull/1252/commits).
There are errors popping up in the log after this update.
```
Logger: homeassistant.util.logging
Source: util/logging.py:105
First occurred: 4:34:56 PM (16 occurrences)
Last logged: 4:55:26 PM
Exception in async_state_changed when dispatching 'LUMI lumi.plug.maeu01_54:ef:44:10:00:0e:52:9d_available_entity': () Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/zha/entity.py", line 107, in async_state_changed self.async_write_ha_state()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 530, in async_write_ha_state self._async_write_ha_state()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 563, in _async_write_ha_state state = self._stringify_state()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 536, in _stringify_state if (state := self.state) is None:
File "/usr/src/homeassistant/homeassistant/components/sensor/__init__.py", line 371, in state value = self.native_value
File "/usr/src/homeassistant/homeassistant/components/zha/sensor.py", line 175, in native_value return self.formatter(raw_state)
File "/usr/src/homeassistant/homeassistant/components/zha/sensor.py", line 472, in formatter return self._channel.summa_formatter(value)
File "/usr/src/homeassistant/homeassistant/components/zha/core/channels/smartenergy.py", line 196, in _formatter_function return self._summa_format.format(value).lstrip() AttributeError: 'NoneType' object has no attribute 'format'
```
**To Reproduce**
Steps to reproduce the behavior: unknown
**Additional context**
```
{
"node_descriptor": "NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4447, maximum_buffer_size=127, maximum_incoming_transfer_size=100, server_mask=11264, maximum_outgoing_transfer_size=100, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)",
"endpoints": {
"1": {
"profile_id": 260,
"device_type": "0x0051",
"in_clusters": [
"0x0000",
"0x0002",
"0x0003",
"0x0004",
"0x0005",
"0x0006",
"0x0009",
"0x0702",
"0x0b04"
],
"out_clusters": [
"0x000a",
"0x0019"
]
},
"21": {
"profile_id": 260,
"device_type": "0x0009",
"in_clusters": [
"0x000c"
],
"out_clusters": [
"0x0004",
"0x000c"
]
},
"242": {
"profile_id": 41440,
"device_type": "0x0061",
"in_clusters": [],
"out_clusters": [
"0x0021"
]
}
},
"manufacturer": "LUMI",
"model": "lumi.plug.maeu01",
"class": "zhaquirks.xiaomi.aqara.plug_mmeu01.Plug"
}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zhaquirks/xiaomi/aqara/plug_mmeu01.py`
Content:
```
1 """Xiaomi lumi.plug.mmeu01 plug."""
2 import logging
3
4 from zigpy.profiles import zha
5 from zigpy.zcl.clusters.general import (
6 Alarms,
7 AnalogInput,
8 Basic,
9 DeviceTemperature,
10 GreenPowerProxy,
11 Groups,
12 Identify,
13 OnOff,
14 Ota,
15 Scenes,
16 Time,
17 )
18 from zigpy.zcl.clusters.homeautomation import ElectricalMeasurement
19 from zigpy.zcl.clusters.smartenergy import Metering
20
21 from zhaquirks import Bus
22 from zhaquirks.const import (
23 DEVICE_TYPE,
24 ENDPOINTS,
25 INPUT_CLUSTERS,
26 MODELS_INFO,
27 OUTPUT_CLUSTERS,
28 PROFILE_ID,
29 SKIP_CONFIGURATION,
30 )
31 from zhaquirks.xiaomi import (
32 LUMI,
33 AnalogInputCluster,
34 BasicCluster,
35 ElectricalMeasurementCluster,
36 XiaomiCustomDevice,
37 )
38
39 _LOGGER = logging.getLogger(__name__)
40
41 XIAOMI_PROFILE_ID = 0xA1E0
42 XIAOMI_DEVICE_TYPE = 0x61
43
44
45 class Plug(XiaomiCustomDevice):
46 """lumi.plug.mmeu01 plug."""
47
48 def __init__(self, *args, **kwargs):
49 """Init."""
50 self.voltage_bus = Bus()
51 self.consumption_bus = Bus()
52 self.power_bus = Bus()
53 super().__init__(*args, **kwargs)
54
55 signature = {
56 MODELS_INFO: [
57 (LUMI, "lumi.plug.mmeu01"),
58 (LUMI, "lumi.plug.maeu01"),
59 ],
60 ENDPOINTS: {
61 # <SimpleDescriptor endpoint=1 profile=260 device_type=81
62 # device_version=1
63 # input_clusters=[0, 2, 3, 4, 5, 6, 9, 1794, 2820]
64 # output_clusters=[10, 25]>
65 1: {
66 PROFILE_ID: zha.PROFILE_ID,
67 DEVICE_TYPE: zha.DeviceType.SMART_PLUG,
68 INPUT_CLUSTERS: [
69 Basic.cluster_id,
70 DeviceTemperature.cluster_id,
71 Identify.cluster_id,
72 Groups.cluster_id,
73 Scenes.cluster_id,
74 OnOff.cluster_id,
75 Alarms.cluster_id,
76 Metering.cluster_id,
77 ElectricalMeasurement.cluster_id,
78 ],
79 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
80 },
81 # <SimpleDescriptor endpoint=242 profile=41440 device_type=97
82 # device_version=0
83 # input_clusters=[]
84 # output_clusters=[33]>
85 242: {
86 PROFILE_ID: XIAOMI_PROFILE_ID,
87 DEVICE_TYPE: XIAOMI_DEVICE_TYPE,
88 OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],
89 },
90 },
91 }
92 replacement = {
93 SKIP_CONFIGURATION: True,
94 ENDPOINTS: {
95 1: {
96 PROFILE_ID: zha.PROFILE_ID,
97 DEVICE_TYPE: zha.DeviceType.SMART_PLUG,
98 INPUT_CLUSTERS: [
99 BasicCluster,
100 DeviceTemperature.cluster_id,
101 Identify.cluster_id,
102 Groups.cluster_id,
103 Scenes.cluster_id,
104 OnOff.cluster_id,
105 Alarms.cluster_id,
106 Metering.cluster_id,
107 ElectricalMeasurementCluster,
108 ],
109 OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
110 },
111 21: {
112 PROFILE_ID: zha.PROFILE_ID,
113 DEVICE_TYPE: zha.DeviceType.MAIN_POWER_OUTLET,
114 INPUT_CLUSTERS: [AnalogInputCluster],
115 OUTPUT_CLUSTERS: [AnalogInput.cluster_id, Groups.cluster_id],
116 },
117 242: {
118 PROFILE_ID: XIAOMI_PROFILE_ID,
119 DEVICE_TYPE: XIAOMI_DEVICE_TYPE,
120 OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],
121 },
122 },
123 }
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zhaquirks/xiaomi/aqara/plug_mmeu01.py b/zhaquirks/xiaomi/aqara/plug_mmeu01.py
--- a/zhaquirks/xiaomi/aqara/plug_mmeu01.py
+++ b/zhaquirks/xiaomi/aqara/plug_mmeu01.py
@@ -2,6 +2,7 @@
import logging
from zigpy.profiles import zha
+import zigpy.types as types
from zigpy.zcl.clusters.general import (
Alarms,
AnalogInput,
@@ -33,6 +34,7 @@
AnalogInputCluster,
BasicCluster,
ElectricalMeasurementCluster,
+ XiaomiAqaraE1Cluster,
XiaomiCustomDevice,
)
@@ -40,6 +42,7 @@
XIAOMI_PROFILE_ID = 0xA1E0
XIAOMI_DEVICE_TYPE = 0x61
+OPPLE_MFG_CODE = 0x115F
class Plug(XiaomiCustomDevice):
@@ -55,7 +58,6 @@
signature = {
MODELS_INFO: [
(LUMI, "lumi.plug.mmeu01"),
- (LUMI, "lumi.plug.maeu01"),
],
ENDPOINTS: {
# <SimpleDescriptor endpoint=1 profile=260 device_type=81
@@ -121,3 +123,58 @@
},
},
}
+
+
+class OppleCluster(XiaomiAqaraE1Cluster):
+ """Opple cluster."""
+
+ ep_attribute = "opple_cluster"
+ attributes = {
+ 0x0009: ("mode", types.uint8_t, True),
+ }
+ attr_config = {0x0009: 0x00}
+
+ async def bind(self):
+ """Bind cluster."""
+ result = await super().bind()
+ await self.write_attributes(self.attr_config, manufacturer=OPPLE_MFG_CODE)
+ return result
+
+
+class PlugMAEU01(Plug):
+ """lumi.plug.maeu01 plug."""
+
+ signature = {
+ MODELS_INFO: [
+ (LUMI, "lumi.plug.maeu01"),
+ ],
+ ENDPOINTS: Plug.signature[ENDPOINTS],
+ }
+
+ replacement = {
+ SKIP_CONFIGURATION: False,
+ ENDPOINTS: {
+ 1: {
+ PROFILE_ID: zha.PROFILE_ID,
+ DEVICE_TYPE: zha.DeviceType.SMART_PLUG,
+ INPUT_CLUSTERS: [
+ Basic.cluster_id,
+ DeviceTemperature.cluster_id,
+ Identify.cluster_id,
+ Groups.cluster_id,
+ Scenes.cluster_id,
+ OnOff.cluster_id,
+ Alarms.cluster_id,
+ Metering.cluster_id,
+ ElectricalMeasurement.cluster_id,
+ OppleCluster,
+ ],
+ OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],
+ },
+ 242: {
+ PROFILE_ID: XIAOMI_PROFILE_ID,
+ DEVICE_TYPE: XIAOMI_DEVICE_TYPE,
+ OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],
+ },
+ },
+ }
|
{"golden_diff": "diff --git a/zhaquirks/xiaomi/aqara/plug_mmeu01.py b/zhaquirks/xiaomi/aqara/plug_mmeu01.py\n--- a/zhaquirks/xiaomi/aqara/plug_mmeu01.py\n+++ b/zhaquirks/xiaomi/aqara/plug_mmeu01.py\n@@ -2,6 +2,7 @@\n import logging\n \n from zigpy.profiles import zha\n+import zigpy.types as types\n from zigpy.zcl.clusters.general import (\n Alarms,\n AnalogInput,\n@@ -33,6 +34,7 @@\n AnalogInputCluster,\n BasicCluster,\n ElectricalMeasurementCluster,\n+ XiaomiAqaraE1Cluster,\n XiaomiCustomDevice,\n )\n \n@@ -40,6 +42,7 @@\n \n XIAOMI_PROFILE_ID = 0xA1E0\n XIAOMI_DEVICE_TYPE = 0x61\n+OPPLE_MFG_CODE = 0x115F\n \n \n class Plug(XiaomiCustomDevice):\n@@ -55,7 +58,6 @@\n signature = {\n MODELS_INFO: [\n (LUMI, \"lumi.plug.mmeu01\"),\n- (LUMI, \"lumi.plug.maeu01\"),\n ],\n ENDPOINTS: {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=81\n@@ -121,3 +123,58 @@\n },\n },\n }\n+\n+\n+class OppleCluster(XiaomiAqaraE1Cluster):\n+ \"\"\"Opple cluster.\"\"\"\n+\n+ ep_attribute = \"opple_cluster\"\n+ attributes = {\n+ 0x0009: (\"mode\", types.uint8_t, True),\n+ }\n+ attr_config = {0x0009: 0x00}\n+\n+ async def bind(self):\n+ \"\"\"Bind cluster.\"\"\"\n+ result = await super().bind()\n+ await self.write_attributes(self.attr_config, manufacturer=OPPLE_MFG_CODE)\n+ return result\n+\n+\n+class PlugMAEU01(Plug):\n+ \"\"\"lumi.plug.maeu01 plug.\"\"\"\n+\n+ signature = {\n+ MODELS_INFO: [\n+ (LUMI, \"lumi.plug.maeu01\"),\n+ ],\n+ ENDPOINTS: Plug.signature[ENDPOINTS],\n+ }\n+\n+ replacement = {\n+ SKIP_CONFIGURATION: False,\n+ ENDPOINTS: {\n+ 1: {\n+ PROFILE_ID: zha.PROFILE_ID,\n+ DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n+ INPUT_CLUSTERS: [\n+ Basic.cluster_id,\n+ DeviceTemperature.cluster_id,\n+ Identify.cluster_id,\n+ Groups.cluster_id,\n+ Scenes.cluster_id,\n+ OnOff.cluster_id,\n+ Alarms.cluster_id,\n+ Metering.cluster_id,\n+ ElectricalMeasurement.cluster_id,\n+ OppleCluster,\n+ ],\n+ OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n+ },\n+ 242: {\n+ PROFILE_ID: XIAOMI_PROFILE_ID,\n+ DEVICE_TYPE: XIAOMI_DEVICE_TYPE,\n+ OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n+ },\n+ },\n+ }\n", "issue": "[BUG] Aqara plug (lumi.plug.maeu01) generates errors post-2022.2\n**Describe the bug**\r\nI use this plug with HA 2022.2.3, where it's been updated to use the quirk for lumi.plug.mmeu01 after [this pull](https://github.com/zigpy/zha-device-handlers/pull/1252/commits).\r\n\r\nThere are errors popping up in the log after this update.\r\n\r\n```\r\nLogger: homeassistant.util.logging\r\nSource: util/logging.py:105\r\nFirst occurred: 4:34:56 PM (16 occurrences)\r\nLast logged: 4:55:26 PM\r\n\r\nException in async_state_changed when dispatching 'LUMI lumi.plug.maeu01_54:ef:44:10:00:0e:52:9d_available_entity': () Traceback (most recent call last): \r\n File \"/usr/src/homeassistant/homeassistant/components/zha/entity.py\", line 107, in async_state_changed self.async_write_ha_state() \r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 530, in async_write_ha_state self._async_write_ha_state() \r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 563, in _async_write_ha_state state = self._stringify_state() \r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 536, in _stringify_state if (state := self.state) is None: \r\n File \"/usr/src/homeassistant/homeassistant/components/sensor/__init__.py\", line 371, in state value = self.native_value \r\n File \"/usr/src/homeassistant/homeassistant/components/zha/sensor.py\", line 175, in native_value return 
self.formatter(raw_state) \r\n File \"/usr/src/homeassistant/homeassistant/components/zha/sensor.py\", line 472, in formatter return self._channel.summa_formatter(value) \r\n File \"/usr/src/homeassistant/homeassistant/components/zha/core/channels/smartenergy.py\", line 196, in _formatter_function return self._summa_format.format(value).lstrip() AttributeError: 'NoneType' object has no attribute 'format'\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior: unknown\r\n\r\n**Additional context**\r\n```\r\n{\r\n \"node_descriptor\": \"NodeDescriptor(logical_type=<LogicalType.Router: 1>, complex_descriptor_available=0, user_descriptor_available=0, reserved=0, aps_flags=0, frequency_band=<FrequencyBand.Freq2400MHz: 8>, mac_capability_flags=<MACCapabilityFlags.AllocateAddress|RxOnWhenIdle|MainsPowered|FullFunctionDevice: 142>, manufacturer_code=4447, maximum_buffer_size=127, maximum_incoming_transfer_size=100, server_mask=11264, maximum_outgoing_transfer_size=100, descriptor_capability_field=<DescriptorCapability.NONE: 0>, *allocate_address=True, *is_alternate_pan_coordinator=False, *is_coordinator=False, *is_end_device=False, *is_full_function_device=True, *is_mains_powered=True, *is_receiver_on_when_idle=True, *is_router=True, *is_security_capable=False)\",\r\n \"endpoints\": {\r\n \"1\": {\r\n \"profile_id\": 260,\r\n \"device_type\": \"0x0051\",\r\n \"in_clusters\": [\r\n \"0x0000\",\r\n \"0x0002\",\r\n \"0x0003\",\r\n \"0x0004\",\r\n \"0x0005\",\r\n \"0x0006\",\r\n \"0x0009\",\r\n \"0x0702\",\r\n \"0x0b04\"\r\n ],\r\n \"out_clusters\": [\r\n \"0x000a\",\r\n \"0x0019\"\r\n ]\r\n },\r\n \"21\": {\r\n \"profile_id\": 260,\r\n \"device_type\": \"0x0009\",\r\n \"in_clusters\": [\r\n \"0x000c\"\r\n ],\r\n \"out_clusters\": [\r\n \"0x0004\",\r\n \"0x000c\"\r\n ]\r\n },\r\n \"242\": {\r\n \"profile_id\": 41440,\r\n \"device_type\": \"0x0061\",\r\n \"in_clusters\": [],\r\n \"out_clusters\": [\r\n \"0x0021\"\r\n ]\r\n }\r\n },\r\n \"manufacturer\": \"LUMI\",\r\n \"model\": \"lumi.plug.maeu01\",\r\n \"class\": \"zhaquirks.xiaomi.aqara.plug_mmeu01.Plug\"\r\n}\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Xiaomi lumi.plug.mmeu01 plug.\"\"\"\nimport logging\n\nfrom zigpy.profiles import zha\nfrom zigpy.zcl.clusters.general import (\n Alarms,\n AnalogInput,\n Basic,\n DeviceTemperature,\n GreenPowerProxy,\n Groups,\n Identify,\n OnOff,\n Ota,\n Scenes,\n Time,\n)\nfrom zigpy.zcl.clusters.homeautomation import ElectricalMeasurement\nfrom zigpy.zcl.clusters.smartenergy import Metering\n\nfrom zhaquirks import Bus\nfrom zhaquirks.const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n SKIP_CONFIGURATION,\n)\nfrom zhaquirks.xiaomi import (\n LUMI,\n AnalogInputCluster,\n BasicCluster,\n ElectricalMeasurementCluster,\n XiaomiCustomDevice,\n)\n\n_LOGGER = logging.getLogger(__name__)\n\nXIAOMI_PROFILE_ID = 0xA1E0\nXIAOMI_DEVICE_TYPE = 0x61\n\n\nclass Plug(XiaomiCustomDevice):\n \"\"\"lumi.plug.mmeu01 plug.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Init.\"\"\"\n self.voltage_bus = Bus()\n self.consumption_bus = Bus()\n self.power_bus = Bus()\n super().__init__(*args, **kwargs)\n\n signature = {\n MODELS_INFO: [\n (LUMI, \"lumi.plug.mmeu01\"),\n (LUMI, \"lumi.plug.maeu01\"),\n ],\n ENDPOINTS: {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=81\n # device_version=1\n # input_clusters=[0, 2, 3, 4, 5, 6, 9, 1794, 2820]\n # output_clusters=[10, 25]>\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: 
zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n DeviceTemperature.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n OnOff.cluster_id,\n Alarms.cluster_id,\n Metering.cluster_id,\n ElectricalMeasurement.cluster_id,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n },\n # <SimpleDescriptor endpoint=242 profile=41440 device_type=97\n # device_version=0\n # input_clusters=[]\n # output_clusters=[33]>\n 242: {\n PROFILE_ID: XIAOMI_PROFILE_ID,\n DEVICE_TYPE: XIAOMI_DEVICE_TYPE,\n OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n },\n },\n }\n replacement = {\n SKIP_CONFIGURATION: True,\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n BasicCluster,\n DeviceTemperature.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n OnOff.cluster_id,\n Alarms.cluster_id,\n Metering.cluster_id,\n ElectricalMeasurementCluster,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n },\n 21: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.MAIN_POWER_OUTLET,\n INPUT_CLUSTERS: [AnalogInputCluster],\n OUTPUT_CLUSTERS: [AnalogInput.cluster_id, Groups.cluster_id],\n },\n 242: {\n PROFILE_ID: XIAOMI_PROFILE_ID,\n DEVICE_TYPE: XIAOMI_DEVICE_TYPE,\n OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n },\n },\n }\n", "path": "zhaquirks/xiaomi/aqara/plug_mmeu01.py"}], "after_files": [{"content": "\"\"\"Xiaomi lumi.plug.mmeu01 plug.\"\"\"\nimport logging\n\nfrom zigpy.profiles import zha\nimport zigpy.types as types\nfrom zigpy.zcl.clusters.general import (\n Alarms,\n AnalogInput,\n Basic,\n DeviceTemperature,\n GreenPowerProxy,\n Groups,\n Identify,\n OnOff,\n Ota,\n Scenes,\n Time,\n)\nfrom zigpy.zcl.clusters.homeautomation import ElectricalMeasurement\nfrom zigpy.zcl.clusters.smartenergy import Metering\n\nfrom zhaquirks import Bus\nfrom zhaquirks.const import (\n DEVICE_TYPE,\n ENDPOINTS,\n INPUT_CLUSTERS,\n MODELS_INFO,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n SKIP_CONFIGURATION,\n)\nfrom zhaquirks.xiaomi import (\n LUMI,\n AnalogInputCluster,\n BasicCluster,\n ElectricalMeasurementCluster,\n XiaomiAqaraE1Cluster,\n XiaomiCustomDevice,\n)\n\n_LOGGER = logging.getLogger(__name__)\n\nXIAOMI_PROFILE_ID = 0xA1E0\nXIAOMI_DEVICE_TYPE = 0x61\nOPPLE_MFG_CODE = 0x115F\n\n\nclass Plug(XiaomiCustomDevice):\n \"\"\"lumi.plug.mmeu01 plug.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Init.\"\"\"\n self.voltage_bus = Bus()\n self.consumption_bus = Bus()\n self.power_bus = Bus()\n super().__init__(*args, **kwargs)\n\n signature = {\n MODELS_INFO: [\n (LUMI, \"lumi.plug.mmeu01\"),\n ],\n ENDPOINTS: {\n # <SimpleDescriptor endpoint=1 profile=260 device_type=81\n # device_version=1\n # input_clusters=[0, 2, 3, 4, 5, 6, 9, 1794, 2820]\n # output_clusters=[10, 25]>\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n DeviceTemperature.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n OnOff.cluster_id,\n Alarms.cluster_id,\n Metering.cluster_id,\n ElectricalMeasurement.cluster_id,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n },\n # <SimpleDescriptor endpoint=242 profile=41440 device_type=97\n # device_version=0\n # input_clusters=[]\n # output_clusters=[33]>\n 242: {\n PROFILE_ID: XIAOMI_PROFILE_ID,\n DEVICE_TYPE: XIAOMI_DEVICE_TYPE,\n OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n },\n },\n }\n replacement = {\n SKIP_CONFIGURATION: True,\n ENDPOINTS: {\n 
1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n BasicCluster,\n DeviceTemperature.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n OnOff.cluster_id,\n Alarms.cluster_id,\n Metering.cluster_id,\n ElectricalMeasurementCluster,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n },\n 21: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.MAIN_POWER_OUTLET,\n INPUT_CLUSTERS: [AnalogInputCluster],\n OUTPUT_CLUSTERS: [AnalogInput.cluster_id, Groups.cluster_id],\n },\n 242: {\n PROFILE_ID: XIAOMI_PROFILE_ID,\n DEVICE_TYPE: XIAOMI_DEVICE_TYPE,\n OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n },\n },\n }\n\n\nclass OppleCluster(XiaomiAqaraE1Cluster):\n \"\"\"Opple cluster.\"\"\"\n\n ep_attribute = \"opple_cluster\"\n attributes = {\n 0x0009: (\"mode\", types.uint8_t, True),\n }\n attr_config = {0x0009: 0x00}\n\n async def bind(self):\n \"\"\"Bind cluster.\"\"\"\n result = await super().bind()\n await self.write_attributes(self.attr_config, manufacturer=OPPLE_MFG_CODE)\n return result\n\n\nclass PlugMAEU01(Plug):\n \"\"\"lumi.plug.maeu01 plug.\"\"\"\n\n signature = {\n MODELS_INFO: [\n (LUMI, \"lumi.plug.maeu01\"),\n ],\n ENDPOINTS: Plug.signature[ENDPOINTS],\n }\n\n replacement = {\n SKIP_CONFIGURATION: False,\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SMART_PLUG,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n DeviceTemperature.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n Scenes.cluster_id,\n OnOff.cluster_id,\n Alarms.cluster_id,\n Metering.cluster_id,\n ElectricalMeasurement.cluster_id,\n OppleCluster,\n ],\n OUTPUT_CLUSTERS: [Time.cluster_id, Ota.cluster_id],\n },\n 242: {\n PROFILE_ID: XIAOMI_PROFILE_ID,\n DEVICE_TYPE: XIAOMI_DEVICE_TYPE,\n OUTPUT_CLUSTERS: [GreenPowerProxy.cluster_id],\n },\n },\n }\n", "path": "zhaquirks/xiaomi/aqara/plug_mmeu01.py"}]}
| 2,456 | 746 |
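For readability: the verification_info JSON above stores this record's golden diff only as an escaped string. The core of that change, the manufacturer-specific OppleCluster added for the new lumi.plug.maeu01 quirk, is reproduced below as plain Python. It is copied from the diff rather than written independently; only the imports were added to make it self-contained.

```python
# Recap of the OppleCluster from the golden diff (copied from the diff above).
import zigpy.types as types

from zhaquirks.xiaomi import XiaomiAqaraE1Cluster

OPPLE_MFG_CODE = 0x115F  # Xiaomi manufacturer code used for the attribute write


class OppleCluster(XiaomiAqaraE1Cluster):
    """Opple cluster; writes the vendor "mode" attribute when the cluster binds."""

    ep_attribute = "opple_cluster"
    attributes = {
        0x0009: ("mode", types.uint8_t, True),
    }
    attr_config = {0x0009: 0x00}

    async def bind(self):
        """Bind the cluster, then push the manufacturer-specific attribute config."""
        result = await super().bind()
        await self.write_attributes(self.attr_config, manufacturer=OPPLE_MFG_CODE)
        return result
```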
gh_patches_debug_18209
|
rasdani/github-patches
|
git_diff
|
paperless-ngx__paperless-ngx-4334
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] HTTP 500 in Tika for some OpenDocument Documents
### Description
When recreating my local paperless instance, I had to rerun the document archival and thumbnail jobs for all documents.
Those jobs failed for all OpenDocument documents from my originals.
I noticed this is related to the latest changes in paperless_tika/parsers.py; after testing with the parsers.py from 1.15, everything worked.
I am already working on further investigation on my fork and will start the corresponding pull request.
### Steps to reproduce
upload ODT document -> parsing fails
### Webserver logs
```bash
---TIKA---
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:190)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:191)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:516)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.io.IOException: Stream Closed
at java.base/java.io.FileInputStream.available0(Native Method)
at java.base/java.io.FileInputStream.available(FileInputStream.java:415)
at org.apache.cxf.attachment.DelegatingInputStream.available(DelegatingInputStream.java:75)
at org.apache.cxf.helpers.IOUtils.consume(IOUtils.java:382)
at org.apache.cxf.attachment.DelegatingInputStream.close(DelegatingInputStream.java:46)
at org.apache.tika.server.core.resource.TikaResource.parse(TikaResource.java:374)
at org.apache.tika.server.core.resource.TikaResource.parseToMetadata(TikaResource.java:611)
at org.apache.tika.server.core.resource.TikaResource.getJsonFromMultipart(TikaResource.java:564)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179)
at org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)
... 28 more
---PAPERLESS---
[2023-07-06 17:57:46,034] [INFO] [paperless.consumer] Consuming 2023-03-28 Johannes Gutenberg-Universität Mainz Datenbank Praktikum Gruppe 3.odt
[2023-07-06 17:57:46,112] [INFO] [paperless.parsing.tika] Sending /tmp/paperless/paperless-ngxzed2cbo7/2023-03-28 Johannes Gutenberg-Universität Mainz Datenbank Praktikum Gruppe 3.odt to Tika server
[2023-07-06 17:57:47,289] [ERROR] [paperless.consumer] Error while consuming document 2023-03-28 Johannes Gutenberg-Universität Mainz Datenbank Praktikum Gruppe 3.odt: Could not parse /tmp/paperless/paperless-ngxzed2cbo7/2023-03-28 Johannes Gutenberg-Universität Mainz Datenbank Praktikum Gruppe 3.odt with tika server at http://localhost:9998: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'
For more information check: https://httpstatuses.com/500
Traceback (most recent call last):
File "/usr/src/paperless/src/paperless_tika/parsers.py", line 54, in parse
parsed = client.tika.as_text.from_file(document_path, mime_type)
File "/usr/local/lib/python3.9/site-packages/tika_client/_resource_tika.py", line 36, in from_file
return self._decoded_response(self._put_multipart(self.MULTI_PART_PLAIN_TEXT_CONTENT, filepath, mime_type))
File "/usr/local/lib/python3.9/site-packages/tika_client/_utils.py", line 46, in _put_multipart
resp.raise_for_status()
File "/usr/local/lib/python3.9/site-packages/httpx/_models.py", line 749, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'
For more information check: https://httpstatuses.com/500
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/consumer.py", line 382, in try_consume_file
document_parser.parse(self.path, mime_type, self.filename)
File "/usr/src/paperless/src/paperless_tika/parsers.py", line 56, in parse
raise ParseError(
documents.parsers.ParseError: Could not parse /tmp/paperless/paperless-ngxzed2cbo7/2023-03-28 Johannes Gutenberg-Universität Mainz Datenbank Praktikum Gruppe 3.odt with tika server at http://localhost:9998: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'
For more information check: https://httpstatuses.com/500
[2023-07-06 17:57:47,309] [ERROR] [celery.app.trace] Task documents.tasks.consume_file[1a0c8479-65a4-4de7-a431-29ecb537a030] raised unexpected: ConsumerError("2023-03-28 Johannes Gutenberg-Universität Mainz Datenbank Praktikum Gruppe 3.odt: Error while consuming document 2023-03-28 Johannes Gutenberg-Universität Mainz Datenbank Praktikum Gruppe 3.odt: Could not parse /tmp/paperless/paperless-ngxzed2cbo7/2023-03-28 Johannes Gutenberg-Universität Mainz Datenbank Praktikum Gruppe 3.odt with tika server at http://localhost:9998: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'\nFor more information check: https://httpstatuses.com/500")
Traceback (most recent call last):
File "/usr/src/paperless/src/paperless_tika/parsers.py", line 54, in parse
parsed = client.tika.as_text.from_file(document_path, mime_type)
File "/usr/local/lib/python3.9/site-packages/tika_client/_resource_tika.py", line 36, in from_file
return self._decoded_response(self._put_multipart(self.MULTI_PART_PLAIN_TEXT_CONTENT, filepath, mime_type))
File "/usr/local/lib/python3.9/site-packages/tika_client/_utils.py", line 46, in _put_multipart
resp.raise_for_status()
File "/usr/local/lib/python3.9/site-packages/httpx/_models.py", line 749, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'
For more information check: https://httpstatuses.com/500
```
### Browser logs
_No response_
### Paperless-ngx version
1.16.5
### Host OS
Truenas
### Installation method
Docker - official image
### Browser
_No response_
### Configuration changes
_No response_
### Other
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/paperless_tika/parsers.py`
Content:
```
1 import os
2 from pathlib import Path
3
4 import httpx
5 from django.conf import settings
6 from django.utils import timezone
7 from tika_client import TikaClient
8
9 from documents.parsers import DocumentParser
10 from documents.parsers import ParseError
11 from documents.parsers import make_thumbnail_from_pdf
12
13
14 class TikaDocumentParser(DocumentParser):
15 """
16 This parser sends documents to a local tika server
17 """
18
19 logging_name = "paperless.parsing.tika"
20
21 def get_thumbnail(self, document_path, mime_type, file_name=None):
22 if not self.archive_path:
23 self.archive_path = self.convert_to_pdf(document_path, file_name)
24
25 return make_thumbnail_from_pdf(
26 self.archive_path,
27 self.tempdir,
28 self.logging_group,
29 )
30
31 def extract_metadata(self, document_path, mime_type):
32 try:
33 with TikaClient(tika_url=settings.TIKA_ENDPOINT) as client:
34 parsed = client.metadata.from_file(document_path, mime_type)
35 return [
36 {
37 "namespace": "",
38 "prefix": "",
39 "key": key,
40 "value": parsed.data[key],
41 }
42 for key in parsed.data
43 ]
44 except Exception as e:
45 self.log.warning(
46 f"Error while fetching document metadata for {document_path}: {e}",
47 )
48 return []
49
50 def parse(self, document_path: Path, mime_type: str, file_name=None):
51 self.log.info(f"Sending {document_path} to Tika server")
52
53 try:
54 with TikaClient(tika_url=settings.TIKA_ENDPOINT) as client:
55 parsed = client.tika.as_text.from_file(document_path, mime_type)
56 except Exception as err:
57 raise ParseError(
58 f"Could not parse {document_path} with tika server at "
59 f"{settings.TIKA_ENDPOINT}: {err}",
60 ) from err
61
62 self.text = parsed.content
63 if self.text is not None:
64 self.text = self.text.strip()
65
66 self.date = parsed.created
67 if self.date is not None and timezone.is_naive(self.date):
68 self.date = timezone.make_aware(self.date)
69
70 self.archive_path = self.convert_to_pdf(document_path, file_name)
71
72 def convert_to_pdf(self, document_path, file_name):
73 pdf_path = os.path.join(self.tempdir, "convert.pdf")
74 gotenberg_server = settings.TIKA_GOTENBERG_ENDPOINT
75 url = gotenberg_server + "/forms/libreoffice/convert"
76
77 self.log.info(f"Converting {document_path} to PDF as {pdf_path}")
78 with open(document_path, "rb") as document_handle:
79 files = {
80 "files": (
81 "convert" + os.path.splitext(document_path)[-1],
82 document_handle,
83 ),
84 }
85 headers = {}
86 data = {}
87
88 # Set the output format of the resulting PDF
89 # Valid inputs: https://gotenberg.dev/docs/modules/pdf-engines#uno
90 if settings.OCR_OUTPUT_TYPE in {"pdfa", "pdfa-2"}:
91 data["pdfFormat"] = "PDF/A-2b"
92 elif settings.OCR_OUTPUT_TYPE == "pdfa-1":
93 data["pdfFormat"] = "PDF/A-1a"
94 elif settings.OCR_OUTPUT_TYPE == "pdfa-3":
95 data["pdfFormat"] = "PDF/A-3b"
96
97 try:
98 response = httpx.post(
99 url,
100 files=files,
101 headers=headers,
102 data=data,
103 timeout=settings.CELERY_TASK_TIME_LIMIT,
104 )
105 response.raise_for_status() # ensure we notice bad responses
106 except Exception as err:
107 raise ParseError(
108 f"Error while converting document to PDF: {err}",
109 ) from err
110
111 with open(pdf_path, "wb") as file:
112 file.write(response.content)
113 file.close()
114
115 return pdf_path
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/paperless_tika/parsers.py b/src/paperless_tika/parsers.py
--- a/src/paperless_tika/parsers.py
+++ b/src/paperless_tika/parsers.py
@@ -52,7 +52,18 @@
try:
with TikaClient(tika_url=settings.TIKA_ENDPOINT) as client:
- parsed = client.tika.as_text.from_file(document_path, mime_type)
+ try:
+ parsed = client.tika.as_text.from_file(document_path, mime_type)
+ except httpx.HTTPStatusError as err:
+ # Workaround https://issues.apache.org/jira/browse/TIKA-4110
+ # Tika fails with some files as multi-part form data
+ if err.response.status_code == httpx.codes.INTERNAL_SERVER_ERROR:
+ parsed = client.tika.as_text.from_buffer(
+ document_path.read_bytes(),
+ mime_type,
+ )
+ else: # pragma: nocover
+ raise
except Exception as err:
raise ParseError(
f"Could not parse {document_path} with tika server at "
|
{"golden_diff": "diff --git a/src/paperless_tika/parsers.py b/src/paperless_tika/parsers.py\n--- a/src/paperless_tika/parsers.py\n+++ b/src/paperless_tika/parsers.py\n@@ -52,7 +52,18 @@\n \n try:\n with TikaClient(tika_url=settings.TIKA_ENDPOINT) as client:\n- parsed = client.tika.as_text.from_file(document_path, mime_type)\n+ try:\n+ parsed = client.tika.as_text.from_file(document_path, mime_type)\n+ except httpx.HTTPStatusError as err:\n+ # Workaround https://issues.apache.org/jira/browse/TIKA-4110\n+ # Tika fails with some files as multi-part form data\n+ if err.response.status_code == httpx.codes.INTERNAL_SERVER_ERROR:\n+ parsed = client.tika.as_text.from_buffer(\n+ document_path.read_bytes(),\n+ mime_type,\n+ )\n+ else: # pragma: nocover\n+ raise\n except Exception as err:\n raise ParseError(\n f\"Could not parse {document_path} with tika server at \"\n", "issue": "[BUG] HTTP 500 in Tika for some OpenDocument Documents\n### Description\n\nWhen recreating my local paperless instance, I had to rerun the document archival and thumbnail jobs for all documents.\r\n\r\nThose failed for all OpenDocument Documents from my originals. \r\n\r\nI noticed this being related to the latest changes in paperless_tika/parser.py. After testing the parser.py from 1.15 all worked.\r\n\r\nI am already working on further investigation on my fork and will start the corresponding pull request.\r\n\r\n\n\n### Steps to reproduce\n\nupload ODT document -> parsing fails\n\n### Webserver logs\n\n```bash\n---TIKA---\r\n\tat org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440)\r\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:190)\r\n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355)\r\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\r\n\tat org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:191)\r\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)\r\n\tat org.eclipse.jetty.server.Server.handle(Server.java:516)\r\n\tat org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)\r\n\tat org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)\r\n\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)\r\n\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)\r\n\tat org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)\r\n\tat org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)\r\n\tat org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)\r\n\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)\r\n\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)\r\n\tat java.base/java.lang.Thread.run(Thread.java:833)\r\nCaused by: java.io.IOException: Stream Closed\r\n\tat java.base/java.io.FileInputStream.available0(Native Method)\r\n\tat java.base/java.io.FileInputStream.available(FileInputStream.java:415)\r\n\tat org.apache.cxf.attachment.DelegatingInputStream.available(DelegatingInputStream.java:75)\r\n\tat org.apache.cxf.helpers.IOUtils.consume(IOUtils.java:382)\r\n\tat org.apache.cxf.attachment.DelegatingInputStream.close(DelegatingInputStream.java:46)\r\n\tat org.apache.tika.server.core.resource.TikaResource.parse(TikaResource.java:374)\r\n\tat 
org.apache.tika.server.core.resource.TikaResource.parseToMetadata(TikaResource.java:611)\r\n\tat org.apache.tika.server.core.resource.TikaResource.getJsonFromMultipart(TikaResource.java:564)\r\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\r\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)\r\n\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\r\n\tat java.base/java.lang.reflect.Method.invoke(Method.java:568)\r\n\tat org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:179)\r\n\tat org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)\r\n\t... 28 more\r\n\r\n---PAPERLESS---\r\n[2023-07-06 17:57:46,034] [INFO] [paperless.consumer] Consuming 2023-03-28 Johannes Gutenberg-Universit\u00e4t Mainz Datenbank Praktikum Gruppe 3.odt\r\n[2023-07-06 17:57:46,112] [INFO] [paperless.parsing.tika] Sending /tmp/paperless/paperless-ngxzed2cbo7/2023-03-28 Johannes Gutenberg-Universit\u00e4t Mainz Datenbank Praktikum Gruppe 3.odt to Tika server\r\n[2023-07-06 17:57:47,289] [ERROR] [paperless.consumer] Error while consuming document 2023-03-28 Johannes Gutenberg-Universit\u00e4t Mainz Datenbank Praktikum Gruppe 3.odt: Could not parse /tmp/paperless/paperless-ngxzed2cbo7/2023-03-28 Johannes Gutenberg-Universit\u00e4t Mainz Datenbank Praktikum Gruppe 3.odt with tika server at http://localhost:9998: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'\r\nFor more information check: https://httpstatuses.com/500\r\nTraceback (most recent call last):\r\n File \"/usr/src/paperless/src/paperless_tika/parsers.py\", line 54, in parse\r\n parsed = client.tika.as_text.from_file(document_path, mime_type)\r\n File \"/usr/local/lib/python3.9/site-packages/tika_client/_resource_tika.py\", line 36, in from_file\r\n return self._decoded_response(self._put_multipart(self.MULTI_PART_PLAIN_TEXT_CONTENT, filepath, mime_type))\r\n File \"/usr/local/lib/python3.9/site-packages/tika_client/_utils.py\", line 46, in _put_multipart\r\n resp.raise_for_status()\r\n File \"/usr/local/lib/python3.9/site-packages/httpx/_models.py\", line 749, in raise_for_status\r\n raise HTTPStatusError(message, request=request, response=self)\r\nhttpx.HTTPStatusError: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'\r\nFor more information check: https://httpstatuses.com/500\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/src/paperless/src/documents/consumer.py\", line 382, in try_consume_file\r\n document_parser.parse(self.path, mime_type, self.filename)\r\n File \"/usr/src/paperless/src/paperless_tika/parsers.py\", line 56, in parse\r\n raise ParseError(\r\ndocuments.parsers.ParseError: Could not parse /tmp/paperless/paperless-ngxzed2cbo7/2023-03-28 Johannes Gutenberg-Universit\u00e4t Mainz Datenbank Praktikum Gruppe 3.odt with tika server at http://localhost:9998: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'\r\nFor more information check: https://httpstatuses.com/500\r\n[2023-07-06 17:57:47,309] [ERROR] [celery.app.trace] Task documents.tasks.consume_file[1a0c8479-65a4-4de7-a431-29ecb537a030] raised unexpected: ConsumerError(\"2023-03-28 Johannes Gutenberg-Universit\u00e4t Mainz Datenbank Praktikum Gruppe 3.odt: Error while consuming document 2023-03-28 Johannes 
Gutenberg-Universit\u00e4t Mainz Datenbank Praktikum Gruppe 3.odt: Could not parse /tmp/paperless/paperless-ngxzed2cbo7/2023-03-28 Johannes Gutenberg-Universit\u00e4t Mainz Datenbank Praktikum Gruppe 3.odt with tika server at http://localhost:9998: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'\\nFor more information check: https://httpstatuses.com/500\")\r\nTraceback (most recent call last):\r\n File \"/usr/src/paperless/src/paperless_tika/parsers.py\", line 54, in parse\r\n parsed = client.tika.as_text.from_file(document_path, mime_type)\r\n File \"/usr/local/lib/python3.9/site-packages/tika_client/_resource_tika.py\", line 36, in from_file\r\n return self._decoded_response(self._put_multipart(self.MULTI_PART_PLAIN_TEXT_CONTENT, filepath, mime_type))\r\n File \"/usr/local/lib/python3.9/site-packages/tika_client/_utils.py\", line 46, in _put_multipart\r\n resp.raise_for_status()\r\n File \"/usr/local/lib/python3.9/site-packages/httpx/_models.py\", line 749, in raise_for_status\r\n raise HTTPStatusError(message, request=request, response=self)\r\nhttpx.HTTPStatusError: Server error '500 Server Error' for url 'http://localhost:9998/tika/form/text'\r\nFor more information check: https://httpstatuses.com/500\n```\n\n\n### Browser logs\n\n_No response_\n\n### Paperless-ngx version\n\n1.16.5\n\n### Host OS\n\nTruenas\n\n### Installation method\n\nDocker - official image\n\n### Browser\n\n_No response_\n\n### Configuration changes\n\n_No response_\n\n### Other\n\n_No response_\n", "before_files": [{"content": "import os\nfrom pathlib import Path\n\nimport httpx\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom tika_client import TikaClient\n\nfrom documents.parsers import DocumentParser\nfrom documents.parsers import ParseError\nfrom documents.parsers import make_thumbnail_from_pdf\n\n\nclass TikaDocumentParser(DocumentParser):\n \"\"\"\n This parser sends documents to a local tika server\n \"\"\"\n\n logging_name = \"paperless.parsing.tika\"\n\n def get_thumbnail(self, document_path, mime_type, file_name=None):\n if not self.archive_path:\n self.archive_path = self.convert_to_pdf(document_path, file_name)\n\n return make_thumbnail_from_pdf(\n self.archive_path,\n self.tempdir,\n self.logging_group,\n )\n\n def extract_metadata(self, document_path, mime_type):\n try:\n with TikaClient(tika_url=settings.TIKA_ENDPOINT) as client:\n parsed = client.metadata.from_file(document_path, mime_type)\n return [\n {\n \"namespace\": \"\",\n \"prefix\": \"\",\n \"key\": key,\n \"value\": parsed.data[key],\n }\n for key in parsed.data\n ]\n except Exception as e:\n self.log.warning(\n f\"Error while fetching document metadata for {document_path}: {e}\",\n )\n return []\n\n def parse(self, document_path: Path, mime_type: str, file_name=None):\n self.log.info(f\"Sending {document_path} to Tika server\")\n\n try:\n with TikaClient(tika_url=settings.TIKA_ENDPOINT) as client:\n parsed = client.tika.as_text.from_file(document_path, mime_type)\n except Exception as err:\n raise ParseError(\n f\"Could not parse {document_path} with tika server at \"\n f\"{settings.TIKA_ENDPOINT}: {err}\",\n ) from err\n\n self.text = parsed.content\n if self.text is not None:\n self.text = self.text.strip()\n\n self.date = parsed.created\n if self.date is not None and timezone.is_naive(self.date):\n self.date = timezone.make_aware(self.date)\n\n self.archive_path = self.convert_to_pdf(document_path, file_name)\n\n def convert_to_pdf(self, document_path, file_name):\n pdf_path = 
os.path.join(self.tempdir, \"convert.pdf\")\n gotenberg_server = settings.TIKA_GOTENBERG_ENDPOINT\n url = gotenberg_server + \"/forms/libreoffice/convert\"\n\n self.log.info(f\"Converting {document_path} to PDF as {pdf_path}\")\n with open(document_path, \"rb\") as document_handle:\n files = {\n \"files\": (\n \"convert\" + os.path.splitext(document_path)[-1],\n document_handle,\n ),\n }\n headers = {}\n data = {}\n\n # Set the output format of the resulting PDF\n # Valid inputs: https://gotenberg.dev/docs/modules/pdf-engines#uno\n if settings.OCR_OUTPUT_TYPE in {\"pdfa\", \"pdfa-2\"}:\n data[\"pdfFormat\"] = \"PDF/A-2b\"\n elif settings.OCR_OUTPUT_TYPE == \"pdfa-1\":\n data[\"pdfFormat\"] = \"PDF/A-1a\"\n elif settings.OCR_OUTPUT_TYPE == \"pdfa-3\":\n data[\"pdfFormat\"] = \"PDF/A-3b\"\n\n try:\n response = httpx.post(\n url,\n files=files,\n headers=headers,\n data=data,\n timeout=settings.CELERY_TASK_TIME_LIMIT,\n )\n response.raise_for_status() # ensure we notice bad responses\n except Exception as err:\n raise ParseError(\n f\"Error while converting document to PDF: {err}\",\n ) from err\n\n with open(pdf_path, \"wb\") as file:\n file.write(response.content)\n file.close()\n\n return pdf_path\n", "path": "src/paperless_tika/parsers.py"}], "after_files": [{"content": "import os\nfrom pathlib import Path\n\nimport httpx\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom tika_client import TikaClient\n\nfrom documents.parsers import DocumentParser\nfrom documents.parsers import ParseError\nfrom documents.parsers import make_thumbnail_from_pdf\n\n\nclass TikaDocumentParser(DocumentParser):\n \"\"\"\n This parser sends documents to a local tika server\n \"\"\"\n\n logging_name = \"paperless.parsing.tika\"\n\n def get_thumbnail(self, document_path, mime_type, file_name=None):\n if not self.archive_path:\n self.archive_path = self.convert_to_pdf(document_path, file_name)\n\n return make_thumbnail_from_pdf(\n self.archive_path,\n self.tempdir,\n self.logging_group,\n )\n\n def extract_metadata(self, document_path, mime_type):\n try:\n with TikaClient(tika_url=settings.TIKA_ENDPOINT) as client:\n parsed = client.metadata.from_file(document_path, mime_type)\n return [\n {\n \"namespace\": \"\",\n \"prefix\": \"\",\n \"key\": key,\n \"value\": parsed.data[key],\n }\n for key in parsed.data\n ]\n except Exception as e:\n self.log.warning(\n f\"Error while fetching document metadata for {document_path}: {e}\",\n )\n return []\n\n def parse(self, document_path: Path, mime_type: str, file_name=None):\n self.log.info(f\"Sending {document_path} to Tika server\")\n\n try:\n with TikaClient(tika_url=settings.TIKA_ENDPOINT) as client:\n try:\n parsed = client.tika.as_text.from_file(document_path, mime_type)\n except httpx.HTTPStatusError as err:\n # Workaround https://issues.apache.org/jira/browse/TIKA-4110\n # Tika fails with some files as multi-part form data\n if err.response.status_code == httpx.codes.INTERNAL_SERVER_ERROR:\n parsed = client.tika.as_text.from_buffer(\n document_path.read_bytes(),\n mime_type,\n )\n else: # pragma: nocover\n raise\n except Exception as err:\n raise ParseError(\n f\"Could not parse {document_path} with tika server at \"\n f\"{settings.TIKA_ENDPOINT}: {err}\",\n ) from err\n\n self.text = parsed.content\n if self.text is not None:\n self.text = self.text.strip()\n\n self.date = parsed.created\n if self.date is not None and timezone.is_naive(self.date):\n self.date = timezone.make_aware(self.date)\n\n self.archive_path = 
self.convert_to_pdf(document_path, file_name)\n\n def convert_to_pdf(self, document_path, file_name):\n pdf_path = os.path.join(self.tempdir, \"convert.pdf\")\n gotenberg_server = settings.TIKA_GOTENBERG_ENDPOINT\n url = gotenberg_server + \"/forms/libreoffice/convert\"\n\n self.log.info(f\"Converting {document_path} to PDF as {pdf_path}\")\n with open(document_path, \"rb\") as document_handle:\n files = {\n \"files\": (\n \"convert\" + os.path.splitext(document_path)[-1],\n document_handle,\n ),\n }\n headers = {}\n data = {}\n\n # Set the output format of the resulting PDF\n # Valid inputs: https://gotenberg.dev/docs/modules/pdf-engines#uno\n if settings.OCR_OUTPUT_TYPE in {\"pdfa\", \"pdfa-2\"}:\n data[\"pdfFormat\"] = \"PDF/A-2b\"\n elif settings.OCR_OUTPUT_TYPE == \"pdfa-1\":\n data[\"pdfFormat\"] = \"PDF/A-1a\"\n elif settings.OCR_OUTPUT_TYPE == \"pdfa-3\":\n data[\"pdfFormat\"] = \"PDF/A-3b\"\n\n try:\n response = httpx.post(\n url,\n files=files,\n headers=headers,\n data=data,\n timeout=settings.CELERY_TASK_TIME_LIMIT,\n )\n response.raise_for_status() # ensure we notice bad responses\n except Exception as err:\n raise ParseError(\n f\"Error while converting document to PDF: {err}\",\n ) from err\n\n with open(pdf_path, \"wb\") as file:\n file.write(response.content)\n file.close()\n\n return pdf_path\n", "path": "src/paperless_tika/parsers.py"}]}
| 3,378 | 252 |
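The golden diff above works around TIKA-4110 by retrying with a raw-bytes upload whenever the multipart form request comes back with HTTP 500. Below is a minimal standalone sketch of that pattern, assuming the same tika-client API used in the record; the helper function name is hypothetical, while `TikaClient`, `from_file` and `from_buffer` are the calls shown in the diff.

```python
# Sketch of the HTTP 500 fallback used in the golden diff above.
from pathlib import Path

import httpx
from tika_client import TikaClient


def tika_text_with_fallback(tika_url: str, document_path: Path, mime_type: str):
    with TikaClient(tika_url=tika_url) as client:
        try:
            # Normal path: multipart form upload of the file.
            return client.tika.as_text.from_file(document_path, mime_type)
        except httpx.HTTPStatusError as err:
            # Workaround for https://issues.apache.org/jira/browse/TIKA-4110:
            # Tika rejects some files sent as multipart form data with a 500.
            if err.response.status_code == httpx.codes.INTERNAL_SERVER_ERROR:
                return client.tika.as_text.from_buffer(
                    document_path.read_bytes(),
                    mime_type,
                )
            raise  # any other status error is not the known Tika failure mode
```

Only the known 500 failure mode triggers the retry; every other status error is re-raised on the original path, matching the `else: raise` branch in the diff.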
gh_patches_debug_30852
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-1798
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rename RPC methods
### Describe the feature
The RPC server currently supports the following endpoints:
- compile
- run
- compile_project
- run_project
- seed_project
- test_project
These endpoints should be remapped to:
- compile_sql
- run_sql
- compile
- run
- seed
- test
This will obviously be a breaking change for anyone using the RPC server, but we're going to have to do it eventually, so we might as well do it now! Parity between the RPC methods and the CLI arguments will make things drastically less confusing for dbt users in the long run.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dbt/task/remote.py`
Content:
```
1 import signal
2 import threading
3 from dataclasses import dataclass
4 from datetime import datetime
5 from typing import Union, List, Optional
6
7 from hologram import JsonSchemaMixin
8
9 from dbt.adapters.factory import get_adapter
10 from dbt.clients.jinja import extract_toplevel_blocks
11 from dbt.compilation import compile_manifest
12 from dbt.parser.results import ParseResult
13 from dbt.parser.rpc import RPCCallParser, RPCMacroParser
14 from dbt.parser.util import ParserUtils
15 import dbt.ui.printer
16 from dbt.logger import GLOBAL_LOGGER as logger
17 from dbt.rpc.node_runners import RPCCompileRunner, RPCExecuteRunner
18 from dbt.rpc.task import RemoteCallableResult, RPCTask
19
20 from dbt.task.run import RunTask
21 from dbt.task.seed import SeedTask
22 from dbt.task.test import TestTask
23
24
25 @dataclass
26 class RPCExecParameters(JsonSchemaMixin):
27 name: str
28 sql: str
29 macros: Optional[str]
30
31
32 @dataclass
33 class RPCCompileProjectParameters(JsonSchemaMixin):
34 models: Union[None, str, List[str]] = None
35 exclude: Union[None, str, List[str]] = None
36
37
38 @dataclass
39 class RPCTestProjectParameters(RPCCompileProjectParameters):
40 data: bool = False
41 schema: bool = False
42
43
44 @dataclass
45 class RPCSeedProjectParameters(JsonSchemaMixin):
46 show: bool = False
47
48
49 class _RPCExecTask(RPCTask):
50 def __init__(self, args, config, manifest):
51 super().__init__(args, config)
52 self._base_manifest = manifest.deepcopy(config=config)
53
54 def runtime_cleanup(self, selected_uids):
55 """Do some pre-run cleanup that is usually performed in Task __init__.
56 """
57 self.run_count = 0
58 self.num_nodes = len(selected_uids)
59 self.node_results = []
60 self._skipped_children = {}
61 self._skipped_children = {}
62 self._raise_next_tick = None
63
64 def _extract_request_data(self, data):
65 data = self.decode_sql(data)
66 macro_blocks = []
67 data_chunks = []
68 for block in extract_toplevel_blocks(data):
69 if block.block_type_name == 'macro':
70 macro_blocks.append(block.full_block)
71 else:
72 data_chunks.append(block.full_block)
73 macros = '\n'.join(macro_blocks)
74 sql = ''.join(data_chunks)
75 return sql, macros
76
77 def _get_exec_node(self, name, sql, macros):
78 results = ParseResult.rpc()
79 macro_overrides = {}
80 sql, macros = self._extract_request_data(sql)
81
82 if macros:
83 macro_parser = RPCMacroParser(results, self.config)
84 for node in macro_parser.parse_remote(macros):
85 macro_overrides[node.unique_id] = node
86
87 self._base_manifest.macros.update(macro_overrides)
88 rpc_parser = RPCCallParser(
89 results=results,
90 project=self.config,
91 root_project=self.config,
92 macro_manifest=self._base_manifest,
93 )
94 node = rpc_parser.parse_remote(sql, name)
95 self.manifest = ParserUtils.add_new_refs(
96 manifest=self._base_manifest,
97 current_project=self.config,
98 node=node,
99 macros=macro_overrides
100 )
101
102 # don't write our new, weird manifest!
103 self.linker = compile_manifest(self.config, self.manifest, write=False)
104 return node
105
106 def _raise_set_error(self):
107 if self._raise_next_tick is not None:
108 raise self._raise_next_tick
109
110 def _in_thread(self, node, thread_done):
111 runner = self.get_runner(node)
112 try:
113 self.node_results.append(runner.safe_run(self.manifest))
114 except Exception as exc:
115 logger.debug('Got exception {}'.format(exc), exc_info=True)
116 self._raise_next_tick = exc
117 finally:
118 thread_done.set()
119
120 def handle_request(
121 self, params: RPCExecParameters
122 ) -> RemoteCallableResult:
123 # we could get a ctrl+c at any time, including during parsing.
124 thread = None
125 started = datetime.utcnow()
126 try:
127 node = self._get_exec_node(params.name, params.sql, params.macros)
128
129 selected_uids = [node.unique_id]
130 self.runtime_cleanup(selected_uids)
131
132 thread_done = threading.Event()
133 thread = threading.Thread(target=self._in_thread,
134 args=(node, thread_done))
135 thread.start()
136 thread_done.wait()
137 except KeyboardInterrupt:
138 adapter = get_adapter(self.config)
139 if adapter.is_cancelable():
140
141 for conn_name in adapter.cancel_open_connections():
142 logger.debug('canceled query {}'.format(conn_name))
143 if thread:
144 thread.join()
145 else:
146 msg = ("The {} adapter does not support query "
147 "cancellation. Some queries may still be "
148 "running!".format(adapter.type()))
149
150 logger.debug(msg)
151
152 raise dbt.exceptions.RPCKilledException(signal.SIGINT)
153
154 self._raise_set_error()
155
156 ended = datetime.utcnow()
157 elapsed = (ended - started).total_seconds()
158 return self.get_result(
159 results=self.node_results,
160 elapsed_time=elapsed,
161 generated_at=ended,
162 )
163
164
165 class RemoteCompileTask(_RPCExecTask):
166 METHOD_NAME = 'compile'
167
168 def get_runner_type(self):
169 return RPCCompileRunner
170
171
172 class RemoteRunTask(_RPCExecTask, RunTask):
173 METHOD_NAME = 'run'
174
175 def get_runner_type(self):
176 return RPCExecuteRunner
177
178
179 class RemoteCompileProjectTask(RPCTask):
180 METHOD_NAME = 'compile_project'
181
182 def __init__(self, args, config, manifest):
183 super().__init__(args, config)
184 self.manifest = manifest.deepcopy(config=config)
185
186 def load_manifest(self):
187 # we started out with a manifest!
188 pass
189
190 def handle_request(
191 self, params: RPCCompileProjectParameters
192 ) -> RemoteCallableResult:
193 self.args.models = self._listify(params.models)
194 self.args.exclude = self._listify(params.exclude)
195
196 results = self.run()
197 return results
198
199
200 class RemoteRunProjectTask(RPCTask, RunTask):
201 METHOD_NAME = 'run_project'
202
203 def __init__(self, args, config, manifest):
204 super().__init__(args, config)
205 self.manifest = manifest.deepcopy(config=config)
206
207 def load_manifest(self):
208 # we started out with a manifest!
209 pass
210
211 def handle_request(
212 self, params: RPCCompileProjectParameters
213 ) -> RemoteCallableResult:
214 self.args.models = self._listify(params.models)
215 self.args.exclude = self._listify(params.exclude)
216
217 results = self.run()
218 return results
219
220
221 class RemoteSeedProjectTask(RPCTask, SeedTask):
222 METHOD_NAME = 'seed_project'
223
224 def __init__(self, args, config, manifest):
225 super().__init__(args, config)
226 self.manifest = manifest.deepcopy(config=config)
227
228 def load_manifest(self):
229 # we started out with a manifest!
230 pass
231
232 def handle_request(
233 self, params: RPCSeedProjectParameters
234 ) -> RemoteCallableResult:
235 self.args.show = params.show
236
237 results = self.run()
238 return results
239
240
241 class RemoteTestProjectTask(RPCTask, TestTask):
242 METHOD_NAME = 'test_project'
243
244 def __init__(self, args, config, manifest):
245 super().__init__(args, config)
246 self.manifest = manifest.deepcopy(config=config)
247
248 def load_manifest(self):
249 # we started out with a manifest!
250 pass
251
252 def handle_request(
253 self, params: RPCTestProjectParameters,
254 ) -> RemoteCallableResult:
255 self.args.models = self._listify(params.models)
256 self.args.exclude = self._listify(params.exclude)
257 self.args.data = params.data
258 self.args.schema = params.schema
259
260 results = self.run()
261 return results
262
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/dbt/task/remote.py b/core/dbt/task/remote.py
--- a/core/dbt/task/remote.py
+++ b/core/dbt/task/remote.py
@@ -163,21 +163,21 @@
class RemoteCompileTask(_RPCExecTask):
- METHOD_NAME = 'compile'
+ METHOD_NAME = 'compile_sql'
def get_runner_type(self):
return RPCCompileRunner
class RemoteRunTask(_RPCExecTask, RunTask):
- METHOD_NAME = 'run'
+ METHOD_NAME = 'run_sql'
def get_runner_type(self):
return RPCExecuteRunner
class RemoteCompileProjectTask(RPCTask):
- METHOD_NAME = 'compile_project'
+ METHOD_NAME = 'compile'
def __init__(self, args, config, manifest):
super().__init__(args, config)
@@ -198,7 +198,7 @@
class RemoteRunProjectTask(RPCTask, RunTask):
- METHOD_NAME = 'run_project'
+ METHOD_NAME = 'run'
def __init__(self, args, config, manifest):
super().__init__(args, config)
@@ -219,7 +219,7 @@
class RemoteSeedProjectTask(RPCTask, SeedTask):
- METHOD_NAME = 'seed_project'
+ METHOD_NAME = 'seed'
def __init__(self, args, config, manifest):
super().__init__(args, config)
@@ -239,7 +239,7 @@
class RemoteTestProjectTask(RPCTask, TestTask):
- METHOD_NAME = 'test_project'
+ METHOD_NAME = 'test'
def __init__(self, args, config, manifest):
super().__init__(args, config)
|
{"golden_diff": "diff --git a/core/dbt/task/remote.py b/core/dbt/task/remote.py\n--- a/core/dbt/task/remote.py\n+++ b/core/dbt/task/remote.py\n@@ -163,21 +163,21 @@\n \n \n class RemoteCompileTask(_RPCExecTask):\n- METHOD_NAME = 'compile'\n+ METHOD_NAME = 'compile_sql'\n \n def get_runner_type(self):\n return RPCCompileRunner\n \n \n class RemoteRunTask(_RPCExecTask, RunTask):\n- METHOD_NAME = 'run'\n+ METHOD_NAME = 'run_sql'\n \n def get_runner_type(self):\n return RPCExecuteRunner\n \n \n class RemoteCompileProjectTask(RPCTask):\n- METHOD_NAME = 'compile_project'\n+ METHOD_NAME = 'compile'\n \n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n@@ -198,7 +198,7 @@\n \n \n class RemoteRunProjectTask(RPCTask, RunTask):\n- METHOD_NAME = 'run_project'\n+ METHOD_NAME = 'run'\n \n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n@@ -219,7 +219,7 @@\n \n \n class RemoteSeedProjectTask(RPCTask, SeedTask):\n- METHOD_NAME = 'seed_project'\n+ METHOD_NAME = 'seed'\n \n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n@@ -239,7 +239,7 @@\n \n \n class RemoteTestProjectTask(RPCTask, TestTask):\n- METHOD_NAME = 'test_project'\n+ METHOD_NAME = 'test'\n \n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n", "issue": "Rename RPC methods\n### Describe the feature\r\nThe RPC server currently supports the following endpoints:\r\n - compile\r\n - run\r\n - compile_project\r\n - run_project\r\n - seed_project\r\n - test_project\r\n\r\nThese endpoints should be remapped to:\r\n - compile_sql\r\n - run_sql\r\n - compile\r\n - run\r\n - seed\r\n - test\r\n\r\nThis will obviously be a breaking change for anyone using the RPC server, but we're going to have to do it eventually, so we might as well do it now! 
Parity between the RPC methods and CLI arguments will be drastically less confusing for dbt users in the long run.\n", "before_files": [{"content": "import signal\nimport threading\nfrom dataclasses import dataclass\nfrom datetime import datetime\nfrom typing import Union, List, Optional\n\nfrom hologram import JsonSchemaMixin\n\nfrom dbt.adapters.factory import get_adapter\nfrom dbt.clients.jinja import extract_toplevel_blocks\nfrom dbt.compilation import compile_manifest\nfrom dbt.parser.results import ParseResult\nfrom dbt.parser.rpc import RPCCallParser, RPCMacroParser\nfrom dbt.parser.util import ParserUtils\nimport dbt.ui.printer\nfrom dbt.logger import GLOBAL_LOGGER as logger\nfrom dbt.rpc.node_runners import RPCCompileRunner, RPCExecuteRunner\nfrom dbt.rpc.task import RemoteCallableResult, RPCTask\n\nfrom dbt.task.run import RunTask\nfrom dbt.task.seed import SeedTask\nfrom dbt.task.test import TestTask\n\n\n@dataclass\nclass RPCExecParameters(JsonSchemaMixin):\n name: str\n sql: str\n macros: Optional[str]\n\n\n@dataclass\nclass RPCCompileProjectParameters(JsonSchemaMixin):\n models: Union[None, str, List[str]] = None\n exclude: Union[None, str, List[str]] = None\n\n\n@dataclass\nclass RPCTestProjectParameters(RPCCompileProjectParameters):\n data: bool = False\n schema: bool = False\n\n\n@dataclass\nclass RPCSeedProjectParameters(JsonSchemaMixin):\n show: bool = False\n\n\nclass _RPCExecTask(RPCTask):\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self._base_manifest = manifest.deepcopy(config=config)\n\n def runtime_cleanup(self, selected_uids):\n \"\"\"Do some pre-run cleanup that is usually performed in Task __init__.\n \"\"\"\n self.run_count = 0\n self.num_nodes = len(selected_uids)\n self.node_results = []\n self._skipped_children = {}\n self._skipped_children = {}\n self._raise_next_tick = None\n\n def _extract_request_data(self, data):\n data = self.decode_sql(data)\n macro_blocks = []\n data_chunks = []\n for block in extract_toplevel_blocks(data):\n if block.block_type_name == 'macro':\n macro_blocks.append(block.full_block)\n else:\n data_chunks.append(block.full_block)\n macros = '\\n'.join(macro_blocks)\n sql = ''.join(data_chunks)\n return sql, macros\n\n def _get_exec_node(self, name, sql, macros):\n results = ParseResult.rpc()\n macro_overrides = {}\n sql, macros = self._extract_request_data(sql)\n\n if macros:\n macro_parser = RPCMacroParser(results, self.config)\n for node in macro_parser.parse_remote(macros):\n macro_overrides[node.unique_id] = node\n\n self._base_manifest.macros.update(macro_overrides)\n rpc_parser = RPCCallParser(\n results=results,\n project=self.config,\n root_project=self.config,\n macro_manifest=self._base_manifest,\n )\n node = rpc_parser.parse_remote(sql, name)\n self.manifest = ParserUtils.add_new_refs(\n manifest=self._base_manifest,\n current_project=self.config,\n node=node,\n macros=macro_overrides\n )\n\n # don't write our new, weird manifest!\n self.linker = compile_manifest(self.config, self.manifest, write=False)\n return node\n\n def _raise_set_error(self):\n if self._raise_next_tick is not None:\n raise self._raise_next_tick\n\n def _in_thread(self, node, thread_done):\n runner = self.get_runner(node)\n try:\n self.node_results.append(runner.safe_run(self.manifest))\n except Exception as exc:\n logger.debug('Got exception {}'.format(exc), exc_info=True)\n self._raise_next_tick = exc\n finally:\n thread_done.set()\n\n def handle_request(\n self, params: RPCExecParameters\n ) -> 
RemoteCallableResult:\n # we could get a ctrl+c at any time, including during parsing.\n thread = None\n started = datetime.utcnow()\n try:\n node = self._get_exec_node(params.name, params.sql, params.macros)\n\n selected_uids = [node.unique_id]\n self.runtime_cleanup(selected_uids)\n\n thread_done = threading.Event()\n thread = threading.Thread(target=self._in_thread,\n args=(node, thread_done))\n thread.start()\n thread_done.wait()\n except KeyboardInterrupt:\n adapter = get_adapter(self.config)\n if adapter.is_cancelable():\n\n for conn_name in adapter.cancel_open_connections():\n logger.debug('canceled query {}'.format(conn_name))\n if thread:\n thread.join()\n else:\n msg = (\"The {} adapter does not support query \"\n \"cancellation. Some queries may still be \"\n \"running!\".format(adapter.type()))\n\n logger.debug(msg)\n\n raise dbt.exceptions.RPCKilledException(signal.SIGINT)\n\n self._raise_set_error()\n\n ended = datetime.utcnow()\n elapsed = (ended - started).total_seconds()\n return self.get_result(\n results=self.node_results,\n elapsed_time=elapsed,\n generated_at=ended,\n )\n\n\nclass RemoteCompileTask(_RPCExecTask):\n METHOD_NAME = 'compile'\n\n def get_runner_type(self):\n return RPCCompileRunner\n\n\nclass RemoteRunTask(_RPCExecTask, RunTask):\n METHOD_NAME = 'run'\n\n def get_runner_type(self):\n return RPCExecuteRunner\n\n\nclass RemoteCompileProjectTask(RPCTask):\n METHOD_NAME = 'compile_project'\n\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self.manifest = manifest.deepcopy(config=config)\n\n def load_manifest(self):\n # we started out with a manifest!\n pass\n\n def handle_request(\n self, params: RPCCompileProjectParameters\n ) -> RemoteCallableResult:\n self.args.models = self._listify(params.models)\n self.args.exclude = self._listify(params.exclude)\n\n results = self.run()\n return results\n\n\nclass RemoteRunProjectTask(RPCTask, RunTask):\n METHOD_NAME = 'run_project'\n\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self.manifest = manifest.deepcopy(config=config)\n\n def load_manifest(self):\n # we started out with a manifest!\n pass\n\n def handle_request(\n self, params: RPCCompileProjectParameters\n ) -> RemoteCallableResult:\n self.args.models = self._listify(params.models)\n self.args.exclude = self._listify(params.exclude)\n\n results = self.run()\n return results\n\n\nclass RemoteSeedProjectTask(RPCTask, SeedTask):\n METHOD_NAME = 'seed_project'\n\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self.manifest = manifest.deepcopy(config=config)\n\n def load_manifest(self):\n # we started out with a manifest!\n pass\n\n def handle_request(\n self, params: RPCSeedProjectParameters\n ) -> RemoteCallableResult:\n self.args.show = params.show\n\n results = self.run()\n return results\n\n\nclass RemoteTestProjectTask(RPCTask, TestTask):\n METHOD_NAME = 'test_project'\n\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self.manifest = manifest.deepcopy(config=config)\n\n def load_manifest(self):\n # we started out with a manifest!\n pass\n\n def handle_request(\n self, params: RPCTestProjectParameters,\n ) -> RemoteCallableResult:\n self.args.models = self._listify(params.models)\n self.args.exclude = self._listify(params.exclude)\n self.args.data = params.data\n self.args.schema = params.schema\n\n results = self.run()\n return results\n", "path": "core/dbt/task/remote.py"}], "after_files": [{"content": "import signal\nimport 
threading\nfrom dataclasses import dataclass\nfrom datetime import datetime\nfrom typing import Union, List, Optional\n\nfrom hologram import JsonSchemaMixin\n\nfrom dbt.adapters.factory import get_adapter\nfrom dbt.clients.jinja import extract_toplevel_blocks\nfrom dbt.compilation import compile_manifest\nfrom dbt.parser.results import ParseResult\nfrom dbt.parser.rpc import RPCCallParser, RPCMacroParser\nfrom dbt.parser.util import ParserUtils\nimport dbt.ui.printer\nfrom dbt.logger import GLOBAL_LOGGER as logger\nfrom dbt.rpc.node_runners import RPCCompileRunner, RPCExecuteRunner\nfrom dbt.rpc.task import RemoteCallableResult, RPCTask\n\nfrom dbt.task.run import RunTask\nfrom dbt.task.seed import SeedTask\nfrom dbt.task.test import TestTask\n\n\n@dataclass\nclass RPCExecParameters(JsonSchemaMixin):\n name: str\n sql: str\n macros: Optional[str]\n\n\n@dataclass\nclass RPCCompileProjectParameters(JsonSchemaMixin):\n models: Union[None, str, List[str]] = None\n exclude: Union[None, str, List[str]] = None\n\n\n@dataclass\nclass RPCTestProjectParameters(RPCCompileProjectParameters):\n data: bool = False\n schema: bool = False\n\n\n@dataclass\nclass RPCSeedProjectParameters(JsonSchemaMixin):\n show: bool = False\n\n\nclass _RPCExecTask(RPCTask):\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self._base_manifest = manifest.deepcopy(config=config)\n\n def runtime_cleanup(self, selected_uids):\n \"\"\"Do some pre-run cleanup that is usually performed in Task __init__.\n \"\"\"\n self.run_count = 0\n self.num_nodes = len(selected_uids)\n self.node_results = []\n self._skipped_children = {}\n self._skipped_children = {}\n self._raise_next_tick = None\n\n def _extract_request_data(self, data):\n data = self.decode_sql(data)\n macro_blocks = []\n data_chunks = []\n for block in extract_toplevel_blocks(data):\n if block.block_type_name == 'macro':\n macro_blocks.append(block.full_block)\n else:\n data_chunks.append(block.full_block)\n macros = '\\n'.join(macro_blocks)\n sql = ''.join(data_chunks)\n return sql, macros\n\n def _get_exec_node(self, name, sql, macros):\n results = ParseResult.rpc()\n macro_overrides = {}\n sql, macros = self._extract_request_data(sql)\n\n if macros:\n macro_parser = RPCMacroParser(results, self.config)\n for node in macro_parser.parse_remote(macros):\n macro_overrides[node.unique_id] = node\n\n self._base_manifest.macros.update(macro_overrides)\n rpc_parser = RPCCallParser(\n results=results,\n project=self.config,\n root_project=self.config,\n macro_manifest=self._base_manifest,\n )\n node = rpc_parser.parse_remote(sql, name)\n self.manifest = ParserUtils.add_new_refs(\n manifest=self._base_manifest,\n current_project=self.config,\n node=node,\n macros=macro_overrides\n )\n\n # don't write our new, weird manifest!\n self.linker = compile_manifest(self.config, self.manifest, write=False)\n return node\n\n def _raise_set_error(self):\n if self._raise_next_tick is not None:\n raise self._raise_next_tick\n\n def _in_thread(self, node, thread_done):\n runner = self.get_runner(node)\n try:\n self.node_results.append(runner.safe_run(self.manifest))\n except Exception as exc:\n logger.debug('Got exception {}'.format(exc), exc_info=True)\n self._raise_next_tick = exc\n finally:\n thread_done.set()\n\n def handle_request(\n self, params: RPCExecParameters\n ) -> RemoteCallableResult:\n # we could get a ctrl+c at any time, including during parsing.\n thread = None\n started = datetime.utcnow()\n try:\n node = self._get_exec_node(params.name, 
params.sql, params.macros)\n\n selected_uids = [node.unique_id]\n self.runtime_cleanup(selected_uids)\n\n thread_done = threading.Event()\n thread = threading.Thread(target=self._in_thread,\n args=(node, thread_done))\n thread.start()\n thread_done.wait()\n except KeyboardInterrupt:\n adapter = get_adapter(self.config)\n if adapter.is_cancelable():\n\n for conn_name in adapter.cancel_open_connections():\n logger.debug('canceled query {}'.format(conn_name))\n if thread:\n thread.join()\n else:\n msg = (\"The {} adapter does not support query \"\n \"cancellation. Some queries may still be \"\n \"running!\".format(adapter.type()))\n\n logger.debug(msg)\n\n raise dbt.exceptions.RPCKilledException(signal.SIGINT)\n\n self._raise_set_error()\n\n ended = datetime.utcnow()\n elapsed = (ended - started).total_seconds()\n return self.get_result(\n results=self.node_results,\n elapsed_time=elapsed,\n generated_at=ended,\n )\n\n\nclass RemoteCompileTask(_RPCExecTask):\n METHOD_NAME = 'compile_sql'\n\n def get_runner_type(self):\n return RPCCompileRunner\n\n\nclass RemoteRunTask(_RPCExecTask, RunTask):\n METHOD_NAME = 'run_sql'\n\n def get_runner_type(self):\n return RPCExecuteRunner\n\n\nclass RemoteCompileProjectTask(RPCTask):\n METHOD_NAME = 'compile'\n\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self.manifest = manifest.deepcopy(config=config)\n\n def load_manifest(self):\n # we started out with a manifest!\n pass\n\n def handle_request(\n self, params: RPCCompileProjectParameters\n ) -> RemoteCallableResult:\n self.args.models = self._listify(params.models)\n self.args.exclude = self._listify(params.exclude)\n\n results = self.run()\n return results\n\n\nclass RemoteRunProjectTask(RPCTask, RunTask):\n METHOD_NAME = 'run'\n\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self.manifest = manifest.deepcopy(config=config)\n\n def load_manifest(self):\n # we started out with a manifest!\n pass\n\n def handle_request(\n self, params: RPCCompileProjectParameters\n ) -> RemoteCallableResult:\n self.args.models = self._listify(params.models)\n self.args.exclude = self._listify(params.exclude)\n\n results = self.run()\n return results\n\n\nclass RemoteSeedProjectTask(RPCTask, SeedTask):\n METHOD_NAME = 'seed'\n\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self.manifest = manifest.deepcopy(config=config)\n\n def load_manifest(self):\n # we started out with a manifest!\n pass\n\n def handle_request(\n self, params: RPCSeedProjectParameters\n ) -> RemoteCallableResult:\n self.args.show = params.show\n\n results = self.run()\n return results\n\n\nclass RemoteTestProjectTask(RPCTask, TestTask):\n METHOD_NAME = 'test'\n\n def __init__(self, args, config, manifest):\n super().__init__(args, config)\n self.manifest = manifest.deepcopy(config=config)\n\n def load_manifest(self):\n # we started out with a manifest!\n pass\n\n def handle_request(\n self, params: RPCTestProjectParameters,\n ) -> RemoteCallableResult:\n self.args.models = self._listify(params.models)\n self.args.exclude = self._listify(params.exclude)\n self.args.data = params.data\n self.args.schema = params.schema\n\n results = self.run()\n return results\n", "path": "core/dbt/task/remote.py"}]}
| 2,773 | 396 |
gh_patches_debug_5355
|
rasdani/github-patches
|
git_diff
|
saulpw__visidata-491
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] [v2.-4dev] dup-rows/-deep does not copy rows
I found that if I try and do a `dup-rows-deep` with a DirSheet I get the following error with no rows in the sheet (columns are correct and as expected).
This only appears to happen with a DirSheet; I just tried with a TextSheet, which works as expected.
```
Traceback (most recent call last):
File "/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/threads.py", line 201, in _toplevelTryFunc
t.status = func(*args, **kwargs)
File "/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/threads.py", line 80, in _async_deepcopy
newlist.append(deepcopy(r))
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/copy.py", line 281, in _reconstruct
if hasattr(y, '__setstate__'):
File "/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/path.py", line 77, in __getattr__
r = getattr(self._path, k)
File "/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/path.py", line 77, in __getattr__
r = getattr(self._path, k)
File "/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/path.py", line 77, in __getattr__
r = getattr(self._path, k)
[Previous line repeated 492 more times]
RecursionError: maximum recursion depth exceeded
```
### Replicate:
1. Use v2.-4dev branch
2. Open a DirSheet with `vd .`
3. Do a deep copy of the sheet with `dup-rows-deep`
### Result:
A sheet with no rows, columns as expected for a DirSheet, and the error message above.
### Expected:
A deep copy of the original DirSheet with all rows (including those selected)
### cmdlog:
```
sheet col row longname input keystrokes comment
SqliteSheet header set-option 0
UsvSheet delimiter set-option ␞
UsvSheet row_delimiter set-option ␟
override disp_date_fmt set-option %Y-%m-%d %H:%M:%S
open-file . o
_files dup-rows-deep gz" open duplicate sheet with deepcopy of all rows
```
Also, am I correct in understanding that if I make a deep copy, modifications I make to that copy should propagate to the original sheet? And this should include selecting/deselecting rows?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `visidata/path.py`
Content:
```
1 import os
2 import os.path
3 import sys
4 import pathlib
5 from urllib.parse import urlparse, urlunparse
6
7 from visidata import *
8
9 option('encoding', 'utf-8', 'encoding passed to codecs.open', replay=True)
10 option('encoding_errors', 'surrogateescape', 'encoding_errors passed to codecs.open', replay=True)
11
12 @functools.lru_cache()
13 def vstat(path, force=False):
14 try:
15 return os.stat(path)
16 except Exception as e:
17 return None
18
19 def filesize(path):
20 if hasattr(path, 'filesize') and path.filesize is not None:
21 return path.filesize
22 if path.fp or path.is_url():
23 return 0
24 st = path.stat() # vstat(path)
25 return st and st.st_size
26
27 def modtime(path):
28 st = path.stat()
29 return st and st.st_mtime
30
31
32 class Path(os.PathLike):
33 'File and path-handling class, modeled on `pathlib.Path`.'
34 def __init__(self, given, fp=None, lines=None, filesize=None):
35 # Resolve pathname shell variables and ~userdir
36 self.given = os.path.expandvars(os.path.expanduser(given))
37 self.fp = fp
38 self.lines = lines or [] # shared among all RepeatFile instances
39 self.filesize = filesize
40 self.rfile = None
41
42 @functools.lru_cache()
43 def stat(self, force=False):
44 return self._path.stat()
45
46 @property
47 def given(self):
48 return self._given
49
50 @given.setter
51 def given(self, given):
52 self._given = given
53 if isinstance(given, os.PathLike):
54 self._path = given
55 else:
56 self._path = pathlib.Path(given)
57
58 self.ext = self.suffix[1:]
59 if self.suffix:
60 self.name = self._path.name[:-len(self.suffix)]
61 else:
62 self.name = self._path.name
63
64 # check if file is compressed
65 if self.suffix in ['.gz', '.bz2', '.xz']:
66 self.compression = self.ext
67 uncompressedpath = Path(self.given[:-len(self.suffix)])
68 self.name = uncompressedpath.name
69 self.ext = uncompressedpath.ext
70 else:
71 self.compression = None
72
73 def __getattr__(self, k):
74 if hasattr(self.__dict__, k):
75 r = getattr(self.__dict__, k)
76 else:
77 r = getattr(self._path, k)
78 if isinstance(r, pathlib.Path):
79 return Path(r)
80 return r
81
82 def __fspath__(self):
83 return self._path.__fspath__()
84
85 def __lt__(self, a):
86 return self._path.__lt__(a)
87
88 def __truediv__(self, a):
89 return Path(self._path.__truediv__(a))
90
91 def open_text(self, mode='rt'):
92 # rfile makes a single-access fp reusable
93
94 if self.rfile:
95 return self.rfile
96
97 if self.fp:
98 self.rfile = RepeatFile(fp=self.fp)
99 return self.rfile
100
101 if 't' not in mode:
102 mode += 't'
103
104 if self.given == '-':
105 if 'r' in mode:
106 return vd._stdin
107 elif 'w' in mode or 'a' in mode:
108 # convert 'a' to 'w' for stdout: https://bugs.python.org/issue27805
109 return open(os.dup(vd._stdout.fileno()), 'wt')
110 else:
111 error('invalid mode "%s" for Path.open_text()' % mode)
112 return sys.stderr
113
114 return self.open(mode=mode, encoding=options.encoding, errors=options.encoding_errors)
115
116 def read_text(self, *args):
117 if self.lines:
118 return RepeatFile(iter_lines=self.lines).read()
119 elif self.fp:
120 return self.fp.read()
121 else:
122 return self._path.read_text(*args)
123
124 def open(self, *args, **kwargs):
125 fn = self
126 if self.compression == 'gz':
127 import gzip
128 return gzip.open(fn, *args, **kwargs)
129 elif self.compression == 'bz2':
130 import bz2
131 return bz2.open(fn, *args, **kwargs)
132 elif self.compression == 'xz':
133 import lzma
134 return lzma.open(fn, *args, **kwargs)
135 else:
136 return self._path.open(*args, **kwargs)
137
138 def __iter__(self):
139 with Progress(total=filesize(self)) as prog:
140 for i, line in enumerate(self.open_text()):
141 prog.addProgress(len(line))
142 yield line[:-1]
143
144 def open_bytes(self, mode='rb'):
145 if 'b' not in mode:
146 mode += 'b'
147 return self.open(mode=mode)
148
149 def read_bytes(self):
150 with self.open(mode='rb') as fp:
151 return fp.read()
152
153 def is_url(self):
154 return '://' in self.given
155
156 def __str__(self):
157 if self.is_url():
158 return self.given
159 return str(self._path)
160
161 @functools.lru_cache()
162 def stat(self, force=False):
163 try:
164 if not self.is_url():
165 return self._path.stat()
166 except Exception as e:
167 return None
168
169 def exists(self):
170 if self.fp or self.is_url():
171 return True
172 return self._path.exists()
173
174 @property
175 def scheme(self):
176 if self.is_url():
177 return urlparse(self.given).scheme
178
179 def with_name(self, name):
180 if self.is_url():
181 urlparts = list(urlparse(self.given))
182 urlparts[2] = '/'.join(Path(urlparts[2])._parts[1:-1] + [name])
183 return Path(urlunparse(urlparts))
184 else:
185 return Path(self._from_parsed_parts(self._drv, self._root, self._parts[:-1] + [name]))
186
187
188 class RepeatFile:
189 def __init__(self, *, fp=None, iter_lines=None):
190 'Provide either fp or iter_lines, and lines will be filled from it.'
191 self.fp = fp
192 self.iter_lines = iter_lines
193 self.lines = []
194 self.iter = RepeatFileIter(self)
195
196 def __enter__(self):
197 self.iter = RepeatFileIter(self)
198 return self
199
200 def __exit__(self, a,b,c):
201 pass
202
203 def read(self, n=None):
204 r = ''
205 if n is None:
206 n = 10**12 # some too huge number
207 while len(r) < n:
208 try:
209 s = next(self.iter)
210 r += s + '\n'
211 n += len(r)
212 except StopIteration:
213 break # end of file
214 return r
215
216 def seek(self, n):
217 assert n == 0, 'RepeatFile can only seek to beginning'
218 self.iter = RepeatFileIter(self)
219
220 def __iter__(self):
221 return RepeatFileIter(self)
222
223 def __next__(self):
224 return next(self.iter)
225
226 def exists(self):
227 return True
228
229
230 class RepeatFileIter:
231 def __init__(self, rf):
232 self.rf = rf
233 self.nextIndex = 0
234
235 def __iter__(self):
236 return RepeatFileIter(self.rf)
237
238 def __next__(self):
239 if self.nextIndex < len(self.rf.lines):
240 r = self.rf.lines[self.nextIndex]
241 elif self.rf.iter_lines:
242 try:
243 r = next(self.rf.iter_lines)
244 self.rf.lines.append(r)
245 except StopIteration:
246 self.rf.iter_lines = None
247 raise
248 elif self.rf.fp:
249 try:
250 r = next(self.rf.fp)
251 self.rf.lines.append(r)
252 except StopIteration:
253 self.rf.fp = None
254 raise
255 else:
256 raise StopIteration()
257
258
259 self.nextIndex += 1
260 return r
261
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/visidata/path.py b/visidata/path.py
--- a/visidata/path.py
+++ b/visidata/path.py
@@ -74,7 +74,10 @@
if hasattr(self.__dict__, k):
r = getattr(self.__dict__, k)
else:
- r = getattr(self._path, k)
+ if self.__dict__.get('_path', None) is not None:
+ r = getattr(self._path, k)
+ else:
+ raise AttributeError(k)
if isinstance(r, pathlib.Path):
return Path(r)
return r
|
{"golden_diff": "diff --git a/visidata/path.py b/visidata/path.py\n--- a/visidata/path.py\n+++ b/visidata/path.py\n@@ -74,7 +74,10 @@\n if hasattr(self.__dict__, k):\n r = getattr(self.__dict__, k)\n else:\n- r = getattr(self._path, k)\n+ if self.__dict__.get('_path', None) is not None:\n+ r = getattr(self._path, k)\n+ else:\n+ raise AttributeError(k)\n if isinstance(r, pathlib.Path):\n return Path(r)\n return r\n", "issue": "[Bug] [v2.-4dev] dup-rows/-deep does not copy rows\nI found that if I try and do a `dup-rows-deep` with a DirSheet I get the following error with no rows in the sheet (columns are correct and as expected).\r\n\r\nThis only appears to happen with a DirSheet, I just tried with a TextSheet which works as expected.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/threads.py\", line 201, in _toplevelTryFunc\r\n t.status = func(*args, **kwargs)\r\n File \"/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/threads.py\", line 80, in _async_deepcopy\r\n newlist.append(deepcopy(r))\r\n File \"/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/copy.py\", line 180, in deepcopy\r\n y = _reconstruct(x, memo, *rv)\r\n File \"/usr/local/Cellar/python/3.7.6_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/copy.py\", line 281, in _reconstruct\r\n if hasattr(y, '__setstate__'):\r\n File \"/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/path.py\", line 77, in __getattr__\r\n r = getattr(self._path, k)\r\n File \"/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/path.py\", line 77, in __getattr__\r\n r = getattr(self._path, k)\r\n File \"/path/vd_plugins/radare2/r2/lib/python3.7/site-packages/visidata-2._4dev-py3.7.egg/visidata/path.py\", line 77, in __getattr__\r\n r = getattr(self._path, k)\r\n [Previous line repeated 492 more times]\r\nRecursionError: maximum recursion depth exceeded\r\n```\r\n### Replicate:\r\n\r\n1. Use v2.-4dev branch\r\n2. Open a DirSheet with `vd .`\r\n3. Do a deep copy of the sheet with `dup-rows-deep`\r\n\r\n### Result:\r\nA sheet with no rows, columns as expected for a DirSheet, and the error message above.\r\n\r\n### Expected:\r\nA deep copy of the original DirSheet with all rows (including those selected)\r\n\r\n### cmdlog:\r\n```\r\nsheet\tcol\trow\tlongname\tinput\tkeystrokes\tcomment\r\n\tSqliteSheet\theader\tset-option\t0\r\n\tUsvSheet\tdelimiter\tset-option\t\u241e\r\n\tUsvSheet\trow_delimiter\tset-option\t\u241f\r\n\toverride\tdisp_date_fmt\tset-option\t%Y-%m-%d %H:%M:%S\r\n\t\t\topen-file\t.\to\r\n_files\t\t\tdup-rows-deep\t\tgz\"\topen duplicate sheet with deepcopy of all rows\r\n```\r\n\r\n\r\nAlso, am I correct in understanding that if I make a deep copy, modifications I make to that copy should propagate to the original sheet? 
And this should include selecting/deselecting rows?\n", "before_files": [{"content": "import os\nimport os.path\nimport sys\nimport pathlib\nfrom urllib.parse import urlparse, urlunparse\n\nfrom visidata import *\n\noption('encoding', 'utf-8', 'encoding passed to codecs.open', replay=True)\noption('encoding_errors', 'surrogateescape', 'encoding_errors passed to codecs.open', replay=True)\n\[email protected]_cache()\ndef vstat(path, force=False):\n try:\n return os.stat(path)\n except Exception as e:\n return None\n\ndef filesize(path):\n if hasattr(path, 'filesize') and path.filesize is not None:\n return path.filesize\n if path.fp or path.is_url():\n return 0\n st = path.stat() # vstat(path)\n return st and st.st_size\n\ndef modtime(path):\n st = path.stat()\n return st and st.st_mtime\n\n\nclass Path(os.PathLike):\n 'File and path-handling class, modeled on `pathlib.Path`.'\n def __init__(self, given, fp=None, lines=None, filesize=None):\n # Resolve pathname shell variables and ~userdir\n self.given = os.path.expandvars(os.path.expanduser(given))\n self.fp = fp\n self.lines = lines or [] # shared among all RepeatFile instances\n self.filesize = filesize\n self.rfile = None\n\n @functools.lru_cache()\n def stat(self, force=False):\n return self._path.stat()\n\n @property\n def given(self):\n return self._given\n\n @given.setter\n def given(self, given):\n self._given = given\n if isinstance(given, os.PathLike):\n self._path = given\n else:\n self._path = pathlib.Path(given)\n\n self.ext = self.suffix[1:]\n if self.suffix:\n self.name = self._path.name[:-len(self.suffix)]\n else:\n self.name = self._path.name\n\n # check if file is compressed\n if self.suffix in ['.gz', '.bz2', '.xz']:\n self.compression = self.ext\n uncompressedpath = Path(self.given[:-len(self.suffix)])\n self.name = uncompressedpath.name\n self.ext = uncompressedpath.ext\n else:\n self.compression = None\n\n def __getattr__(self, k):\n if hasattr(self.__dict__, k):\n r = getattr(self.__dict__, k)\n else:\n r = getattr(self._path, k)\n if isinstance(r, pathlib.Path):\n return Path(r)\n return r\n\n def __fspath__(self):\n return self._path.__fspath__()\n\n def __lt__(self, a):\n return self._path.__lt__(a)\n\n def __truediv__(self, a):\n return Path(self._path.__truediv__(a))\n\n def open_text(self, mode='rt'):\n # rfile makes a single-access fp reusable\n\n if self.rfile:\n return self.rfile\n\n if self.fp:\n self.rfile = RepeatFile(fp=self.fp)\n return self.rfile\n\n if 't' not in mode:\n mode += 't'\n\n if self.given == '-':\n if 'r' in mode:\n return vd._stdin\n elif 'w' in mode or 'a' in mode:\n # convert 'a' to 'w' for stdout: https://bugs.python.org/issue27805\n return open(os.dup(vd._stdout.fileno()), 'wt')\n else:\n error('invalid mode \"%s\" for Path.open_text()' % mode)\n return sys.stderr\n\n return self.open(mode=mode, encoding=options.encoding, errors=options.encoding_errors)\n\n def read_text(self, *args):\n if self.lines:\n return RepeatFile(iter_lines=self.lines).read()\n elif self.fp:\n return self.fp.read()\n else:\n return self._path.read_text(*args)\n\n def open(self, *args, **kwargs):\n fn = self\n if self.compression == 'gz':\n import gzip\n return gzip.open(fn, *args, **kwargs)\n elif self.compression == 'bz2':\n import bz2\n return bz2.open(fn, *args, **kwargs)\n elif self.compression == 'xz':\n import lzma\n return lzma.open(fn, *args, **kwargs)\n else:\n return self._path.open(*args, **kwargs)\n\n def __iter__(self):\n with Progress(total=filesize(self)) as prog:\n for i, line in 
enumerate(self.open_text()):\n prog.addProgress(len(line))\n yield line[:-1]\n\n def open_bytes(self, mode='rb'):\n if 'b' not in mode:\n mode += 'b'\n return self.open(mode=mode)\n\n def read_bytes(self):\n with self.open(mode='rb') as fp:\n return fp.read()\n\n def is_url(self):\n return '://' in self.given\n\n def __str__(self):\n if self.is_url():\n return self.given\n return str(self._path)\n\n @functools.lru_cache()\n def stat(self, force=False):\n try:\n if not self.is_url():\n return self._path.stat()\n except Exception as e:\n return None\n\n def exists(self):\n if self.fp or self.is_url():\n return True\n return self._path.exists()\n\n @property\n def scheme(self):\n if self.is_url():\n return urlparse(self.given).scheme\n\n def with_name(self, name):\n if self.is_url():\n urlparts = list(urlparse(self.given))\n urlparts[2] = '/'.join(Path(urlparts[2])._parts[1:-1] + [name])\n return Path(urlunparse(urlparts))\n else:\n return Path(self._from_parsed_parts(self._drv, self._root, self._parts[:-1] + [name]))\n\n\nclass RepeatFile:\n def __init__(self, *, fp=None, iter_lines=None):\n 'Provide either fp or iter_lines, and lines will be filled from it.'\n self.fp = fp\n self.iter_lines = iter_lines\n self.lines = []\n self.iter = RepeatFileIter(self)\n\n def __enter__(self):\n self.iter = RepeatFileIter(self)\n return self\n\n def __exit__(self, a,b,c):\n pass\n\n def read(self, n=None):\n r = ''\n if n is None:\n n = 10**12 # some too huge number\n while len(r) < n:\n try:\n s = next(self.iter)\n r += s + '\\n'\n n += len(r)\n except StopIteration:\n break # end of file\n return r\n\n def seek(self, n):\n assert n == 0, 'RepeatFile can only seek to beginning'\n self.iter = RepeatFileIter(self)\n\n def __iter__(self):\n return RepeatFileIter(self)\n\n def __next__(self):\n return next(self.iter)\n\n def exists(self):\n return True\n\n\nclass RepeatFileIter:\n def __init__(self, rf):\n self.rf = rf\n self.nextIndex = 0\n\n def __iter__(self):\n return RepeatFileIter(self.rf)\n\n def __next__(self):\n if self.nextIndex < len(self.rf.lines):\n r = self.rf.lines[self.nextIndex]\n elif self.rf.iter_lines:\n try:\n r = next(self.rf.iter_lines)\n self.rf.lines.append(r)\n except StopIteration:\n self.rf.iter_lines = None\n raise\n elif self.rf.fp:\n try:\n r = next(self.rf.fp)\n self.rf.lines.append(r)\n except StopIteration:\n self.rf.fp = None\n raise\n else:\n raise StopIteration()\n\n\n self.nextIndex += 1\n return r\n", "path": "visidata/path.py"}], "after_files": [{"content": "import os\nimport os.path\nimport sys\nimport pathlib\nfrom urllib.parse import urlparse, urlunparse\n\nfrom visidata import *\n\noption('encoding', 'utf-8', 'encoding passed to codecs.open', replay=True)\noption('encoding_errors', 'surrogateescape', 'encoding_errors passed to codecs.open', replay=True)\n\[email protected]_cache()\ndef vstat(path, force=False):\n try:\n return os.stat(path)\n except Exception as e:\n return None\n\ndef filesize(path):\n if hasattr(path, 'filesize') and path.filesize is not None:\n return path.filesize\n if path.fp or path.is_url():\n return 0\n st = path.stat() # vstat(path)\n return st and st.st_size\n\ndef modtime(path):\n st = path.stat()\n return st and st.st_mtime\n\n\nclass Path(os.PathLike):\n 'File and path-handling class, modeled on `pathlib.Path`.'\n def __init__(self, given, fp=None, lines=None, filesize=None):\n # Resolve pathname shell variables and ~userdir\n self.given = os.path.expandvars(os.path.expanduser(given))\n self.fp = fp\n self.lines = lines or [] # shared 
among all RepeatFile instances\n self.filesize = filesize\n self.rfile = None\n\n @functools.lru_cache()\n def stat(self, force=False):\n return self._path.stat()\n\n @property\n def given(self):\n return self._given\n\n @given.setter\n def given(self, given):\n self._given = given\n if isinstance(given, os.PathLike):\n self._path = given\n else:\n self._path = pathlib.Path(given)\n\n self.ext = self.suffix[1:]\n if self.suffix:\n self.name = self._path.name[:-len(self.suffix)]\n else:\n self.name = self._path.name\n\n # check if file is compressed\n if self.suffix in ['.gz', '.bz2', '.xz']:\n self.compression = self.ext\n uncompressedpath = Path(self.given[:-len(self.suffix)])\n self.name = uncompressedpath.name\n self.ext = uncompressedpath.ext\n else:\n self.compression = None\n\n def __getattr__(self, k):\n if hasattr(self.__dict__, k):\n r = getattr(self.__dict__, k)\n else:\n if self.__dict__.get('_path', None) is not None:\n r = getattr(self._path, k)\n else:\n raise AttributeError(k)\n if isinstance(r, pathlib.Path):\n return Path(r)\n return r\n\n def __fspath__(self):\n return self._path.__fspath__()\n\n def __lt__(self, a):\n return self._path.__lt__(a)\n\n def __truediv__(self, a):\n return Path(self._path.__truediv__(a))\n\n def open_text(self, mode='rt'):\n # rfile makes a single-access fp reusable\n\n if self.rfile:\n return self.rfile\n\n if self.fp:\n self.rfile = RepeatFile(fp=self.fp)\n return self.rfile\n\n if 't' not in mode:\n mode += 't'\n\n if self.given == '-':\n if 'r' in mode:\n return vd._stdin\n elif 'w' in mode or 'a' in mode:\n # convert 'a' to 'w' for stdout: https://bugs.python.org/issue27805\n return open(os.dup(vd._stdout.fileno()), 'wt')\n else:\n error('invalid mode \"%s\" for Path.open_text()' % mode)\n return sys.stderr\n\n return self.open(mode=mode, encoding=options.encoding, errors=options.encoding_errors)\n\n def read_text(self, *args):\n if self.lines:\n return RepeatFile(iter_lines=self.lines).read()\n elif self.fp:\n return self.fp.read()\n else:\n return self._path.read_text(*args)\n\n def open(self, *args, **kwargs):\n fn = self\n if self.compression == 'gz':\n import gzip\n return gzip.open(fn, *args, **kwargs)\n elif self.compression == 'bz2':\n import bz2\n return bz2.open(fn, *args, **kwargs)\n elif self.compression == 'xz':\n import lzma\n return lzma.open(fn, *args, **kwargs)\n else:\n return self._path.open(*args, **kwargs)\n\n def __iter__(self):\n with Progress(total=filesize(self)) as prog:\n for i, line in enumerate(self.open_text()):\n prog.addProgress(len(line))\n yield line[:-1]\n\n def open_bytes(self, mode='rb'):\n if 'b' not in mode:\n mode += 'b'\n return self.open(mode=mode)\n\n def read_bytes(self):\n with self.open(mode='rb') as fp:\n return fp.read()\n\n def is_url(self):\n return '://' in self.given\n\n def __str__(self):\n if self.is_url():\n return self.given\n return str(self._path)\n\n @functools.lru_cache()\n def stat(self, force=False):\n try:\n if not self.is_url():\n return self._path.stat()\n except Exception as e:\n return None\n\n def exists(self):\n if self.fp or self.is_url():\n return True\n return self._path.exists()\n\n @property\n def scheme(self):\n if self.is_url():\n return urlparse(self.given).scheme\n\n def with_name(self, name):\n if self.is_url():\n urlparts = list(urlparse(self.given))\n urlparts[2] = '/'.join(Path(urlparts[2])._parts[1:-1] + [name])\n return Path(urlunparse(urlparts))\n else:\n return Path(self._from_parsed_parts(self._drv, self._root, self._parts[:-1] + [name]))\n\n\nclass 
RepeatFile:\n def __init__(self, *, fp=None, iter_lines=None):\n 'Provide either fp or iter_lines, and lines will be filled from it.'\n self.fp = fp\n self.iter_lines = iter_lines\n self.lines = []\n self.iter = RepeatFileIter(self)\n\n def __enter__(self):\n self.iter = RepeatFileIter(self)\n return self\n\n def __exit__(self, a,b,c):\n pass\n\n def read(self, n=None):\n r = ''\n if n is None:\n n = 10**12 # some too huge number\n while len(r) < n:\n try:\n s = next(self.iter)\n r += s + '\\n'\n n += len(r)\n except StopIteration:\n break # end of file\n return r\n\n def seek(self, n):\n assert n == 0, 'RepeatFile can only seek to beginning'\n self.iter = RepeatFileIter(self)\n\n def __iter__(self):\n return RepeatFileIter(self)\n\n def __next__(self):\n return next(self.iter)\n\n def exists(self):\n return True\n\n\nclass RepeatFileIter:\n def __init__(self, rf):\n self.rf = rf\n self.nextIndex = 0\n\n def __iter__(self):\n return RepeatFileIter(self.rf)\n\n def __next__(self):\n if self.nextIndex < len(self.rf.lines):\n r = self.rf.lines[self.nextIndex]\n elif self.rf.iter_lines:\n try:\n r = next(self.rf.iter_lines)\n self.rf.lines.append(r)\n except StopIteration:\n self.rf.iter_lines = None\n raise\n elif self.rf.fp:\n try:\n r = next(self.rf.fp)\n self.rf.lines.append(r)\n except StopIteration:\n self.rf.fp = None\n raise\n else:\n raise StopIteration()\n\n\n self.nextIndex += 1\n return r\n", "path": "visidata/path.py"}]}
| 3,441 | 133 |
gh_patches_debug_12771
|
rasdani/github-patches
|
git_diff
|
qutebrowser__qutebrowser-2242
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
q<Fn> tries to record macro
I'm not sure if this is solved in #2113, but currently when doing `q<Fn>` I get `Recording macro ''...`
cc @blyxxyz
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutebrowser/keyinput/modeparsers.py`
Content:
```
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2016 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """KeyChainParser for "hint" and "normal" modes.
21
22 Module attributes:
23 STARTCHARS: Possible chars for starting a commandline input.
24 """
25
26 import traceback
27
28 from PyQt5.QtCore import pyqtSlot, Qt
29
30 from qutebrowser.commands import cmdexc
31 from qutebrowser.config import config
32 from qutebrowser.keyinput import keyparser
33 from qutebrowser.utils import usertypes, log, message, objreg, utils
34
35
36 STARTCHARS = ":/?"
37 LastPress = usertypes.enum('LastPress', ['none', 'filtertext', 'keystring'])
38
39
40 class NormalKeyParser(keyparser.CommandKeyParser):
41
42 """KeyParser for normal mode with added STARTCHARS detection and more.
43
44 Attributes:
45 _partial_timer: Timer to clear partial keypresses.
46 """
47
48 def __init__(self, win_id, parent=None):
49 super().__init__(win_id, parent, supports_count=True,
50 supports_chains=True)
51 self.read_config('normal')
52 self._partial_timer = usertypes.Timer(self, 'partial-match')
53 self._partial_timer.setSingleShot(True)
54 self._inhibited = False
55 self._inhibited_timer = usertypes.Timer(self, 'normal-inhibited')
56 self._inhibited_timer.setSingleShot(True)
57
58 def __repr__(self):
59 return utils.get_repr(self)
60
61 def _handle_single_key(self, e):
62 """Override _handle_single_key to abort if the key is a startchar.
63
64 Args:
65 e: the KeyPressEvent from Qt.
66
67 Return:
68 A self.Match member.
69 """
70 txt = e.text().strip()
71 if self._inhibited:
72 self._debug_log("Ignoring key '{}', because the normal mode is "
73 "currently inhibited.".format(txt))
74 return self.Match.none
75 match = super()._handle_single_key(e)
76 if match == self.Match.partial:
77 timeout = config.get('input', 'partial-timeout')
78 if timeout != 0:
79 self._partial_timer.setInterval(timeout)
80 self._partial_timer.timeout.connect(self._clear_partial_match)
81 self._partial_timer.start()
82 return match
83
84 def set_inhibited_timeout(self, timeout):
85 if timeout != 0:
86 self._debug_log("Inhibiting the normal mode for {}ms.".format(
87 timeout))
88 self._inhibited = True
89 self._inhibited_timer.setInterval(timeout)
90 self._inhibited_timer.timeout.connect(self._clear_inhibited)
91 self._inhibited_timer.start()
92
93 @pyqtSlot()
94 def _clear_partial_match(self):
95 """Clear a partial keystring after a timeout."""
96 self._debug_log("Clearing partial keystring {}".format(
97 self._keystring))
98 self._keystring = ''
99 self.keystring_updated.emit(self._keystring)
100
101 @pyqtSlot()
102 def _clear_inhibited(self):
103 """Reset inhibition state after a timeout."""
104 self._debug_log("Releasing inhibition state of normal mode.")
105 self._inhibited = False
106
107 @pyqtSlot()
108 def _stop_timers(self):
109 super()._stop_timers()
110 self._partial_timer.stop()
111 try:
112 self._partial_timer.timeout.disconnect(self._clear_partial_match)
113 except TypeError:
114 # no connections
115 pass
116 self._inhibited_timer.stop()
117 try:
118 self._inhibited_timer.timeout.disconnect(self._clear_inhibited)
119 except TypeError:
120 # no connections
121 pass
122
123
124 class PromptKeyParser(keyparser.CommandKeyParser):
125
126 """KeyParser for yes/no prompts."""
127
128 def __init__(self, win_id, parent=None):
129 super().__init__(win_id, parent, supports_count=False,
130 supports_chains=True)
131 # We don't want an extra section for this in the config, so we just
132 # abuse the prompt section.
133 self.read_config('prompt')
134
135 def __repr__(self):
136 return utils.get_repr(self)
137
138
139 class HintKeyParser(keyparser.CommandKeyParser):
140
141 """KeyChainParser for hints.
142
143 Attributes:
144 _filtertext: The text to filter with.
145 _last_press: The nature of the last keypress, a LastPress member.
146 """
147
148 def __init__(self, win_id, parent=None):
149 super().__init__(win_id, parent, supports_count=False,
150 supports_chains=True)
151 self._filtertext = ''
152 self._last_press = LastPress.none
153 self.read_config('hint')
154 self.keystring_updated.connect(self.on_keystring_updated)
155
156 def _handle_special_key(self, e):
157 """Override _handle_special_key to handle string filtering.
158
159 Return True if the keypress has been handled, and False if not.
160
161 Args:
162 e: the KeyPressEvent from Qt.
163
164 Return:
165 True if event has been handled, False otherwise.
166 """
167 log.keyboard.debug("Got special key 0x{:x} text {}".format(
168 e.key(), e.text()))
169 hintmanager = objreg.get('hintmanager', scope='tab',
170 window=self._win_id, tab='current')
171 if e.key() == Qt.Key_Backspace:
172 log.keyboard.debug("Got backspace, mode {}, filtertext '{}', "
173 "keystring '{}'".format(self._last_press,
174 self._filtertext,
175 self._keystring))
176 if self._last_press == LastPress.filtertext and self._filtertext:
177 self._filtertext = self._filtertext[:-1]
178 hintmanager.filter_hints(self._filtertext)
179 return True
180 elif self._last_press == LastPress.keystring and self._keystring:
181 self._keystring = self._keystring[:-1]
182 self.keystring_updated.emit(self._keystring)
183 if not self._keystring and self._filtertext:
184 # Switch back to hint filtering mode (this can happen only
185 # in numeric mode after the number has been deleted).
186 hintmanager.filter_hints(self._filtertext)
187 self._last_press = LastPress.filtertext
188 return True
189 else:
190 return super()._handle_special_key(e)
191 elif hintmanager.current_mode() != 'number':
192 return super()._handle_special_key(e)
193 elif not e.text():
194 return super()._handle_special_key(e)
195 else:
196 self._filtertext += e.text()
197 hintmanager.filter_hints(self._filtertext)
198 self._last_press = LastPress.filtertext
199 return True
200
201 def handle(self, e):
202 """Handle a new keypress and call the respective handlers.
203
204 Args:
205 e: the KeyPressEvent from Qt
206
207 Returns:
208 True if the match has been handled, False otherwise.
209 """
210 match = self._handle_single_key(e)
211 if match == self.Match.partial:
212 self.keystring_updated.emit(self._keystring)
213 self._last_press = LastPress.keystring
214 return True
215 elif match == self.Match.definitive:
216 self._last_press = LastPress.none
217 return True
218 elif match == self.Match.other:
219 pass
220 elif match == self.Match.none:
221 # We couldn't find a keychain so we check if it's a special key.
222 return self._handle_special_key(e)
223 else:
224 raise ValueError("Got invalid match type {}!".format(match))
225
226 def execute(self, cmdstr, keytype, count=None):
227 """Handle a completed keychain."""
228 if not isinstance(keytype, self.Type):
229 raise TypeError("Type {} is no Type member!".format(keytype))
230 if keytype == self.Type.chain:
231 hintmanager = objreg.get('hintmanager', scope='tab',
232 window=self._win_id, tab='current')
233 hintmanager.handle_partial_key(cmdstr)
234 else:
235 # execute as command
236 super().execute(cmdstr, keytype, count)
237
238 def update_bindings(self, strings, preserve_filter=False):
239 """Update bindings when the hint strings changed.
240
241 Args:
242 strings: A list of hint strings.
243 preserve_filter: Whether to keep the current value of
244 `self._filtertext`.
245 """
246 self.bindings = {s: s for s in strings}
247 if not preserve_filter:
248 self._filtertext = ''
249
250 @pyqtSlot(str)
251 def on_keystring_updated(self, keystr):
252 """Update hintmanager when the keystring was updated."""
253 hintmanager = objreg.get('hintmanager', scope='tab',
254 window=self._win_id, tab='current')
255 hintmanager.handle_partial_key(keystr)
256
257
258 class CaretKeyParser(keyparser.CommandKeyParser):
259
260 """KeyParser for caret mode."""
261
262 passthrough = True
263
264 def __init__(self, win_id, parent=None):
265 super().__init__(win_id, parent, supports_count=True,
266 supports_chains=True)
267 self.read_config('caret')
268
269
270 class RegisterKeyParser(keyparser.CommandKeyParser):
271
272 """KeyParser for modes that record a register key.
273
274 Attributes:
275 _mode: One of KeyMode.set_mark, KeyMode.jump_mark, KeyMode.record_macro
276 and KeyMode.run_macro.
277 """
278
279 def __init__(self, win_id, mode, parent=None):
280 super().__init__(win_id, parent, supports_count=False,
281 supports_chains=False)
282 self._mode = mode
283 self.read_config('register')
284
285 def handle(self, e):
286 """Override handle to always match the next key and use the register.
287
288 Args:
289 e: the KeyPressEvent from Qt.
290
291 Return:
292 True if event has been handled, False otherwise.
293 """
294 if super().handle(e):
295 return True
296
297 if utils.keyevent_to_string(e) is None:
298 # this is a modifier key, let it pass and keep going
299 return False
300
301 key = e.text()
302
303 tabbed_browser = objreg.get('tabbed-browser', scope='window',
304 window=self._win_id)
305 macro_recorder = objreg.get('macro-recorder')
306
307 try:
308 if self._mode == usertypes.KeyMode.set_mark:
309 tabbed_browser.set_mark(key)
310 elif self._mode == usertypes.KeyMode.jump_mark:
311 tabbed_browser.jump_mark(key)
312 elif self._mode == usertypes.KeyMode.record_macro:
313 macro_recorder.record_macro(key)
314 elif self._mode == usertypes.KeyMode.run_macro:
315 macro_recorder.run_macro(self._win_id, key)
316 else:
317 raise ValueError(
318 "{} is not a valid register mode".format(self._mode))
319 except (cmdexc.CommandMetaError, cmdexc.CommandError) as err:
320 message.error(str(err), stack=traceback.format_exc())
321
322 self.request_leave.emit(self._mode, "valid register key", True)
323
324 return True
325
326 @pyqtSlot(str)
327 def on_keyconfig_changed(self, mode):
328 """RegisterKeyParser has no config section (no bindable keys)."""
329 pass
330
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qutebrowser/keyinput/modeparsers.py b/qutebrowser/keyinput/modeparsers.py
--- a/qutebrowser/keyinput/modeparsers.py
+++ b/qutebrowser/keyinput/modeparsers.py
@@ -294,12 +294,12 @@
if super().handle(e):
return True
- if utils.keyevent_to_string(e) is None:
- # this is a modifier key, let it pass and keep going
- return False
-
key = e.text()
+ if key == '' or utils.keyevent_to_string(e) is None:
+ # this is not a proper register key, let it pass and keep going
+ return False
+
tabbed_browser = objreg.get('tabbed-browser', scope='window',
window=self._win_id)
macro_recorder = objreg.get('macro-recorder')
|
{"golden_diff": "diff --git a/qutebrowser/keyinput/modeparsers.py b/qutebrowser/keyinput/modeparsers.py\n--- a/qutebrowser/keyinput/modeparsers.py\n+++ b/qutebrowser/keyinput/modeparsers.py\n@@ -294,12 +294,12 @@\n if super().handle(e):\n return True\n \n- if utils.keyevent_to_string(e) is None:\n- # this is a modifier key, let it pass and keep going\n- return False\n-\n key = e.text()\n \n+ if key == '' or utils.keyevent_to_string(e) is None:\n+ # this is not a proper register key, let it pass and keep going\n+ return False\n+\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=self._win_id)\n macro_recorder = objreg.get('macro-recorder')\n", "issue": "q<Fn> tries to record macro\nI'm not sure if this is solved in #2113, but currently when doing `q<Fn>` I get `Recording macro ''...`\r\n\r\ncc @blyxxyz\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2016 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"KeyChainParser for \"hint\" and \"normal\" modes.\n\nModule attributes:\n STARTCHARS: Possible chars for starting a commandline input.\n\"\"\"\n\nimport traceback\n\nfrom PyQt5.QtCore import pyqtSlot, Qt\n\nfrom qutebrowser.commands import cmdexc\nfrom qutebrowser.config import config\nfrom qutebrowser.keyinput import keyparser\nfrom qutebrowser.utils import usertypes, log, message, objreg, utils\n\n\nSTARTCHARS = \":/?\"\nLastPress = usertypes.enum('LastPress', ['none', 'filtertext', 'keystring'])\n\n\nclass NormalKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyParser for normal mode with added STARTCHARS detection and more.\n\n Attributes:\n _partial_timer: Timer to clear partial keypresses.\n \"\"\"\n\n def __init__(self, win_id, parent=None):\n super().__init__(win_id, parent, supports_count=True,\n supports_chains=True)\n self.read_config('normal')\n self._partial_timer = usertypes.Timer(self, 'partial-match')\n self._partial_timer.setSingleShot(True)\n self._inhibited = False\n self._inhibited_timer = usertypes.Timer(self, 'normal-inhibited')\n self._inhibited_timer.setSingleShot(True)\n\n def __repr__(self):\n return utils.get_repr(self)\n\n def _handle_single_key(self, e):\n \"\"\"Override _handle_single_key to abort if the key is a startchar.\n\n Args:\n e: the KeyPressEvent from Qt.\n\n Return:\n A self.Match member.\n \"\"\"\n txt = e.text().strip()\n if self._inhibited:\n self._debug_log(\"Ignoring key '{}', because the normal mode is \"\n \"currently inhibited.\".format(txt))\n return self.Match.none\n match = super()._handle_single_key(e)\n if match == self.Match.partial:\n timeout = config.get('input', 'partial-timeout')\n if timeout != 0:\n self._partial_timer.setInterval(timeout)\n self._partial_timer.timeout.connect(self._clear_partial_match)\n self._partial_timer.start()\n return match\n\n def 
set_inhibited_timeout(self, timeout):\n if timeout != 0:\n self._debug_log(\"Inhibiting the normal mode for {}ms.\".format(\n timeout))\n self._inhibited = True\n self._inhibited_timer.setInterval(timeout)\n self._inhibited_timer.timeout.connect(self._clear_inhibited)\n self._inhibited_timer.start()\n\n @pyqtSlot()\n def _clear_partial_match(self):\n \"\"\"Clear a partial keystring after a timeout.\"\"\"\n self._debug_log(\"Clearing partial keystring {}\".format(\n self._keystring))\n self._keystring = ''\n self.keystring_updated.emit(self._keystring)\n\n @pyqtSlot()\n def _clear_inhibited(self):\n \"\"\"Reset inhibition state after a timeout.\"\"\"\n self._debug_log(\"Releasing inhibition state of normal mode.\")\n self._inhibited = False\n\n @pyqtSlot()\n def _stop_timers(self):\n super()._stop_timers()\n self._partial_timer.stop()\n try:\n self._partial_timer.timeout.disconnect(self._clear_partial_match)\n except TypeError:\n # no connections\n pass\n self._inhibited_timer.stop()\n try:\n self._inhibited_timer.timeout.disconnect(self._clear_inhibited)\n except TypeError:\n # no connections\n pass\n\n\nclass PromptKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyParser for yes/no prompts.\"\"\"\n\n def __init__(self, win_id, parent=None):\n super().__init__(win_id, parent, supports_count=False,\n supports_chains=True)\n # We don't want an extra section for this in the config, so we just\n # abuse the prompt section.\n self.read_config('prompt')\n\n def __repr__(self):\n return utils.get_repr(self)\n\n\nclass HintKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyChainParser for hints.\n\n Attributes:\n _filtertext: The text to filter with.\n _last_press: The nature of the last keypress, a LastPress member.\n \"\"\"\n\n def __init__(self, win_id, parent=None):\n super().__init__(win_id, parent, supports_count=False,\n supports_chains=True)\n self._filtertext = ''\n self._last_press = LastPress.none\n self.read_config('hint')\n self.keystring_updated.connect(self.on_keystring_updated)\n\n def _handle_special_key(self, e):\n \"\"\"Override _handle_special_key to handle string filtering.\n\n Return True if the keypress has been handled, and False if not.\n\n Args:\n e: the KeyPressEvent from Qt.\n\n Return:\n True if event has been handled, False otherwise.\n \"\"\"\n log.keyboard.debug(\"Got special key 0x{:x} text {}\".format(\n e.key(), e.text()))\n hintmanager = objreg.get('hintmanager', scope='tab',\n window=self._win_id, tab='current')\n if e.key() == Qt.Key_Backspace:\n log.keyboard.debug(\"Got backspace, mode {}, filtertext '{}', \"\n \"keystring '{}'\".format(self._last_press,\n self._filtertext,\n self._keystring))\n if self._last_press == LastPress.filtertext and self._filtertext:\n self._filtertext = self._filtertext[:-1]\n hintmanager.filter_hints(self._filtertext)\n return True\n elif self._last_press == LastPress.keystring and self._keystring:\n self._keystring = self._keystring[:-1]\n self.keystring_updated.emit(self._keystring)\n if not self._keystring and self._filtertext:\n # Switch back to hint filtering mode (this can happen only\n # in numeric mode after the number has been deleted).\n hintmanager.filter_hints(self._filtertext)\n self._last_press = LastPress.filtertext\n return True\n else:\n return super()._handle_special_key(e)\n elif hintmanager.current_mode() != 'number':\n return super()._handle_special_key(e)\n elif not e.text():\n return super()._handle_special_key(e)\n else:\n self._filtertext += e.text()\n hintmanager.filter_hints(self._filtertext)\n 
self._last_press = LastPress.filtertext\n return True\n\n def handle(self, e):\n \"\"\"Handle a new keypress and call the respective handlers.\n\n Args:\n e: the KeyPressEvent from Qt\n\n Returns:\n True if the match has been handled, False otherwise.\n \"\"\"\n match = self._handle_single_key(e)\n if match == self.Match.partial:\n self.keystring_updated.emit(self._keystring)\n self._last_press = LastPress.keystring\n return True\n elif match == self.Match.definitive:\n self._last_press = LastPress.none\n return True\n elif match == self.Match.other:\n pass\n elif match == self.Match.none:\n # We couldn't find a keychain so we check if it's a special key.\n return self._handle_special_key(e)\n else:\n raise ValueError(\"Got invalid match type {}!\".format(match))\n\n def execute(self, cmdstr, keytype, count=None):\n \"\"\"Handle a completed keychain.\"\"\"\n if not isinstance(keytype, self.Type):\n raise TypeError(\"Type {} is no Type member!\".format(keytype))\n if keytype == self.Type.chain:\n hintmanager = objreg.get('hintmanager', scope='tab',\n window=self._win_id, tab='current')\n hintmanager.handle_partial_key(cmdstr)\n else:\n # execute as command\n super().execute(cmdstr, keytype, count)\n\n def update_bindings(self, strings, preserve_filter=False):\n \"\"\"Update bindings when the hint strings changed.\n\n Args:\n strings: A list of hint strings.\n preserve_filter: Whether to keep the current value of\n `self._filtertext`.\n \"\"\"\n self.bindings = {s: s for s in strings}\n if not preserve_filter:\n self._filtertext = ''\n\n @pyqtSlot(str)\n def on_keystring_updated(self, keystr):\n \"\"\"Update hintmanager when the keystring was updated.\"\"\"\n hintmanager = objreg.get('hintmanager', scope='tab',\n window=self._win_id, tab='current')\n hintmanager.handle_partial_key(keystr)\n\n\nclass CaretKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyParser for caret mode.\"\"\"\n\n passthrough = True\n\n def __init__(self, win_id, parent=None):\n super().__init__(win_id, parent, supports_count=True,\n supports_chains=True)\n self.read_config('caret')\n\n\nclass RegisterKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyParser for modes that record a register key.\n\n Attributes:\n _mode: One of KeyMode.set_mark, KeyMode.jump_mark, KeyMode.record_macro\n and KeyMode.run_macro.\n \"\"\"\n\n def __init__(self, win_id, mode, parent=None):\n super().__init__(win_id, parent, supports_count=False,\n supports_chains=False)\n self._mode = mode\n self.read_config('register')\n\n def handle(self, e):\n \"\"\"Override handle to always match the next key and use the register.\n\n Args:\n e: the KeyPressEvent from Qt.\n\n Return:\n True if event has been handled, False otherwise.\n \"\"\"\n if super().handle(e):\n return True\n\n if utils.keyevent_to_string(e) is None:\n # this is a modifier key, let it pass and keep going\n return False\n\n key = e.text()\n\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=self._win_id)\n macro_recorder = objreg.get('macro-recorder')\n\n try:\n if self._mode == usertypes.KeyMode.set_mark:\n tabbed_browser.set_mark(key)\n elif self._mode == usertypes.KeyMode.jump_mark:\n tabbed_browser.jump_mark(key)\n elif self._mode == usertypes.KeyMode.record_macro:\n macro_recorder.record_macro(key)\n elif self._mode == usertypes.KeyMode.run_macro:\n macro_recorder.run_macro(self._win_id, key)\n else:\n raise ValueError(\n \"{} is not a valid register mode\".format(self._mode))\n except (cmdexc.CommandMetaError, cmdexc.CommandError) as err:\n 
message.error(str(err), stack=traceback.format_exc())\n\n self.request_leave.emit(self._mode, \"valid register key\", True)\n\n return True\n\n @pyqtSlot(str)\n def on_keyconfig_changed(self, mode):\n \"\"\"RegisterKeyParser has no config section (no bindable keys).\"\"\"\n pass\n", "path": "qutebrowser/keyinput/modeparsers.py"}], "after_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2016 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"KeyChainParser for \"hint\" and \"normal\" modes.\n\nModule attributes:\n STARTCHARS: Possible chars for starting a commandline input.\n\"\"\"\n\nimport traceback\n\nfrom PyQt5.QtCore import pyqtSlot, Qt\n\nfrom qutebrowser.commands import cmdexc\nfrom qutebrowser.config import config\nfrom qutebrowser.keyinput import keyparser\nfrom qutebrowser.utils import usertypes, log, message, objreg, utils\n\n\nSTARTCHARS = \":/?\"\nLastPress = usertypes.enum('LastPress', ['none', 'filtertext', 'keystring'])\n\n\nclass NormalKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyParser for normal mode with added STARTCHARS detection and more.\n\n Attributes:\n _partial_timer: Timer to clear partial keypresses.\n \"\"\"\n\n def __init__(self, win_id, parent=None):\n super().__init__(win_id, parent, supports_count=True,\n supports_chains=True)\n self.read_config('normal')\n self._partial_timer = usertypes.Timer(self, 'partial-match')\n self._partial_timer.setSingleShot(True)\n self._inhibited = False\n self._inhibited_timer = usertypes.Timer(self, 'normal-inhibited')\n self._inhibited_timer.setSingleShot(True)\n\n def __repr__(self):\n return utils.get_repr(self)\n\n def _handle_single_key(self, e):\n \"\"\"Override _handle_single_key to abort if the key is a startchar.\n\n Args:\n e: the KeyPressEvent from Qt.\n\n Return:\n A self.Match member.\n \"\"\"\n txt = e.text().strip()\n if self._inhibited:\n self._debug_log(\"Ignoring key '{}', because the normal mode is \"\n \"currently inhibited.\".format(txt))\n return self.Match.none\n match = super()._handle_single_key(e)\n if match == self.Match.partial:\n timeout = config.get('input', 'partial-timeout')\n if timeout != 0:\n self._partial_timer.setInterval(timeout)\n self._partial_timer.timeout.connect(self._clear_partial_match)\n self._partial_timer.start()\n return match\n\n def set_inhibited_timeout(self, timeout):\n if timeout != 0:\n self._debug_log(\"Inhibiting the normal mode for {}ms.\".format(\n timeout))\n self._inhibited = True\n self._inhibited_timer.setInterval(timeout)\n self._inhibited_timer.timeout.connect(self._clear_inhibited)\n self._inhibited_timer.start()\n\n @pyqtSlot()\n def _clear_partial_match(self):\n \"\"\"Clear a partial keystring after a timeout.\"\"\"\n self._debug_log(\"Clearing partial keystring {}\".format(\n self._keystring))\n self._keystring = ''\n 
self.keystring_updated.emit(self._keystring)\n\n @pyqtSlot()\n def _clear_inhibited(self):\n \"\"\"Reset inhibition state after a timeout.\"\"\"\n self._debug_log(\"Releasing inhibition state of normal mode.\")\n self._inhibited = False\n\n @pyqtSlot()\n def _stop_timers(self):\n super()._stop_timers()\n self._partial_timer.stop()\n try:\n self._partial_timer.timeout.disconnect(self._clear_partial_match)\n except TypeError:\n # no connections\n pass\n self._inhibited_timer.stop()\n try:\n self._inhibited_timer.timeout.disconnect(self._clear_inhibited)\n except TypeError:\n # no connections\n pass\n\n\nclass PromptKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyParser for yes/no prompts.\"\"\"\n\n def __init__(self, win_id, parent=None):\n super().__init__(win_id, parent, supports_count=False,\n supports_chains=True)\n # We don't want an extra section for this in the config, so we just\n # abuse the prompt section.\n self.read_config('prompt')\n\n def __repr__(self):\n return utils.get_repr(self)\n\n\nclass HintKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyChainParser for hints.\n\n Attributes:\n _filtertext: The text to filter with.\n _last_press: The nature of the last keypress, a LastPress member.\n \"\"\"\n\n def __init__(self, win_id, parent=None):\n super().__init__(win_id, parent, supports_count=False,\n supports_chains=True)\n self._filtertext = ''\n self._last_press = LastPress.none\n self.read_config('hint')\n self.keystring_updated.connect(self.on_keystring_updated)\n\n def _handle_special_key(self, e):\n \"\"\"Override _handle_special_key to handle string filtering.\n\n Return True if the keypress has been handled, and False if not.\n\n Args:\n e: the KeyPressEvent from Qt.\n\n Return:\n True if event has been handled, False otherwise.\n \"\"\"\n log.keyboard.debug(\"Got special key 0x{:x} text {}\".format(\n e.key(), e.text()))\n hintmanager = objreg.get('hintmanager', scope='tab',\n window=self._win_id, tab='current')\n if e.key() == Qt.Key_Backspace:\n log.keyboard.debug(\"Got backspace, mode {}, filtertext '{}', \"\n \"keystring '{}'\".format(self._last_press,\n self._filtertext,\n self._keystring))\n if self._last_press == LastPress.filtertext and self._filtertext:\n self._filtertext = self._filtertext[:-1]\n hintmanager.filter_hints(self._filtertext)\n return True\n elif self._last_press == LastPress.keystring and self._keystring:\n self._keystring = self._keystring[:-1]\n self.keystring_updated.emit(self._keystring)\n if not self._keystring and self._filtertext:\n # Switch back to hint filtering mode (this can happen only\n # in numeric mode after the number has been deleted).\n hintmanager.filter_hints(self._filtertext)\n self._last_press = LastPress.filtertext\n return True\n else:\n return super()._handle_special_key(e)\n elif hintmanager.current_mode() != 'number':\n return super()._handle_special_key(e)\n elif not e.text():\n return super()._handle_special_key(e)\n else:\n self._filtertext += e.text()\n hintmanager.filter_hints(self._filtertext)\n self._last_press = LastPress.filtertext\n return True\n\n def handle(self, e):\n \"\"\"Handle a new keypress and call the respective handlers.\n\n Args:\n e: the KeyPressEvent from Qt\n\n Returns:\n True if the match has been handled, False otherwise.\n \"\"\"\n match = self._handle_single_key(e)\n if match == self.Match.partial:\n self.keystring_updated.emit(self._keystring)\n self._last_press = LastPress.keystring\n return True\n elif match == self.Match.definitive:\n self._last_press = LastPress.none\n return 
True\n elif match == self.Match.other:\n pass\n elif match == self.Match.none:\n # We couldn't find a keychain so we check if it's a special key.\n return self._handle_special_key(e)\n else:\n raise ValueError(\"Got invalid match type {}!\".format(match))\n\n def execute(self, cmdstr, keytype, count=None):\n \"\"\"Handle a completed keychain.\"\"\"\n if not isinstance(keytype, self.Type):\n raise TypeError(\"Type {} is no Type member!\".format(keytype))\n if keytype == self.Type.chain:\n hintmanager = objreg.get('hintmanager', scope='tab',\n window=self._win_id, tab='current')\n hintmanager.handle_partial_key(cmdstr)\n else:\n # execute as command\n super().execute(cmdstr, keytype, count)\n\n def update_bindings(self, strings, preserve_filter=False):\n \"\"\"Update bindings when the hint strings changed.\n\n Args:\n strings: A list of hint strings.\n preserve_filter: Whether to keep the current value of\n `self._filtertext`.\n \"\"\"\n self.bindings = {s: s for s in strings}\n if not preserve_filter:\n self._filtertext = ''\n\n @pyqtSlot(str)\n def on_keystring_updated(self, keystr):\n \"\"\"Update hintmanager when the keystring was updated.\"\"\"\n hintmanager = objreg.get('hintmanager', scope='tab',\n window=self._win_id, tab='current')\n hintmanager.handle_partial_key(keystr)\n\n\nclass CaretKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyParser for caret mode.\"\"\"\n\n passthrough = True\n\n def __init__(self, win_id, parent=None):\n super().__init__(win_id, parent, supports_count=True,\n supports_chains=True)\n self.read_config('caret')\n\n\nclass RegisterKeyParser(keyparser.CommandKeyParser):\n\n \"\"\"KeyParser for modes that record a register key.\n\n Attributes:\n _mode: One of KeyMode.set_mark, KeyMode.jump_mark, KeyMode.record_macro\n and KeyMode.run_macro.\n \"\"\"\n\n def __init__(self, win_id, mode, parent=None):\n super().__init__(win_id, parent, supports_count=False,\n supports_chains=False)\n self._mode = mode\n self.read_config('register')\n\n def handle(self, e):\n \"\"\"Override handle to always match the next key and use the register.\n\n Args:\n e: the KeyPressEvent from Qt.\n\n Return:\n True if event has been handled, False otherwise.\n \"\"\"\n if super().handle(e):\n return True\n\n key = e.text()\n\n if key == '' or utils.keyevent_to_string(e) is None:\n # this is not a proper register key, let it pass and keep going\n return False\n\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=self._win_id)\n macro_recorder = objreg.get('macro-recorder')\n\n try:\n if self._mode == usertypes.KeyMode.set_mark:\n tabbed_browser.set_mark(key)\n elif self._mode == usertypes.KeyMode.jump_mark:\n tabbed_browser.jump_mark(key)\n elif self._mode == usertypes.KeyMode.record_macro:\n macro_recorder.record_macro(key)\n elif self._mode == usertypes.KeyMode.run_macro:\n macro_recorder.run_macro(self._win_id, key)\n else:\n raise ValueError(\n \"{} is not a valid register mode\".format(self._mode))\n except (cmdexc.CommandMetaError, cmdexc.CommandError) as err:\n message.error(str(err), stack=traceback.format_exc())\n\n self.request_leave.emit(self._mode, \"valid register key\", True)\n\n return True\n\n @pyqtSlot(str)\n def on_keyconfig_changed(self, mode):\n \"\"\"RegisterKeyParser has no config section (no bindable keys).\"\"\"\n pass\n", "path": "qutebrowser/keyinput/modeparsers.py"}]}
| 3,756 | 195 |
gh_patches_debug_60894
|
rasdani/github-patches
|
git_diff
|
tiangolo__fastapi-493
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FastAPI exceptions module mistakenly references the 'requests' package
**Describe the bug**
Starting up a FastAPI 0.38.0 app displays the following error:
```python
from fastapi import FastAPI
File ".../lib/site-packages/fastapi/__init__.py", line 7, in <module>
from .applications import FastAPI
File ".../lib/site-packages/fastapi/applications.py", line 3, in <module>
from fastapi import routing
File ".../lib/site-packages/fastapi/routing.py", line 7, in <module>
from fastapi.dependencies.models import Dependant
File ".../lib/site-packages/fastapi/dependencies/models.py", line 3, in <module>
from fastapi.security.base import SecurityBase
File ".../lib/site-packages/fastapi/security/__init__.py", line 2, in <module>
from .http import (
File ".../lib/site-packages/fastapi/security/http.py", line 5, in <module>
from fastapi.exceptions import HTTPException
File ".../lib/site-packages/fastapi/exceptions.py", line 5, in <module>
from requests import Request
ModuleNotFoundError: No module named 'requests'
```
**Expected behavior**
The app should start without import errors.
**Environment:**
- OS: Linux, Windows, and macOS
- FastAPI Version: 0.38.0
**Additional context**
It's likely the `from requests import Request` should be replaced with `from starlette.requests import Request` in line 5 of `fastapi/exceptions.py`
FastAPI exceptions module mistakenly references the 'requests' package
**Describe the bug**
Starting up a FastAPI 0.38.0 app displays the following error:
```python
from fastapi import FastAPI
File ".../lib/site-packages/fastapi/__init__.py", line 7, in <module>
from .applications import FastAPI
File ".../lib/site-packages/fastapi/applications.py", line 3, in <module>
from fastapi import routing
File ".../lib/site-packages/fastapi/routing.py", line 7, in <module>
from fastapi.dependencies.models import Dependant
File ".../lib/site-packages/fastapi/dependencies/models.py", line 3, in <module>
from fastapi.security.base import SecurityBase
File ".../lib/site-packages/fastapi/security/__init__.py", line 2, in <module>
from .http import (
File ".../lib/site-packages/fastapi/security/http.py", line 5, in <module>
from fastapi.exceptions import HTTPException
File ".../lib/site-packages/fastapi/exceptions.py", line 5, in <module>
from requests import Request
ModuleNotFoundError: No module named 'requests'
```
**Expected behavior**
The app should start without import errors.
**Environment:**
- OS: Linux, Windows, and macOS
- FastAPI Version: 0.38.0
**Additional context**
It's likely the `from requests import Request` should be replaced with `from starlette.requests import Request` in line 5 of `fastapi/exceptions.py`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fastapi/exceptions.py`
Content:
```
1 from typing import Any, Sequence
2
3 from pydantic import ValidationError
4 from pydantic.error_wrappers import ErrorList
5 from requests import Request
6 from starlette.exceptions import HTTPException as StarletteHTTPException
7 from starlette.websockets import WebSocket
8
9
10 class HTTPException(StarletteHTTPException):
11 def __init__(
12 self, status_code: int, detail: Any = None, headers: dict = None
13 ) -> None:
14 super().__init__(status_code=status_code, detail=detail)
15 self.headers = headers
16
17
18 class RequestValidationError(ValidationError):
19 def __init__(self, errors: Sequence[ErrorList]) -> None:
20 super().__init__(errors, Request)
21
22
23 class WebSocketRequestValidationError(ValidationError):
24 def __init__(self, errors: Sequence[ErrorList]) -> None:
25 super().__init__(errors, WebSocket)
26
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/fastapi/exceptions.py b/fastapi/exceptions.py
--- a/fastapi/exceptions.py
+++ b/fastapi/exceptions.py
@@ -2,8 +2,8 @@
from pydantic import ValidationError
from pydantic.error_wrappers import ErrorList
-from requests import Request
from starlette.exceptions import HTTPException as StarletteHTTPException
+from starlette.requests import Request
from starlette.websockets import WebSocket
|
{"golden_diff": "diff --git a/fastapi/exceptions.py b/fastapi/exceptions.py\n--- a/fastapi/exceptions.py\n+++ b/fastapi/exceptions.py\n@@ -2,8 +2,8 @@\n \n from pydantic import ValidationError\n from pydantic.error_wrappers import ErrorList\n-from requests import Request\n from starlette.exceptions import HTTPException as StarletteHTTPException\n+from starlette.requests import Request\n from starlette.websockets import WebSocket\n", "issue": "FastAPI exceptions module mistakenly references the 'requests' package\n**Describe the bug**\r\nStarting up a FastAPI 0.38.0 app displays the following error:\r\n\r\n```python\r\nfrom fastapi import FastAPI\r\n File \".../lib/site-packages/fastapi/__init__.py\", line 7, in <module>\r\n from .applications import FastAPI\r\n File \".../lib/site-packages/fastapi/applications.py\", line 3, in <module>\r\n from fastapi import routing\r\n File \".../lib/site-packages/fastapi/routing.py\", line 7, in <module>\r\n from fastapi.dependencies.models import Dependant\r\n File \".../lib/site-packages/fastapi/dependencies/models.py\", line 3, in <module>\r\n from fastapi.security.base import SecurityBase\r\n File \".../lib/site-packages/fastapi/security/__init__.py\", line 2, in <module>\r\n from .http import (\r\n File \".../lib/site-packages/fastapi/security/http.py\", line 5, in <module>\r\n from fastapi.exceptions import HTTPException\r\n File \".../lib/site-packages/fastapi/exceptions.py\", line 5, in <module>\r\n from requests import Request\r\nModuleNotFoundError: No module named 'requests'\r\n```\r\n\r\n**Expected behavior**\r\nThe app should start without import errors.\r\n\r\n**Environment:**\r\n - OS: Linux, Windows, and macOS\r\n - FastAPI Version: 0.38.0\r\n\r\n**Additional context**\r\nIt's likely the `from requests import Request` should be replaced with `from starlette.requests import Request` in line 5 of `fastapi/exceptions.py`\nFastAPI exceptions module mistakenly references the 'requests' package\n**Describe the bug**\r\nStarting up a FastAPI 0.38.0 app displays the following error:\r\n\r\n```python\r\nfrom fastapi import FastAPI\r\n File \".../lib/site-packages/fastapi/__init__.py\", line 7, in <module>\r\n from .applications import FastAPI\r\n File \".../lib/site-packages/fastapi/applications.py\", line 3, in <module>\r\n from fastapi import routing\r\n File \".../lib/site-packages/fastapi/routing.py\", line 7, in <module>\r\n from fastapi.dependencies.models import Dependant\r\n File \".../lib/site-packages/fastapi/dependencies/models.py\", line 3, in <module>\r\n from fastapi.security.base import SecurityBase\r\n File \".../lib/site-packages/fastapi/security/__init__.py\", line 2, in <module>\r\n from .http import (\r\n File \".../lib/site-packages/fastapi/security/http.py\", line 5, in <module>\r\n from fastapi.exceptions import HTTPException\r\n File \".../lib/site-packages/fastapi/exceptions.py\", line 5, in <module>\r\n from requests import Request\r\nModuleNotFoundError: No module named 'requests'\r\n```\r\n\r\n**Expected behavior**\r\nThe app should start without import errors.\r\n\r\n**Environment:**\r\n - OS: Linux, Windows, and macOS\r\n - FastAPI Version: 0.38.0\r\n\r\n**Additional context**\r\nIt's likely the `from requests import Request` should be replaced with `from starlette.requests import Request` in line 5 of `fastapi/exceptions.py`\n", "before_files": [{"content": "from typing import Any, Sequence\n\nfrom pydantic import ValidationError\nfrom pydantic.error_wrappers import ErrorList\nfrom requests import Request\nfrom 
starlette.exceptions import HTTPException as StarletteHTTPException\nfrom starlette.websockets import WebSocket\n\n\nclass HTTPException(StarletteHTTPException):\n def __init__(\n self, status_code: int, detail: Any = None, headers: dict = None\n ) -> None:\n super().__init__(status_code=status_code, detail=detail)\n self.headers = headers\n\n\nclass RequestValidationError(ValidationError):\n def __init__(self, errors: Sequence[ErrorList]) -> None:\n super().__init__(errors, Request)\n\n\nclass WebSocketRequestValidationError(ValidationError):\n def __init__(self, errors: Sequence[ErrorList]) -> None:\n super().__init__(errors, WebSocket)\n", "path": "fastapi/exceptions.py"}], "after_files": [{"content": "from typing import Any, Sequence\n\nfrom pydantic import ValidationError\nfrom pydantic.error_wrappers import ErrorList\nfrom starlette.exceptions import HTTPException as StarletteHTTPException\nfrom starlette.requests import Request\nfrom starlette.websockets import WebSocket\n\n\nclass HTTPException(StarletteHTTPException):\n def __init__(\n self, status_code: int, detail: Any = None, headers: dict = None\n ) -> None:\n super().__init__(status_code=status_code, detail=detail)\n self.headers = headers\n\n\nclass RequestValidationError(ValidationError):\n def __init__(self, errors: Sequence[ErrorList]) -> None:\n super().__init__(errors, Request)\n\n\nclass WebSocketRequestValidationError(ValidationError):\n def __init__(self, errors: Sequence[ErrorList]) -> None:\n super().__init__(errors, WebSocket)\n", "path": "fastapi/exceptions.py"}]}
| 1,164 | 95 |
gh_patches_debug_9408
|
rasdani/github-patches
|
git_diff
|
pytorch__pytorch-5108
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
torch.nn.DataParallel supporting unequal sizes
As documented [here](http://pytorch.org/docs/master/_modules/torch/nn/parallel/data_parallel.html):
```
The batch size should be larger than the number of GPUs used. It should
also be an integer multiple of the number of GPUs so that each chunk is the
same size (so that each GPU processes the same number of samples).
```
To use `torch.nn.DataParallel`, people should carefully set the batch size according to the number of gpus they plan to use, otherwise it will pop up errors.
This issue becomes more subtle when using `torch.utils.data.DataLoader` with `drop_last=False` by default. As the total number of training/validation samples varies with the dataset, the size of the last batch of data loaded by `torch.utils.data.DataLoader` is easy to become indivisible by the number of GPUs (e.g., 2,3,4,8,...).
A feature request would be:
supporting `torch.nn.DataParallel` with batch size indivisible by the number of GPUs used.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch/nn/parallel/data_parallel.py`
Content:
```
1 import torch
2 from ..modules import Module
3 from .scatter_gather import scatter_kwargs, gather
4 from .replicate import replicate
5 from .parallel_apply import parallel_apply
6
7
8 class DataParallel(Module):
9 r"""Implements data parallelism at the module level.
10
11 This container parallelizes the application of the given module by
12 splitting the input across the specified devices by chunking in the batch
13 dimension. In the forward pass, the module is replicated on each device,
14 and each replica handles a portion of the input. During the backwards
15 pass, gradients from each replica are summed into the original module.
16
17 The batch size should be larger than the number of GPUs used. It should
18 also be an integer multiple of the number of GPUs so that each chunk is the
19 same size (so that each GPU processes the same number of samples).
20
21 See also: :ref:`cuda-nn-dataparallel-instead`
22
23 Arbitrary positional and keyword inputs are allowed to be passed into
24 DataParallel EXCEPT Tensors. All variables will be scattered on dim
25 specified (default 0). Primitive types will be broadcasted, but all
26 other types will be a shallow copy and can be corrupted if written to in
27 the model's forward pass.
28
29 .. warning::
30 Forward and backwrad hooks defined on :attr:`module` and its submodules
31 won't be invoked anymore, unless the hooks are initialized in the
32 :meth:`forward` method.
33
34 Args:
35 module: module to be parallelized
36 device_ids: CUDA devices (default: all devices)
37 output_device: device location of output (default: device_ids[0])
38
39 Example::
40
41 >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
42 >>> output = net(input_var)
43 """
44
45 # TODO: update notes/cuda.rst when this class handles 8+ GPUs well
46
47 def __init__(self, module, device_ids=None, output_device=None, dim=0):
48 super(DataParallel, self).__init__()
49
50 if not torch.cuda.is_available():
51 self.module = module
52 self.device_ids = []
53 return
54
55 if device_ids is None:
56 device_ids = list(range(torch.cuda.device_count()))
57 if output_device is None:
58 output_device = device_ids[0]
59 self.dim = dim
60 self.module = module
61 self.device_ids = device_ids
62 self.output_device = output_device
63 if len(self.device_ids) == 1:
64 self.module.cuda(device_ids[0])
65
66 def forward(self, *inputs, **kwargs):
67 if not self.device_ids:
68 return self.module(*inputs, **kwargs)
69 inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
70 if len(self.device_ids) == 1:
71 return self.module(*inputs[0], **kwargs[0])
72 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
73 outputs = self.parallel_apply(replicas, inputs, kwargs)
74 return self.gather(outputs, self.output_device)
75
76 def replicate(self, module, device_ids):
77 return replicate(module, device_ids)
78
79 def scatter(self, inputs, kwargs, device_ids):
80 return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
81
82 def parallel_apply(self, replicas, inputs, kwargs):
83 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
84
85 def gather(self, outputs, output_device):
86 return gather(outputs, output_device, dim=self.dim)
87
88
89 def data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):
90 r"""Evaluates module(input) in parallel across the GPUs given in device_ids.
91
92 This is the functional version of the DataParallel module.
93
94 Args:
95 module: the module to evaluate in parallel
96 inputs: inputs to the module
97 device_ids: GPU ids on which to replicate module
98 output_device: GPU location of the output Use -1 to indicate the CPU.
99 (default: device_ids[0])
100 Returns:
101 a Variable containing the result of module(input) located on
102 output_device
103 """
104 if not isinstance(inputs, tuple):
105 inputs = (inputs,)
106
107 if device_ids is None:
108 device_ids = list(range(torch.cuda.device_count()))
109
110 if output_device is None:
111 output_device = device_ids[0]
112
113 inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)
114 if len(device_ids) == 1:
115 return module(*inputs[0], **module_kwargs[0])
116 used_device_ids = device_ids[:len(inputs)]
117 replicas = replicate(module, used_device_ids)
118 outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)
119 return gather(outputs, output_device, dim)
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torch/nn/parallel/data_parallel.py b/torch/nn/parallel/data_parallel.py
--- a/torch/nn/parallel/data_parallel.py
+++ b/torch/nn/parallel/data_parallel.py
@@ -14,9 +14,7 @@
and each replica handles a portion of the input. During the backwards
pass, gradients from each replica are summed into the original module.
- The batch size should be larger than the number of GPUs used. It should
- also be an integer multiple of the number of GPUs so that each chunk is the
- same size (so that each GPU processes the same number of samples).
+ The batch size should be larger than the number of GPUs used.
See also: :ref:`cuda-nn-dataparallel-instead`
|
{"golden_diff": "diff --git a/torch/nn/parallel/data_parallel.py b/torch/nn/parallel/data_parallel.py\n--- a/torch/nn/parallel/data_parallel.py\n+++ b/torch/nn/parallel/data_parallel.py\n@@ -14,9 +14,7 @@\n and each replica handles a portion of the input. During the backwards\n pass, gradients from each replica are summed into the original module.\n \n- The batch size should be larger than the number of GPUs used. It should\n- also be an integer multiple of the number of GPUs so that each chunk is the\n- same size (so that each GPU processes the same number of samples).\n+ The batch size should be larger than the number of GPUs used.\n \n See also: :ref:`cuda-nn-dataparallel-instead`\n", "issue": "torch.nn.DataParallel supporting unequal sizes\nAs documented [here](http://pytorch.org/docs/master/_modules/torch/nn/parallel/data_parallel.html):\r\n```\r\nThe batch size should be larger than the number of GPUs used. It should\r\n also be an integer multiple of the number of GPUs so that each chunk is the\r\n same size (so that each GPU processes the same number of samples).\r\n```\r\nTo use `torch.nn.DataParallel`, people should carefully set the batch size according to the number of gpus they plan to use, otherwise it will pop up errors. \r\n\r\nThis issue becomes more subtle when using `torch.utils.data.DataLoader` with `drop_last=False` by default. As the total number of training/validation samples varies with the dataset, the size of the last batch of data loaded by `torch.utils.data.DataLoader` is easy to become indivisible by the number of GPUs (e.g., 2,3,4,8,...).\r\n\r\nA feature request would be:\r\nsupporting `torch.nn.DataParallel` with batch size indivisible by the number of GPUs used.\n", "before_files": [{"content": "import torch\nfrom ..modules import Module\nfrom .scatter_gather import scatter_kwargs, gather\nfrom .replicate import replicate\nfrom .parallel_apply import parallel_apply\n\n\nclass DataParallel(Module):\n r\"\"\"Implements data parallelism at the module level.\n\n This container parallelizes the application of the given module by\n splitting the input across the specified devices by chunking in the batch\n dimension. In the forward pass, the module is replicated on each device,\n and each replica handles a portion of the input. During the backwards\n pass, gradients from each replica are summed into the original module.\n\n The batch size should be larger than the number of GPUs used. It should\n also be an integer multiple of the number of GPUs so that each chunk is the\n same size (so that each GPU processes the same number of samples).\n\n See also: :ref:`cuda-nn-dataparallel-instead`\n\n Arbitrary positional and keyword inputs are allowed to be passed into\n DataParallel EXCEPT Tensors. All variables will be scattered on dim\n specified (default 0). Primitive types will be broadcasted, but all\n other types will be a shallow copy and can be corrupted if written to in\n the model's forward pass.\n\n .. 
warning::\n Forward and backwrad hooks defined on :attr:`module` and its submodules\n won't be invoked anymore, unless the hooks are initialized in the\n :meth:`forward` method.\n\n Args:\n module: module to be parallelized\n device_ids: CUDA devices (default: all devices)\n output_device: device location of output (default: device_ids[0])\n\n Example::\n\n >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])\n >>> output = net(input_var)\n \"\"\"\n\n # TODO: update notes/cuda.rst when this class handles 8+ GPUs well\n\n def __init__(self, module, device_ids=None, output_device=None, dim=0):\n super(DataParallel, self).__init__()\n\n if not torch.cuda.is_available():\n self.module = module\n self.device_ids = []\n return\n\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n if output_device is None:\n output_device = device_ids[0]\n self.dim = dim\n self.module = module\n self.device_ids = device_ids\n self.output_device = output_device\n if len(self.device_ids) == 1:\n self.module.cuda(device_ids[0])\n\n def forward(self, *inputs, **kwargs):\n if not self.device_ids:\n return self.module(*inputs, **kwargs)\n inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)\n if len(self.device_ids) == 1:\n return self.module(*inputs[0], **kwargs[0])\n replicas = self.replicate(self.module, self.device_ids[:len(inputs)])\n outputs = self.parallel_apply(replicas, inputs, kwargs)\n return self.gather(outputs, self.output_device)\n\n def replicate(self, module, device_ids):\n return replicate(module, device_ids)\n\n def scatter(self, inputs, kwargs, device_ids):\n return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)\n\n def parallel_apply(self, replicas, inputs, kwargs):\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\n\n def gather(self, outputs, output_device):\n return gather(outputs, output_device, dim=self.dim)\n\n\ndef data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):\n r\"\"\"Evaluates module(input) in parallel across the GPUs given in device_ids.\n\n This is the functional version of the DataParallel module.\n\n Args:\n module: the module to evaluate in parallel\n inputs: inputs to the module\n device_ids: GPU ids on which to replicate module\n output_device: GPU location of the output Use -1 to indicate the CPU.\n (default: device_ids[0])\n Returns:\n a Variable containing the result of module(input) located on\n output_device\n \"\"\"\n if not isinstance(inputs, tuple):\n inputs = (inputs,)\n\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n\n if output_device is None:\n output_device = device_ids[0]\n\n inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)\n if len(device_ids) == 1:\n return module(*inputs[0], **module_kwargs[0])\n used_device_ids = device_ids[:len(inputs)]\n replicas = replicate(module, used_device_ids)\n outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)\n return gather(outputs, output_device, dim)\n", "path": "torch/nn/parallel/data_parallel.py"}], "after_files": [{"content": "import torch\nfrom ..modules import Module\nfrom .scatter_gather import scatter_kwargs, gather\nfrom .replicate import replicate\nfrom .parallel_apply import parallel_apply\n\n\nclass DataParallel(Module):\n r\"\"\"Implements data parallelism at the module level.\n\n This container parallelizes the application of the given module by\n splitting the input across the specified devices by 
chunking in the batch\n dimension. In the forward pass, the module is replicated on each device,\n and each replica handles a portion of the input. During the backwards\n pass, gradients from each replica are summed into the original module.\n\n The batch size should be larger than the number of GPUs used.\n\n See also: :ref:`cuda-nn-dataparallel-instead`\n\n Arbitrary positional and keyword inputs are allowed to be passed into\n DataParallel EXCEPT Tensors. All variables will be scattered on dim\n specified (default 0). Primitive types will be broadcasted, but all\n other types will be a shallow copy and can be corrupted if written to in\n the model's forward pass.\n\n .. warning::\n Forward and backwrad hooks defined on :attr:`module` and its submodules\n won't be invoked anymore, unless the hooks are initialized in the\n :meth:`forward` method.\n\n Args:\n module: module to be parallelized\n device_ids: CUDA devices (default: all devices)\n output_device: device location of output (default: device_ids[0])\n\n Example::\n\n >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])\n >>> output = net(input_var)\n \"\"\"\n\n # TODO: update notes/cuda.rst when this class handles 8+ GPUs well\n\n def __init__(self, module, device_ids=None, output_device=None, dim=0):\n super(DataParallel, self).__init__()\n\n if not torch.cuda.is_available():\n self.module = module\n self.device_ids = []\n return\n\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n if output_device is None:\n output_device = device_ids[0]\n self.dim = dim\n self.module = module\n self.device_ids = device_ids\n self.output_device = output_device\n if len(self.device_ids) == 1:\n self.module.cuda(device_ids[0])\n\n def forward(self, *inputs, **kwargs):\n if not self.device_ids:\n return self.module(*inputs, **kwargs)\n inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)\n if len(self.device_ids) == 1:\n return self.module(*inputs[0], **kwargs[0])\n replicas = self.replicate(self.module, self.device_ids[:len(inputs)])\n outputs = self.parallel_apply(replicas, inputs, kwargs)\n return self.gather(outputs, self.output_device)\n\n def replicate(self, module, device_ids):\n return replicate(module, device_ids)\n\n def scatter(self, inputs, kwargs, device_ids):\n return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)\n\n def parallel_apply(self, replicas, inputs, kwargs):\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\n\n def gather(self, outputs, output_device):\n return gather(outputs, output_device, dim=self.dim)\n\n\ndef data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):\n r\"\"\"Evaluates module(input) in parallel across the GPUs given in device_ids.\n\n This is the functional version of the DataParallel module.\n\n Args:\n module: the module to evaluate in parallel\n inputs: inputs to the module\n device_ids: GPU ids on which to replicate module\n output_device: GPU location of the output Use -1 to indicate the CPU.\n (default: device_ids[0])\n Returns:\n a Variable containing the result of module(input) located on\n output_device\n \"\"\"\n if not isinstance(inputs, tuple):\n inputs = (inputs,)\n\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n\n if output_device is None:\n output_device = device_ids[0]\n\n inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)\n if len(device_ids) == 1:\n return module(*inputs[0], **module_kwargs[0])\n 
used_device_ids = device_ids[:len(inputs)]\n replicas = replicate(module, used_device_ids)\n outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)\n return gather(outputs, output_device, dim)\n", "path": "torch/nn/parallel/data_parallel.py"}]}
| 1,783 | 176 |
gh_patches_debug_965
|
rasdani/github-patches
|
git_diff
|
tiangolo__fastapi-9468
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FastAPI tests in pydantic failing due to flask deprecation
### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
hope you don't mind me creating an issue, pydantic's 1.10.X tests are failing due to a new issue with running our fastapi tests, see
https://github.com/pydantic/pydantic/actions/runs/4832692304/jobs/8611783607?pr=5628
output from pydantic's tests:
```
==================================== ERRORS ====================================
______ ERROR collecting tests/test_tutorial/test_wsgi/test_tutorial001.py ______
tests/test_tutorial/test_wsgi/test_tutorial001.py:3: in <module>
from docs_src.wsgi.tutorial001 import app
docs_src/wsgi/tutorial001.py:3: in <module>
from flask import Flask, escape, request
<frozen importlib._bootstrap>:1075: in _handle_fromlist
???
/opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/flask/__init__.py:71: in __getattr__
warnings.warn(
E DeprecationWarning: 'flask.escape' is deprecated and will be removed in Flask 2.4. Import 'markupsafe.escape' instead.
=========================== short test summary info ============================
ERROR tests/test_tutorial/test_wsgi/test_tutorial001.py - DeprecationWarning: 'flask.escape' is deprecated and will be removed in Flask 2.4. Import 'markupsafe.escape'
```
related to https://github.com/pydantic/pydantic/pull/5628
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs_src/wsgi/tutorial001.py`
Content:
```
1 from fastapi import FastAPI
2 from fastapi.middleware.wsgi import WSGIMiddleware
3 from flask import Flask, escape, request
4
5 flask_app = Flask(__name__)
6
7
8 @flask_app.route("/")
9 def flask_main():
10 name = request.args.get("name", "World")
11 return f"Hello, {escape(name)} from Flask!"
12
13
14 app = FastAPI()
15
16
17 @app.get("/v2")
18 def read_main():
19 return {"message": "Hello World"}
20
21
22 app.mount("/v1", WSGIMiddleware(flask_app))
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs_src/wsgi/tutorial001.py b/docs_src/wsgi/tutorial001.py
--- a/docs_src/wsgi/tutorial001.py
+++ b/docs_src/wsgi/tutorial001.py
@@ -1,6 +1,7 @@
from fastapi import FastAPI
from fastapi.middleware.wsgi import WSGIMiddleware
-from flask import Flask, escape, request
+from flask import Flask, request
+from markupsafe import escape
flask_app = Flask(__name__)
|
{"golden_diff": "diff --git a/docs_src/wsgi/tutorial001.py b/docs_src/wsgi/tutorial001.py\n--- a/docs_src/wsgi/tutorial001.py\n+++ b/docs_src/wsgi/tutorial001.py\n@@ -1,6 +1,7 @@\n from fastapi import FastAPI\n from fastapi.middleware.wsgi import WSGIMiddleware\n-from flask import Flask, escape, request\n+from flask import Flask, request\n+from markupsafe import escape\n \n flask_app = Flask(__name__)\n", "issue": "FastAPI tests in pydantic failing due to flask deprecation\n### Privileged issue\n\n- [X] I'm @tiangolo or he asked me directly to create an issue here.\n\n### Issue Content\n\nhope you don't mind me creating an issue, pydantic's 1.10.X tests are failing due to a new issue with running our fastapi tests, see\r\n\r\nhttps://github.com/pydantic/pydantic/actions/runs/4832692304/jobs/8611783607?pr=5628\r\n\r\noutput from pydantic's tests:\r\n\r\n```\r\n==================================== ERRORS ====================================\r\n______ ERROR collecting tests/test_tutorial/test_wsgi/test_tutorial001.py ______\r\ntests/test_tutorial/test_wsgi/test_tutorial001.py:3: in <module>\r\n from docs_src.wsgi.tutorial001 import app\r\ndocs_src/wsgi/tutorial001.py:3: in <module>\r\n from flask import Flask, escape, request\r\n<frozen importlib._bootstrap>:1075: in _handle_fromlist\r\n ???\r\n/opt/hostedtoolcache/Python/3.10.11/x64/lib/python3.10/site-packages/flask/__init__.py:71: in __getattr__\r\n warnings.warn(\r\nE DeprecationWarning: 'flask.escape' is deprecated and will be removed in Flask 2.4. Import 'markupsafe.escape' instead.\r\n=========================== short test summary info ============================\r\nERROR tests/test_tutorial/test_wsgi/test_tutorial001.py - DeprecationWarning: 'flask.escape' is deprecated and will be removed in Flask 2.4. Import 'markupsafe.escape' \r\n```\r\n\r\nrelated to https://github.com/pydantic/pydantic/pull/5628\n", "before_files": [{"content": "from fastapi import FastAPI\nfrom fastapi.middleware.wsgi import WSGIMiddleware\nfrom flask import Flask, escape, request\n\nflask_app = Flask(__name__)\n\n\n@flask_app.route(\"/\")\ndef flask_main():\n name = request.args.get(\"name\", \"World\")\n return f\"Hello, {escape(name)} from Flask!\"\n\n\napp = FastAPI()\n\n\[email protected](\"/v2\")\ndef read_main():\n return {\"message\": \"Hello World\"}\n\n\napp.mount(\"/v1\", WSGIMiddleware(flask_app))\n", "path": "docs_src/wsgi/tutorial001.py"}], "after_files": [{"content": "from fastapi import FastAPI\nfrom fastapi.middleware.wsgi import WSGIMiddleware\nfrom flask import Flask, request\nfrom markupsafe import escape\n\nflask_app = Flask(__name__)\n\n\n@flask_app.route(\"/\")\ndef flask_main():\n name = request.args.get(\"name\", \"World\")\n return f\"Hello, {escape(name)} from Flask!\"\n\n\napp = FastAPI()\n\n\[email protected](\"/v2\")\ndef read_main():\n return {\"message\": \"Hello World\"}\n\n\napp.mount(\"/v1\", WSGIMiddleware(flask_app))\n", "path": "docs_src/wsgi/tutorial001.py"}]}
| 819 | 109 |
gh_patches_debug_4721
|
rasdani/github-patches
|
git_diff
|
opensearch-project__opensearch-build-3240
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: src/system/os.py does not correctly return architecture for bsd platform
### Describe the bug
Run `uname -m` will return follow in the freebsd:
```
amd64
```
The code here does not support `amd64` as input:
https://github.com/opensearch-project/opensearch-build/blob/main/src/system/os.py#L12-L19
```
def current_architecture() -> str:
architecture = subprocess.check_output(["uname", "-m"]).decode().strip()
if architecture == "x86_64":
return "x64"
elif architecture == "aarch64" or architecture == "arm64":
return "arm64"
else:
raise ValueError(f"Unsupported architecture: {architecture}")
```
Thanks.
### To reproduce
Run the build process on a freebsd server and see output:
```
$ ./build.sh manifests/2.4.0/opensearch-2.4.0.yml --component OpenSearch
Installing dependencies in . ...
Installing dependencies from Pipfile.lock (b36c9c)...
To activate this project's virtualenv, run pipenv shell.
Alternatively, run a command inside the virtualenv with pipenv run.
Running ./src/run_build.py manifests/2.4.0/opensearch-2.4.0.yml --component OpenSearch ...
2023-02-23 23:15:47 INFO Building in /tmp/tmpllimwxjs
2023-02-23 23:15:47 INFO Removing /tmp/tmpllimwxjs
Traceback (most recent call last):
File "./src/run_build.py", line 81, in <module>
sys.exit(main())
File "./src/run_build.py", line 55, in main
architecture=args.architecture or manifest.build.architecture,
File "/usr/share/opensearch/opensearch-build/src/build_workflow/build_target.py", line 45, in __init__
self.architecture = architecture or current_architecture()
File "/usr/share/opensearch/opensearch-build/src/system/os.py", line 20, in current_architecture
raise ValueError(f"Unsupported architecture: {architecture}")
ValueError: Unsupported architecture: amd64
```
### Expected behavior
The bsd x64 hosts can run the code without specifying --architecture x64.
### Screenshots
If applicable, add screenshots to help explain your problem.
### Host / Environment
_No response_
### Additional context
_No response_
### Relevant log output
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/system/os.py`
Content:
```
1 # Copyright OpenSearch Contributors
2 # SPDX-License-Identifier: Apache-2.0
3 #
4 # The OpenSearch Contributors require contributions made to
5 # this file be licensed under the Apache-2.0 license or a
6 # compatible open source license.
7
8 import os
9 import subprocess
10
11
12 def current_architecture() -> str:
13 architecture = subprocess.check_output(["uname", "-m"]).decode().strip()
14 if architecture == "x86_64":
15 return "x64"
16 elif architecture == "aarch64" or architecture == "arm64":
17 return "arm64"
18 else:
19 raise ValueError(f"Unsupported architecture: {architecture}")
20
21
22 def current_platform() -> str:
23 if os.name == "nt":
24 return "windows"
25 else:
26 return subprocess.check_output(["uname", "-s"]).decode().strip().lower()
27
28
29 def deb_architecture(architecture: str) -> str:
30 # This would convert arch from "current_architecture" to deb specific architecture alternatives
31
32 deb_architecture_map = {
33 "x64": "amd64",
34 "arm64": "arm64",
35 }
36
37 return deb_architecture_map[architecture]
38
39
40 def rpm_architecture(architecture: str) -> str:
41 # This would convert arch from "current_architecture" to rpm specific architecture alternatives
42
43 rpm_architecture_map = {
44 "x64": "x86_64",
45 "arm64": "aarch64",
46 }
47
48 return rpm_architecture_map[architecture]
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/system/os.py b/src/system/os.py
--- a/src/system/os.py
+++ b/src/system/os.py
@@ -11,7 +11,7 @@
def current_architecture() -> str:
architecture = subprocess.check_output(["uname", "-m"]).decode().strip()
- if architecture == "x86_64":
+ if architecture == "x86_64" or architecture == "amd64":
return "x64"
elif architecture == "aarch64" or architecture == "arm64":
return "arm64"
|
{"golden_diff": "diff --git a/src/system/os.py b/src/system/os.py\n--- a/src/system/os.py\n+++ b/src/system/os.py\n@@ -11,7 +11,7 @@\n \n def current_architecture() -> str:\n architecture = subprocess.check_output([\"uname\", \"-m\"]).decode().strip()\n- if architecture == \"x86_64\":\n+ if architecture == \"x86_64\" or architecture == \"amd64\":\n return \"x64\"\n elif architecture == \"aarch64\" or architecture == \"arm64\":\n return \"arm64\"\n", "issue": "[Bug]: src/system/os.py does not correctly return architecture for bsd platform\n### Describe the bug\r\n\r\n\r\nRun `uname -m` will return follow in the freebsd:\r\n```\r\namd64\r\n```\r\n\r\nThe code here does not support `amd64` as input:\r\nhttps://github.com/opensearch-project/opensearch-build/blob/main/src/system/os.py#L12-L19\r\n```\r\ndef current_architecture() -> str:\r\n architecture = subprocess.check_output([\"uname\", \"-m\"]).decode().strip()\r\n if architecture == \"x86_64\":\r\n return \"x64\"\r\n elif architecture == \"aarch64\" or architecture == \"arm64\":\r\n return \"arm64\"\r\n else:\r\n raise ValueError(f\"Unsupported architecture: {architecture}\")\r\n```\r\n\r\n\r\n\r\nThanks.\r\n\r\n\r\n### To reproduce\r\n\r\nRun the build process on a freebsd server and see output:\r\n```\r\n$ ./build.sh manifests/2.4.0/opensearch-2.4.0.yml --component OpenSearch\r\nInstalling dependencies in . ...\r\nInstalling dependencies from Pipfile.lock (b36c9c)...\r\nTo activate this project's virtualenv, run pipenv shell.\r\nAlternatively, run a command inside the virtualenv with pipenv run.\r\nRunning ./src/run_build.py manifests/2.4.0/opensearch-2.4.0.yml --component OpenSearch ...\r\n2023-02-23 23:15:47 INFO Building in /tmp/tmpllimwxjs\r\n2023-02-23 23:15:47 INFO Removing /tmp/tmpllimwxjs\r\nTraceback (most recent call last):\r\n File \"./src/run_build.py\", line 81, in <module>\r\n sys.exit(main())\r\n File \"./src/run_build.py\", line 55, in main\r\n architecture=args.architecture or manifest.build.architecture,\r\n File \"/usr/share/opensearch/opensearch-build/src/build_workflow/build_target.py\", line 45, in __init__\r\n self.architecture = architecture or current_architecture()\r\n File \"/usr/share/opensearch/opensearch-build/src/system/os.py\", line 20, in current_architecture\r\n raise ValueError(f\"Unsupported architecture: {architecture}\")\r\nValueError: Unsupported architecture: amd64\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nThe bsd x64 hosts can run the code without specifying --architecture x64.\r\n\r\n### Screenshots\r\n\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n### Host / Environment\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\r\n\r\n### Relevant log output\r\n\r\n_No response_\n", "before_files": [{"content": "# Copyright OpenSearch Contributors\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\nimport subprocess\n\n\ndef current_architecture() -> str:\n architecture = subprocess.check_output([\"uname\", \"-m\"]).decode().strip()\n if architecture == \"x86_64\":\n return \"x64\"\n elif architecture == \"aarch64\" or architecture == \"arm64\":\n return \"arm64\"\n else:\n raise ValueError(f\"Unsupported architecture: {architecture}\")\n\n\ndef current_platform() -> str:\n if os.name == \"nt\":\n return \"windows\"\n else:\n return subprocess.check_output([\"uname\", 
\"-s\"]).decode().strip().lower()\n\n\ndef deb_architecture(architecture: str) -> str:\n # This would convert arch from \"current_architecture\" to deb specific architecture alternatives\n\n deb_architecture_map = {\n \"x64\": \"amd64\",\n \"arm64\": \"arm64\",\n }\n\n return deb_architecture_map[architecture]\n\n\ndef rpm_architecture(architecture: str) -> str:\n # This would convert arch from \"current_architecture\" to rpm specific architecture alternatives\n\n rpm_architecture_map = {\n \"x64\": \"x86_64\",\n \"arm64\": \"aarch64\",\n }\n\n return rpm_architecture_map[architecture]\n", "path": "src/system/os.py"}], "after_files": [{"content": "# Copyright OpenSearch Contributors\n# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\nimport subprocess\n\n\ndef current_architecture() -> str:\n architecture = subprocess.check_output([\"uname\", \"-m\"]).decode().strip()\n if architecture == \"x86_64\" or architecture == \"amd64\":\n return \"x64\"\n elif architecture == \"aarch64\" or architecture == \"arm64\":\n return \"arm64\"\n else:\n raise ValueError(f\"Unsupported architecture: {architecture}\")\n\n\ndef current_platform() -> str:\n if os.name == \"nt\":\n return \"windows\"\n else:\n return subprocess.check_output([\"uname\", \"-s\"]).decode().strip().lower()\n\n\ndef deb_architecture(architecture: str) -> str:\n # This would convert arch from \"current_architecture\" to deb specific architecture alternatives\n\n deb_architecture_map = {\n \"x64\": \"amd64\",\n \"arm64\": \"arm64\",\n }\n\n return deb_architecture_map[architecture]\n\n\ndef rpm_architecture(architecture: str) -> str:\n # This would convert arch from \"current_architecture\" to rpm specific architecture alternatives\n\n rpm_architecture_map = {\n \"x64\": \"x86_64\",\n \"arm64\": \"aarch64\",\n }\n\n return rpm_architecture_map[architecture]\n", "path": "src/system/os.py"}]}
| 1,243 | 131 |
gh_patches_debug_20228
|
rasdani/github-patches
|
git_diff
|
googleapis__google-api-python-client-2152
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`<1dev` is not a valid matcher for Python package versions in `setup.py`!
On the current master branch, you have this matcher for an `install_requires` package version in [setup.py](https://github.com/googleapis/google-api-python-client/blob/110667251e8b2c8852945bfe238f399742148cda/setup.py#L36):
```python
install_requires = [
"httplib2>=0.15.0,<1dev",
```
The `<1dev` part is ~not~ a valid version specifier (edit: it is valid, just doesn't parse correctly with distlib) in the sense of [PEP 440](https://peps.python.org/pep-0440/) and causes problems with a number of packing tools, as seen here:
```shell
$ python -c 'from distlib.version import NormalizedMatcher; NormalizedMatcher("httplib2>=0.15.0,<1dev")'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py", line 125, in __init__
vn, prefix = self.version_class(s), False
File "/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py", line 33, in __init__
self._parts = parts = self.parse(s)
File "/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py", line 267, in parse
result = _normalized_key(s)
File "/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py", line 188, in _pep_440_key
raise UnsupportedVersionError('Not a valid version: %s' % s)
distlib.version.UnsupportedVersionError: Not a valid version: 1dev
```
This is just speculation, but it looks like it was either faultily auto-generated by some tool, is an incomplete copy & paste or you meant `<1.dev0`. However, development versions are not typically considered by tooling and it's considered bad practice to but an upper bound on python dependencies for no good reason. This should be corrected and ideally the artifact files of the effected versions fixed and re-uploaded on pypi.org.
`<1dev` is not a valid matcher for Python package versions in `setup.py`!
On the current master branch, you have this matcher for an `install_requires` package version in [setup.py](https://github.com/googleapis/google-api-python-client/blob/110667251e8b2c8852945bfe238f399742148cda/setup.py#L36):
```python
install_requires = [
"httplib2>=0.15.0,<1dev",
```
The `<1dev` part is ~not~ a valid version specifier (edit: it is valid, just doesn't parse correctly with distlib) in the sense of [PEP 440](https://peps.python.org/pep-0440/) and causes problems with a number of packing tools, as seen here:
```shell
$ python -c 'from distlib.version import NormalizedMatcher; NormalizedMatcher("httplib2>=0.15.0,<1dev")'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py", line 125, in __init__
vn, prefix = self.version_class(s), False
File "/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py", line 33, in __init__
self._parts = parts = self.parse(s)
File "/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py", line 267, in parse
result = _normalized_key(s)
File "/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py", line 188, in _pep_440_key
raise UnsupportedVersionError('Not a valid version: %s' % s)
distlib.version.UnsupportedVersionError: Not a valid version: 1dev
```
This is just speculation, but it looks like it was either faultily auto-generated by some tool, is an incomplete copy & paste or you meant `<1.dev0`. However, development versions are not typically considered by tooling and it's considered bad practice to but an upper bound on python dependencies for no good reason. This should be corrected and ideally the artifact files of the effected versions fixed and re-uploaded on pypi.org.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Setup script for Google API Python client.
16
17 Also installs included versions of third party libraries, if those libraries
18 are not already installed.
19 """
20 from __future__ import print_function
21
22 import sys
23
24 if sys.version_info < (3, 7):
25 print("google-api-python-client requires python3 version >= 3.7.", file=sys.stderr)
26 sys.exit(1)
27
28 import io
29 import os
30
31 from setuptools import setup
32
33 packages = ["apiclient", "googleapiclient", "googleapiclient/discovery_cache"]
34
35 install_requires = [
36 "httplib2>=0.15.0,<1dev",
37 # NOTE: Maintainers, please do not require google-auth>=2.x.x
38 # Until this issue is closed
39 # https://github.com/googleapis/google-cloud-python/issues/10566
40 "google-auth>=1.19.0,<3.0.0dev",
41 "google-auth-httplib2>=0.1.0",
42 # NOTE: Maintainers, please do not require google-api-core>=2.x.x
43 # Until this issue is closed
44 # https://github.com/googleapis/google-cloud-python/issues/10566
45 "google-api-core >= 1.31.5, <3.0.0dev,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0",
46 "uritemplate>=3.0.1,<5",
47 ]
48
49 package_root = os.path.abspath(os.path.dirname(__file__))
50
51 readme_filename = os.path.join(package_root, "README.md")
52 with io.open(readme_filename, encoding="utf-8") as readme_file:
53 readme = readme_file.read()
54
55 package_root = os.path.abspath(os.path.dirname(__file__))
56
57 version = {}
58 with open(os.path.join(package_root, "googleapiclient/version.py")) as fp:
59 exec(fp.read(), version)
60 version = version["__version__"]
61
62 setup(
63 name="google-api-python-client",
64 version=version,
65 description="Google API Client Library for Python",
66 long_description=readme,
67 long_description_content_type="text/markdown",
68 author="Google LLC",
69 author_email="[email protected]",
70 url="https://github.com/googleapis/google-api-python-client/",
71 install_requires=install_requires,
72 python_requires=">=3.7",
73 packages=packages,
74 package_data={"googleapiclient": ["discovery_cache/documents/*.json"]},
75 license="Apache 2.0",
76 keywords="google api client",
77 classifiers=[
78 "Programming Language :: Python :: 3",
79 "Programming Language :: Python :: 3.7",
80 "Programming Language :: Python :: 3.8",
81 "Programming Language :: Python :: 3.9",
82 "Programming Language :: Python :: 3.10",
83 "Programming Language :: Python :: 3.11",
84 "Development Status :: 5 - Production/Stable",
85 "Intended Audience :: Developers",
86 "License :: OSI Approved :: Apache Software License",
87 "Operating System :: OS Independent",
88 "Topic :: Internet :: WWW/HTTP",
89 ],
90 )
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -33,16 +33,16 @@
packages = ["apiclient", "googleapiclient", "googleapiclient/discovery_cache"]
install_requires = [
- "httplib2>=0.15.0,<1dev",
+ "httplib2>=0.15.0,<1.dev0",
# NOTE: Maintainers, please do not require google-auth>=2.x.x
# Until this issue is closed
# https://github.com/googleapis/google-cloud-python/issues/10566
- "google-auth>=1.19.0,<3.0.0dev",
+ "google-auth>=1.19.0,<3.0.0.dev0",
"google-auth-httplib2>=0.1.0",
# NOTE: Maintainers, please do not require google-api-core>=2.x.x
# Until this issue is closed
# https://github.com/googleapis/google-cloud-python/issues/10566
- "google-api-core >= 1.31.5, <3.0.0dev,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0",
+ "google-api-core >= 1.31.5, <3.0.0.dev0,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0",
"uritemplate>=3.0.1,<5",
]
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,16 +33,16 @@\n packages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n \n install_requires = [\n- \"httplib2>=0.15.0,<1dev\",\n+ \"httplib2>=0.15.0,<1.dev0\",\n # NOTE: Maintainers, please do not require google-auth>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n- \"google-auth>=1.19.0,<3.0.0dev\",\n+ \"google-auth>=1.19.0,<3.0.0.dev0\",\n \"google-auth-httplib2>=0.1.0\",\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n- \"google-api-core >= 1.31.5, <3.0.0dev,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0\",\n+ \"google-api-core >= 1.31.5, <3.0.0.dev0,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0\",\n \"uritemplate>=3.0.1,<5\",\n ]\n", "issue": "`<1dev` is not a valid matcher for Python package versions in `setup.py`!\nOn the current master branch, you have this matcher for an `install_requires` package version in [setup.py](https://github.com/googleapis/google-api-python-client/blob/110667251e8b2c8852945bfe238f399742148cda/setup.py#L36):\r\n\r\n```python\r\ninstall_requires = [\r\n \"httplib2>=0.15.0,<1dev\",\r\n```\r\n\r\nThe `<1dev` part is ~not~ a valid version specifier (edit: it is valid, just doesn't parse correctly with distlib) in the sense of [PEP 440](https://peps.python.org/pep-0440/) and causes problems with a number of packing tools, as seen here:\r\n\r\n```shell\r\n$ python -c 'from distlib.version import NormalizedMatcher; NormalizedMatcher(\"httplib2>=0.15.0,<1dev\")'\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py\", line 125, in __init__\r\n vn, prefix = self.version_class(s), False\r\n File \"/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py\", line 33, in __init__\r\n self._parts = parts = self.parse(s)\r\n File \"/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py\", line 267, in parse\r\n result = _normalized_key(s)\r\n File \"/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py\", line 188, in _pep_440_key\r\n raise UnsupportedVersionError('Not a valid version: %s' % s)\r\ndistlib.version.UnsupportedVersionError: Not a valid version: 1dev\r\n```\r\n\r\nThis is just speculation, but it looks like it was either faultily auto-generated by some tool, is an incomplete copy & paste or you meant `<1.dev0`. However, development versions are not typically considered by tooling and it's considered bad practice to but an upper bound on python dependencies for no good reason. 
This should be corrected and ideally the artifact files of the effected versions fixed and re-uploaded on pypi.org.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n`<1dev` is not a valid matcher for Python package versions in `setup.py`!\nOn the current master branch, you have this matcher for an `install_requires` package version in [setup.py](https://github.com/googleapis/google-api-python-client/blob/110667251e8b2c8852945bfe238f399742148cda/setup.py#L36):\r\n\r\n```python\r\ninstall_requires = [\r\n \"httplib2>=0.15.0,<1dev\",\r\n```\r\n\r\nThe `<1dev` part is ~not~ a valid version specifier (edit: it is valid, just doesn't parse correctly with distlib) in the sense of [PEP 440](https://peps.python.org/pep-0440/) and causes problems with a number of packing tools, as seen here:\r\n\r\n```shell\r\n$ python -c 'from distlib.version import NormalizedMatcher; NormalizedMatcher(\"httplib2>=0.15.0,<1dev\")'\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py\", line 125, in __init__\r\n vn, prefix = self.version_class(s), False\r\n File \"/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py\", line 33, in __init__\r\n self._parts = parts = self.parse(s)\r\n File \"/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py\", line 267, in parse\r\n result = _normalized_key(s)\r\n File \"/home/jan/.local/share/virtualenvs/master_clean-V3vlFZeD/lib/python3.7/site-packages/distlib/version.py\", line 188, in _pep_440_key\r\n raise UnsupportedVersionError('Not a valid version: %s' % s)\r\ndistlib.version.UnsupportedVersionError: Not a valid version: 1dev\r\n```\r\n\r\nThis is just speculation, but it looks like it was either faultily auto-generated by some tool, is an incomplete copy & paste or you meant `<1.dev0`. However, development versions are not typically considered by tooling and it's considered bad practice to but an upper bound on python dependencies for no good reason. This should be corrected and ideally the artifact files of the effected versions fixed and re-uploaded on pypi.org.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (3, 7):\n print(\"google-api-python-client requires python3 version >= 3.7.\", file=sys.stderr)\n sys.exit(1)\n\nimport io\nimport os\n\nfrom setuptools import setup\n\npackages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n\ninstall_requires = [\n \"httplib2>=0.15.0,<1dev\",\n # NOTE: Maintainers, please do not require google-auth>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-auth>=1.19.0,<3.0.0dev\",\n \"google-auth-httplib2>=0.1.0\",\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-api-core >= 1.31.5, <3.0.0dev,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0\",\n \"uritemplate>=3.0.1,<5\",\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.md\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nversion = {}\nwith open(os.path.join(package_root, \"googleapiclient/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n python_requires=\">=3.7\",\n packages=packages,\n package_data={\"googleapiclient\": [\"discovery_cache/documents/*.json\"]},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (3, 7):\n print(\"google-api-python-client requires python3 version >= 3.7.\", file=sys.stderr)\n sys.exit(1)\n\nimport io\nimport os\n\nfrom setuptools import setup\n\npackages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n\ninstall_requires = [\n \"httplib2>=0.15.0,<1.dev0\",\n # NOTE: Maintainers, please do not require google-auth>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-auth>=1.19.0,<3.0.0.dev0\",\n \"google-auth-httplib2>=0.1.0\",\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-api-core >= 1.31.5, <3.0.0.dev0,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0\",\n \"uritemplate>=3.0.1,<5\",\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.md\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nversion = {}\nwith open(os.path.join(package_root, \"googleapiclient/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n python_requires=\">=3.7\",\n packages=packages,\n package_data={\"googleapiclient\": [\"discovery_cache/documents/*.json\"]},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}]}
| 2,413 | 349 |
gh_patches_debug_558
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-691
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 1.6.4
On the docket:
+ [x] Restore pex.pex_bootstrapper.is_compressed API #684
+ [ ] Release more flexible pex binaries. #654
+ [x] If an `--interpreter-constraint` is set, it should always be honored. #656
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.6.3'
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '1.6.3'
+__version__ = '1.6.4'
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '1.6.3'\n+__version__ = '1.6.4'\n", "issue": "Release 1.6.4\nOn the docket:\r\n+ [x] Restore pex.pex_bootstrapper.is_compressed API #684\r\n+ [ ] Release more flexible pex binaries. #654\r\n + [x] If an `--interpreter-constraint` is set, it should always be honored. #656\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.6.3'\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.6.4'\n", "path": "pex/version.py"}]}
| 382 | 94 |
gh_patches_debug_39153
|
rasdani/github-patches
|
git_diff
|
pallets__click-1059
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow sorting of completions based on custom types
Currently completion candidates returned by click are sorted in ZSH based on alphanumeric rules. This means that completions ['1', '2', '10'] would be displayed out of natural sort order as ['1', '10', '2']. Update the completion script to bring the sorting into Python where custom types can be sorted more appropriately. Note that the version of bash that ships with OS X is v3.2. The nosort option for `complete` was introduced in v4.4. This change should support both.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `click/_bashcomplete.py`
Content:
```
1 import collections
2 import copy
3 import os
4 import re
5
6 from .utils import echo
7 from .parser import split_arg_string
8 from .core import MultiCommand, Option, Argument
9 from .types import Choice
10
11 WORDBREAK = '='
12
13 COMPLETION_SCRIPT_BASH = '''
14 %(complete_func)s() {
15 local IFS=$'\n'
16 COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\
17 COMP_CWORD=$COMP_CWORD \\
18 %(autocomplete_var)s=complete $1 ) )
19 return 0
20 }
21
22 complete -F %(complete_func)s %(script_names)s
23 '''
24
25 COMPLETION_SCRIPT_ZSH = '''
26 %(complete_func)s() {
27 local -a completions
28 local -a completions_with_descriptions
29 local -a response
30 response=("${(@f)$( env COMP_WORDS=\"${words[*]}\" \\
31 COMP_CWORD=$((CURRENT-1)) \\
32 %(autocomplete_var)s=\"complete_zsh\" \\
33 %(script_names)s )}")
34
35 for key descr in ${(kv)response}; do
36 if [[ "$descr" == "_" ]]; then
37 completions+=("$key")
38 else
39 completions_with_descriptions+=("$key":"$descr")
40 fi
41 done
42
43 if [ -n "$completions_with_descriptions" ]; then
44 _describe '' completions_with_descriptions
45 fi
46 if [ -n "$completions" ]; then
47 compadd -M 'r:|=* l:|=* r:|=*' -a completions
48 fi
49 }
50
51 compdef %(complete_func)s %(script_names)s
52 '''
53
54 _invalid_ident_char_re = re.compile(r'[^a-zA-Z0-9_]')
55
56
57 def get_completion_script(prog_name, complete_var, shell):
58 cf_name = _invalid_ident_char_re.sub('', prog_name.replace('-', '_'))
59 script = COMPLETION_SCRIPT_ZSH if shell == 'zsh' else COMPLETION_SCRIPT_BASH
60 return (script % {
61 'complete_func': '_%s_completion' % cf_name,
62 'script_names': prog_name,
63 'autocomplete_var': complete_var,
64 }).strip() + ';'
65
66
67 def resolve_ctx(cli, prog_name, args):
68 """
69 Parse into a hierarchy of contexts. Contexts are connected through the parent variable.
70 :param cli: command definition
71 :param prog_name: the program that is running
72 :param args: full list of args
73 :return: the final context/command parsed
74 """
75 ctx = cli.make_context(prog_name, args, resilient_parsing=True)
76 args = ctx.protected_args + ctx.args
77 while args:
78 if isinstance(ctx.command, MultiCommand):
79 if not ctx.command.chain:
80 cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)
81 if cmd is None:
82 return ctx
83 ctx = cmd.make_context(cmd_name, args, parent=ctx,
84 resilient_parsing=True)
85 args = ctx.protected_args + ctx.args
86 else:
87 # Walk chained subcommand contexts saving the last one.
88 while args:
89 cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)
90 if cmd is None:
91 return ctx
92 sub_ctx = cmd.make_context(cmd_name, args, parent=ctx,
93 allow_extra_args=True,
94 allow_interspersed_args=False,
95 resilient_parsing=True)
96 args = sub_ctx.args
97 ctx = sub_ctx
98 args = sub_ctx.protected_args + sub_ctx.args
99 else:
100 break
101 return ctx
102
103
104 def start_of_option(param_str):
105 """
106 :param param_str: param_str to check
107 :return: whether or not this is the start of an option declaration (i.e. starts "-" or "--")
108 """
109 return param_str and param_str[:1] == '-'
110
111
112 def is_incomplete_option(all_args, cmd_param):
113 """
114 :param all_args: the full original list of args supplied
115 :param cmd_param: the current command paramter
116 :return: whether or not the last option declaration (i.e. starts "-" or "--") is incomplete and
117 corresponds to this cmd_param. In other words whether this cmd_param option can still accept
118 values
119 """
120 if not isinstance(cmd_param, Option):
121 return False
122 if cmd_param.is_flag:
123 return False
124 last_option = None
125 for index, arg_str in enumerate(reversed([arg for arg in all_args if arg != WORDBREAK])):
126 if index + 1 > cmd_param.nargs:
127 break
128 if start_of_option(arg_str):
129 last_option = arg_str
130
131 return True if last_option and last_option in cmd_param.opts else False
132
133
134 def is_incomplete_argument(current_params, cmd_param):
135 """
136 :param current_params: the current params and values for this argument as already entered
137 :param cmd_param: the current command parameter
138 :return: whether or not the last argument is incomplete and corresponds to this cmd_param. In
139 other words whether or not the this cmd_param argument can still accept values
140 """
141 if not isinstance(cmd_param, Argument):
142 return False
143 current_param_values = current_params[cmd_param.name]
144 if current_param_values is None:
145 return True
146 if cmd_param.nargs == -1:
147 return True
148 if isinstance(current_param_values, collections.Iterable) \
149 and cmd_param.nargs > 1 and len(current_param_values) < cmd_param.nargs:
150 return True
151 return False
152
153
154 def get_user_autocompletions(ctx, args, incomplete, cmd_param):
155 """
156 :param ctx: context associated with the parsed command
157 :param args: full list of args
158 :param incomplete: the incomplete text to autocomplete
159 :param cmd_param: command definition
160 :return: all the possible user-specified completions for the param
161 """
162 results = []
163 if isinstance(cmd_param.type, Choice):
164 # Choices don't support descriptions.
165 results = [(c, None)
166 for c in cmd_param.type.choices if c.startswith(incomplete)]
167 elif cmd_param.autocompletion is not None:
168 dynamic_completions = cmd_param.autocompletion(ctx=ctx,
169 args=args,
170 incomplete=incomplete)
171 results = [c if isinstance(c, tuple) else (c, None)
172 for c in dynamic_completions]
173 return results
174
175
176 def add_subcommand_completions(ctx, incomplete, completions_out):
177 # Add subcommand completions.
178 if isinstance(ctx.command, MultiCommand):
179 completions_out.extend(
180 [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in ctx.command.list_commands(ctx) if c.startswith(incomplete)])
181
182 # Walk up the context list and add any other completion possibilities from chained commands
183 while ctx.parent is not None:
184 ctx = ctx.parent
185 if isinstance(ctx.command, MultiCommand) and ctx.command.chain:
186 remaining_commands = sorted(
187 set(ctx.command.list_commands(ctx)) - set(ctx.protected_args))
188 completions_out.extend(
189 [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in remaining_commands if c.startswith(incomplete)])
190
191
192 def get_choices(cli, prog_name, args, incomplete):
193 """
194 :param cli: command definition
195 :param prog_name: the program that is running
196 :param args: full list of args
197 :param incomplete: the incomplete text to autocomplete
198 :return: all the possible completions for the incomplete
199 """
200 all_args = copy.deepcopy(args)
201
202 ctx = resolve_ctx(cli, prog_name, args)
203 if ctx is None:
204 return []
205
206 # In newer versions of bash long opts with '='s are partitioned, but it's easier to parse
207 # without the '='
208 if start_of_option(incomplete) and WORDBREAK in incomplete:
209 partition_incomplete = incomplete.partition(WORDBREAK)
210 all_args.append(partition_incomplete[0])
211 incomplete = partition_incomplete[2]
212 elif incomplete == WORDBREAK:
213 incomplete = ''
214
215 completions = []
216 if start_of_option(incomplete):
217 # completions for partial options
218 for param in ctx.command.params:
219 if isinstance(param, Option):
220 param_opts = [param_opt for param_opt in param.opts +
221 param.secondary_opts if param_opt not in all_args or param.multiple]
222 completions.extend(
223 [(o, param.help) for o in param_opts if o.startswith(incomplete)])
224 return completions
225 # completion for option values from user supplied values
226 for param in ctx.command.params:
227 if is_incomplete_option(all_args, param):
228 return get_user_autocompletions(ctx, all_args, incomplete, param)
229 # completion for argument values from user supplied values
230 for param in ctx.command.params:
231 if is_incomplete_argument(ctx.params, param):
232 return get_user_autocompletions(ctx, all_args, incomplete, param)
233
234 add_subcommand_completions(ctx, incomplete, completions)
235 return completions
236
237
238 def do_complete(cli, prog_name, include_descriptions):
239 cwords = split_arg_string(os.environ['COMP_WORDS'])
240 cword = int(os.environ['COMP_CWORD'])
241 args = cwords[1:cword]
242 try:
243 incomplete = cwords[cword]
244 except IndexError:
245 incomplete = ''
246
247 for item in get_choices(cli, prog_name, args, incomplete):
248 echo(item[0])
249 if include_descriptions:
250 # ZSH has trouble dealing with empty array parameters when returned from commands, so use a well defined character '_' to indicate no description is present.
251 echo(item[1] if item[1] else '_')
252
253 return True
254
255
256 def bashcomplete(cli, prog_name, complete_var, complete_instr):
257 if complete_instr.startswith('source'):
258 shell = 'zsh' if complete_instr == 'source_zsh' else 'bash'
259 echo(get_completion_script(prog_name, complete_var, shell))
260 return True
261 elif complete_instr == 'complete' or complete_instr == 'complete_zsh':
262 return do_complete(cli, prog_name, complete_instr == 'complete_zsh')
263 return False
264
```
Path: `examples/bashcompletion/bashcompletion.py`
Content:
```
1 import click
2 import os
3
4
5 @click.group()
6 def cli():
7 pass
8
9
10 def get_env_vars(ctx, args, incomplete):
11 for key in os.environ.keys():
12 if incomplete in key:
13 yield key
14
15
16 @cli.command(help='A command to print environment variables')
17 @click.argument("envvar", type=click.STRING, autocompletion=get_env_vars)
18 def cmd1(envvar):
19 click.echo('Environment variable: %s' % envvar)
20 click.echo('Value: %s' % os.environ[envvar])
21
22
23 @click.group(help='A group that holds a subcommand')
24 def group():
25 pass
26
27
28 def list_users(ctx, args, incomplete):
29 # Here you can generate completions dynamically
30 users = ['bob', 'alice']
31 for user in users:
32 if user.startswith(incomplete):
33 yield user
34
35
36 @group.command(help='Choose a user')
37 @click.argument("user", type=click.STRING, autocompletion=list_users)
38 def subcmd(user):
39 click.echo('Chosen user is %s' % user)
40
41 cli.add_command(group)
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/click/_bashcomplete.py b/click/_bashcomplete.py
--- a/click/_bashcomplete.py
+++ b/click/_bashcomplete.py
@@ -10,6 +10,7 @@
WORDBREAK = '='
+# Note, only BASH version 4.4 and later have the nosort option.
COMPLETION_SCRIPT_BASH = '''
%(complete_func)s() {
local IFS=$'\n'
@@ -19,7 +20,17 @@
return 0
}
-complete -F %(complete_func)s %(script_names)s
+%(complete_func)setup() {
+ local COMPLETION_OPTIONS=""
+ local BASH_VERSION_ARR=(${BASH_VERSION//./ })
+ if [ ${BASH_VERSION_ARR[0]} -ge 4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ];then
+ COMPLETION_OPTIONS="-o nosort"
+ fi
+
+ complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s
+}
+
+%(complete_func)setup
'''
COMPLETION_SCRIPT_ZSH = '''
@@ -41,11 +52,13 @@
done
if [ -n "$completions_with_descriptions" ]; then
- _describe '' completions_with_descriptions
+ _describe -V unsorted completions_with_descriptions -U -Q
fi
+
if [ -n "$completions" ]; then
- compadd -M 'r:|=* l:|=* r:|=*' -a completions
+ compadd -U -V unsorted -Q -a completions
fi
+ compstate[insert]="automenu"
}
compdef %(complete_func)s %(script_names)s
@@ -232,7 +245,8 @@
return get_user_autocompletions(ctx, all_args, incomplete, param)
add_subcommand_completions(ctx, incomplete, completions)
- return completions
+ # Sort before returning so that proper ordering can be enforced in custom types.
+ return sorted(completions)
def do_complete(cli, prog_name, include_descriptions):
diff --git a/examples/bashcompletion/bashcompletion.py b/examples/bashcompletion/bashcompletion.py
--- a/examples/bashcompletion/bashcompletion.py
+++ b/examples/bashcompletion/bashcompletion.py
@@ -8,6 +8,7 @@
def get_env_vars(ctx, args, incomplete):
+ # Completions returned as strings do not have a description displayed.
for key in os.environ.keys():
if incomplete in key:
yield key
@@ -26,11 +27,13 @@
def list_users(ctx, args, incomplete):
- # Here you can generate completions dynamically
- users = ['bob', 'alice']
- for user in users:
- if user.startswith(incomplete):
- yield user
+ # You can generate completions with descriptions by returning
+ # tuples in the form (completion, description).
+ users = [('bob', 'butcher'),
+ ('alice', 'baker'),
+ ('jerry', 'candlestick maker')]
+ # Ths will allow completion matches based on matches within the description string too!
+ return [user for user in users if incomplete in user[0] or incomplete in user[1]]
@group.command(help='Choose a user')
@@ -38,4 +41,5 @@
def subcmd(user):
click.echo('Chosen user is %s' % user)
+
cli.add_command(group)
|
{"golden_diff": "diff --git a/click/_bashcomplete.py b/click/_bashcomplete.py\n--- a/click/_bashcomplete.py\n+++ b/click/_bashcomplete.py\n@@ -10,6 +10,7 @@\n \n WORDBREAK = '='\n \n+# Note, only BASH version 4.4 and later have the nosort option.\n COMPLETION_SCRIPT_BASH = '''\n %(complete_func)s() {\n local IFS=$'\\n'\n@@ -19,7 +20,17 @@\n return 0\n }\n \n-complete -F %(complete_func)s %(script_names)s\n+%(complete_func)setup() {\n+ local COMPLETION_OPTIONS=\"\"\n+ local BASH_VERSION_ARR=(${BASH_VERSION//./ })\n+ if [ ${BASH_VERSION_ARR[0]} -ge 4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ];then\n+ COMPLETION_OPTIONS=\"-o nosort\"\n+ fi\n+\n+ complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s\n+}\n+\n+%(complete_func)setup\n '''\n \n COMPLETION_SCRIPT_ZSH = '''\n@@ -41,11 +52,13 @@\n done\n \n if [ -n \"$completions_with_descriptions\" ]; then\n- _describe '' completions_with_descriptions\n+ _describe -V unsorted completions_with_descriptions -U -Q\n fi\n+\n if [ -n \"$completions\" ]; then\n- compadd -M 'r:|=* l:|=* r:|=*' -a completions\n+ compadd -U -V unsorted -Q -a completions\n fi\n+ compstate[insert]=\"automenu\"\n }\n \n compdef %(complete_func)s %(script_names)s\n@@ -232,7 +245,8 @@\n return get_user_autocompletions(ctx, all_args, incomplete, param)\n \n add_subcommand_completions(ctx, incomplete, completions)\n- return completions\n+ # Sort before returning so that proper ordering can be enforced in custom types.\n+ return sorted(completions)\n \n \n def do_complete(cli, prog_name, include_descriptions):\ndiff --git a/examples/bashcompletion/bashcompletion.py b/examples/bashcompletion/bashcompletion.py\n--- a/examples/bashcompletion/bashcompletion.py\n+++ b/examples/bashcompletion/bashcompletion.py\n@@ -8,6 +8,7 @@\n \n \n def get_env_vars(ctx, args, incomplete):\n+ # Completions returned as strings do not have a description displayed.\n for key in os.environ.keys():\n if incomplete in key:\n yield key\n@@ -26,11 +27,13 @@\n \n \n def list_users(ctx, args, incomplete):\n- # Here you can generate completions dynamically\n- users = ['bob', 'alice']\n- for user in users:\n- if user.startswith(incomplete):\n- yield user\n+ # You can generate completions with descriptions by returning\n+ # tuples in the form (completion, description).\n+ users = [('bob', 'butcher'),\n+ ('alice', 'baker'),\n+ ('jerry', 'candlestick maker')]\n+ # Ths will allow completion matches based on matches within the description string too!\n+ return [user for user in users if incomplete in user[0] or incomplete in user[1]]\n \n \n @group.command(help='Choose a user')\n@@ -38,4 +41,5 @@\n def subcmd(user):\n click.echo('Chosen user is %s' % user)\n \n+\n cli.add_command(group)\n", "issue": "Allow sorting of completions based on custom types\nCurrently completion candidates returned by click are sorted in ZSH based on alphanumeric rules. This means that completions ['1', '2', '10'] would be displayed out of natural sort order as ['1', '10', '2']. Update the completion script to bring the sorting into Python where custom types can be sorted more appropriately. Note that the version of bash that ships with OS X is v3.2. 
The nosort option for `complete` was introduced in v4.4 This change should support both.\n", "before_files": [{"content": "import collections\nimport copy\nimport os\nimport re\n\nfrom .utils import echo\nfrom .parser import split_arg_string\nfrom .core import MultiCommand, Option, Argument\nfrom .types import Choice\n\nWORDBREAK = '='\n\nCOMPLETION_SCRIPT_BASH = '''\n%(complete_func)s() {\n local IFS=$'\\n'\n COMPREPLY=( $( env COMP_WORDS=\"${COMP_WORDS[*]}\" \\\\\n COMP_CWORD=$COMP_CWORD \\\\\n %(autocomplete_var)s=complete $1 ) )\n return 0\n}\n\ncomplete -F %(complete_func)s %(script_names)s\n'''\n\nCOMPLETION_SCRIPT_ZSH = '''\n%(complete_func)s() {\n local -a completions\n local -a completions_with_descriptions\n local -a response\n response=(\"${(@f)$( env COMP_WORDS=\\\"${words[*]}\\\" \\\\\n COMP_CWORD=$((CURRENT-1)) \\\\\n %(autocomplete_var)s=\\\"complete_zsh\\\" \\\\\n %(script_names)s )}\")\n\n for key descr in ${(kv)response}; do\n if [[ \"$descr\" == \"_\" ]]; then\n completions+=(\"$key\")\n else\n completions_with_descriptions+=(\"$key\":\"$descr\")\n fi\n done\n\n if [ -n \"$completions_with_descriptions\" ]; then\n _describe '' completions_with_descriptions\n fi\n if [ -n \"$completions\" ]; then\n compadd -M 'r:|=* l:|=* r:|=*' -a completions\n fi\n}\n\ncompdef %(complete_func)s %(script_names)s\n'''\n\n_invalid_ident_char_re = re.compile(r'[^a-zA-Z0-9_]')\n\n\ndef get_completion_script(prog_name, complete_var, shell):\n cf_name = _invalid_ident_char_re.sub('', prog_name.replace('-', '_'))\n script = COMPLETION_SCRIPT_ZSH if shell == 'zsh' else COMPLETION_SCRIPT_BASH\n return (script % {\n 'complete_func': '_%s_completion' % cf_name,\n 'script_names': prog_name,\n 'autocomplete_var': complete_var,\n }).strip() + ';'\n\n\ndef resolve_ctx(cli, prog_name, args):\n \"\"\"\n Parse into a hierarchy of contexts. Contexts are connected through the parent variable.\n :param cli: command definition\n :param prog_name: the program that is running\n :param args: full list of args\n :return: the final context/command parsed\n \"\"\"\n ctx = cli.make_context(prog_name, args, resilient_parsing=True)\n args = ctx.protected_args + ctx.args\n while args:\n if isinstance(ctx.command, MultiCommand):\n if not ctx.command.chain:\n cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)\n if cmd is None:\n return ctx\n ctx = cmd.make_context(cmd_name, args, parent=ctx,\n resilient_parsing=True)\n args = ctx.protected_args + ctx.args\n else:\n # Walk chained subcommand contexts saving the last one.\n while args:\n cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)\n if cmd is None:\n return ctx\n sub_ctx = cmd.make_context(cmd_name, args, parent=ctx,\n allow_extra_args=True,\n allow_interspersed_args=False,\n resilient_parsing=True)\n args = sub_ctx.args\n ctx = sub_ctx\n args = sub_ctx.protected_args + sub_ctx.args\n else:\n break\n return ctx\n\n\ndef start_of_option(param_str):\n \"\"\"\n :param param_str: param_str to check\n :return: whether or not this is the start of an option declaration (i.e. starts \"-\" or \"--\")\n \"\"\"\n return param_str and param_str[:1] == '-'\n\n\ndef is_incomplete_option(all_args, cmd_param):\n \"\"\"\n :param all_args: the full original list of args supplied\n :param cmd_param: the current command paramter\n :return: whether or not the last option declaration (i.e. starts \"-\" or \"--\") is incomplete and\n corresponds to this cmd_param. 
In other words whether this cmd_param option can still accept\n values\n \"\"\"\n if not isinstance(cmd_param, Option):\n return False\n if cmd_param.is_flag:\n return False\n last_option = None\n for index, arg_str in enumerate(reversed([arg for arg in all_args if arg != WORDBREAK])):\n if index + 1 > cmd_param.nargs:\n break\n if start_of_option(arg_str):\n last_option = arg_str\n\n return True if last_option and last_option in cmd_param.opts else False\n\n\ndef is_incomplete_argument(current_params, cmd_param):\n \"\"\"\n :param current_params: the current params and values for this argument as already entered\n :param cmd_param: the current command parameter\n :return: whether or not the last argument is incomplete and corresponds to this cmd_param. In\n other words whether or not the this cmd_param argument can still accept values\n \"\"\"\n if not isinstance(cmd_param, Argument):\n return False\n current_param_values = current_params[cmd_param.name]\n if current_param_values is None:\n return True\n if cmd_param.nargs == -1:\n return True\n if isinstance(current_param_values, collections.Iterable) \\\n and cmd_param.nargs > 1 and len(current_param_values) < cmd_param.nargs:\n return True\n return False\n\n\ndef get_user_autocompletions(ctx, args, incomplete, cmd_param):\n \"\"\"\n :param ctx: context associated with the parsed command\n :param args: full list of args\n :param incomplete: the incomplete text to autocomplete\n :param cmd_param: command definition\n :return: all the possible user-specified completions for the param\n \"\"\"\n results = []\n if isinstance(cmd_param.type, Choice):\n # Choices don't support descriptions.\n results = [(c, None)\n for c in cmd_param.type.choices if c.startswith(incomplete)]\n elif cmd_param.autocompletion is not None:\n dynamic_completions = cmd_param.autocompletion(ctx=ctx,\n args=args,\n incomplete=incomplete)\n results = [c if isinstance(c, tuple) else (c, None)\n for c in dynamic_completions]\n return results\n\n\ndef add_subcommand_completions(ctx, incomplete, completions_out):\n # Add subcommand completions.\n if isinstance(ctx.command, MultiCommand):\n completions_out.extend(\n [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in ctx.command.list_commands(ctx) if c.startswith(incomplete)])\n\n # Walk up the context list and add any other completion possibilities from chained commands\n while ctx.parent is not None:\n ctx = ctx.parent\n if isinstance(ctx.command, MultiCommand) and ctx.command.chain:\n remaining_commands = sorted(\n set(ctx.command.list_commands(ctx)) - set(ctx.protected_args))\n completions_out.extend(\n [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in remaining_commands if c.startswith(incomplete)])\n\n\ndef get_choices(cli, prog_name, args, incomplete):\n \"\"\"\n :param cli: command definition\n :param prog_name: the program that is running\n :param args: full list of args\n :param incomplete: the incomplete text to autocomplete\n :return: all the possible completions for the incomplete\n \"\"\"\n all_args = copy.deepcopy(args)\n\n ctx = resolve_ctx(cli, prog_name, args)\n if ctx is None:\n return []\n\n # In newer versions of bash long opts with '='s are partitioned, but it's easier to parse\n # without the '='\n if start_of_option(incomplete) and WORDBREAK in incomplete:\n partition_incomplete = incomplete.partition(WORDBREAK)\n all_args.append(partition_incomplete[0])\n incomplete = partition_incomplete[2]\n elif incomplete == WORDBREAK:\n incomplete = ''\n\n completions = []\n 
if start_of_option(incomplete):\n # completions for partial options\n for param in ctx.command.params:\n if isinstance(param, Option):\n param_opts = [param_opt for param_opt in param.opts +\n param.secondary_opts if param_opt not in all_args or param.multiple]\n completions.extend(\n [(o, param.help) for o in param_opts if o.startswith(incomplete)])\n return completions\n # completion for option values from user supplied values\n for param in ctx.command.params:\n if is_incomplete_option(all_args, param):\n return get_user_autocompletions(ctx, all_args, incomplete, param)\n # completion for argument values from user supplied values\n for param in ctx.command.params:\n if is_incomplete_argument(ctx.params, param):\n return get_user_autocompletions(ctx, all_args, incomplete, param)\n\n add_subcommand_completions(ctx, incomplete, completions)\n return completions\n\n\ndef do_complete(cli, prog_name, include_descriptions):\n cwords = split_arg_string(os.environ['COMP_WORDS'])\n cword = int(os.environ['COMP_CWORD'])\n args = cwords[1:cword]\n try:\n incomplete = cwords[cword]\n except IndexError:\n incomplete = ''\n\n for item in get_choices(cli, prog_name, args, incomplete):\n echo(item[0])\n if include_descriptions:\n # ZSH has trouble dealing with empty array parameters when returned from commands, so use a well defined character '_' to indicate no description is present.\n echo(item[1] if item[1] else '_')\n\n return True\n\n\ndef bashcomplete(cli, prog_name, complete_var, complete_instr):\n if complete_instr.startswith('source'):\n shell = 'zsh' if complete_instr == 'source_zsh' else 'bash'\n echo(get_completion_script(prog_name, complete_var, shell))\n return True\n elif complete_instr == 'complete' or complete_instr == 'complete_zsh':\n return do_complete(cli, prog_name, complete_instr == 'complete_zsh')\n return False\n", "path": "click/_bashcomplete.py"}, {"content": "import click\nimport os\n\n\[email protected]()\ndef cli():\n pass\n\n\ndef get_env_vars(ctx, args, incomplete):\n for key in os.environ.keys():\n if incomplete in key:\n yield key\n\n\[email protected](help='A command to print environment variables')\[email protected](\"envvar\", type=click.STRING, autocompletion=get_env_vars)\ndef cmd1(envvar):\n click.echo('Environment variable: %s' % envvar)\n click.echo('Value: %s' % os.environ[envvar])\n\n\[email protected](help='A group that holds a subcommand')\ndef group():\n pass\n\n\ndef list_users(ctx, args, incomplete):\n # Here you can generate completions dynamically\n users = ['bob', 'alice']\n for user in users:\n if user.startswith(incomplete):\n yield user\n\n\[email protected](help='Choose a user')\[email protected](\"user\", type=click.STRING, autocompletion=list_users)\ndef subcmd(user):\n click.echo('Chosen user is %s' % user)\n\ncli.add_command(group)\n", "path": "examples/bashcompletion/bashcompletion.py"}], "after_files": [{"content": "import collections\nimport copy\nimport os\nimport re\n\nfrom .utils import echo\nfrom .parser import split_arg_string\nfrom .core import MultiCommand, Option, Argument\nfrom .types import Choice\n\nWORDBREAK = '='\n\n# Note, only BASH version 4.4 and later have the nosort option.\nCOMPLETION_SCRIPT_BASH = '''\n%(complete_func)s() {\n local IFS=$'\\n'\n COMPREPLY=( $( env COMP_WORDS=\"${COMP_WORDS[*]}\" \\\\\n COMP_CWORD=$COMP_CWORD \\\\\n %(autocomplete_var)s=complete $1 ) )\n return 0\n}\n\n%(complete_func)setup() {\n local COMPLETION_OPTIONS=\"\"\n local BASH_VERSION_ARR=(${BASH_VERSION//./ })\n if [ ${BASH_VERSION_ARR[0]} -ge 
4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ];then\n COMPLETION_OPTIONS=\"-o nosort\"\n fi\n\n complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s\n}\n\n%(complete_func)setup\n'''\n\nCOMPLETION_SCRIPT_ZSH = '''\n%(complete_func)s() {\n local -a completions\n local -a completions_with_descriptions\n local -a response\n response=(\"${(@f)$( env COMP_WORDS=\\\"${words[*]}\\\" \\\\\n COMP_CWORD=$((CURRENT-1)) \\\\\n %(autocomplete_var)s=\\\"complete_zsh\\\" \\\\\n %(script_names)s )}\")\n\n for key descr in ${(kv)response}; do\n if [[ \"$descr\" == \"_\" ]]; then\n completions+=(\"$key\")\n else\n completions_with_descriptions+=(\"$key\":\"$descr\")\n fi\n done\n\n if [ -n \"$completions_with_descriptions\" ]; then\n _describe -V unsorted completions_with_descriptions -U -Q\n fi\n\n if [ -n \"$completions\" ]; then\n compadd -U -V unsorted -Q -a completions\n fi\n compstate[insert]=\"automenu\"\n}\n\ncompdef %(complete_func)s %(script_names)s\n'''\n\n_invalid_ident_char_re = re.compile(r'[^a-zA-Z0-9_]')\n\n\ndef get_completion_script(prog_name, complete_var, shell):\n cf_name = _invalid_ident_char_re.sub('', prog_name.replace('-', '_'))\n script = COMPLETION_SCRIPT_ZSH if shell == 'zsh' else COMPLETION_SCRIPT_BASH\n return (script % {\n 'complete_func': '_%s_completion' % cf_name,\n 'script_names': prog_name,\n 'autocomplete_var': complete_var,\n }).strip() + ';'\n\n\ndef resolve_ctx(cli, prog_name, args):\n \"\"\"\n Parse into a hierarchy of contexts. Contexts are connected through the parent variable.\n :param cli: command definition\n :param prog_name: the program that is running\n :param args: full list of args\n :return: the final context/command parsed\n \"\"\"\n ctx = cli.make_context(prog_name, args, resilient_parsing=True)\n args = ctx.protected_args + ctx.args\n while args:\n if isinstance(ctx.command, MultiCommand):\n if not ctx.command.chain:\n cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)\n if cmd is None:\n return ctx\n ctx = cmd.make_context(cmd_name, args, parent=ctx,\n resilient_parsing=True)\n args = ctx.protected_args + ctx.args\n else:\n # Walk chained subcommand contexts saving the last one.\n while args:\n cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)\n if cmd is None:\n return ctx\n sub_ctx = cmd.make_context(cmd_name, args, parent=ctx,\n allow_extra_args=True,\n allow_interspersed_args=False,\n resilient_parsing=True)\n args = sub_ctx.args\n ctx = sub_ctx\n args = sub_ctx.protected_args + sub_ctx.args\n else:\n break\n return ctx\n\n\ndef start_of_option(param_str):\n \"\"\"\n :param param_str: param_str to check\n :return: whether or not this is the start of an option declaration (i.e. starts \"-\" or \"--\")\n \"\"\"\n return param_str and param_str[:1] == '-'\n\n\ndef is_incomplete_option(all_args, cmd_param):\n \"\"\"\n :param all_args: the full original list of args supplied\n :param cmd_param: the current command paramter\n :return: whether or not the last option declaration (i.e. starts \"-\" or \"--\") is incomplete and\n corresponds to this cmd_param. 
In other words whether this cmd_param option can still accept\n values\n \"\"\"\n if not isinstance(cmd_param, Option):\n return False\n if cmd_param.is_flag:\n return False\n last_option = None\n for index, arg_str in enumerate(reversed([arg for arg in all_args if arg != WORDBREAK])):\n if index + 1 > cmd_param.nargs:\n break\n if start_of_option(arg_str):\n last_option = arg_str\n\n return True if last_option and last_option in cmd_param.opts else False\n\n\ndef is_incomplete_argument(current_params, cmd_param):\n \"\"\"\n :param current_params: the current params and values for this argument as already entered\n :param cmd_param: the current command parameter\n :return: whether or not the last argument is incomplete and corresponds to this cmd_param. In\n other words whether or not the this cmd_param argument can still accept values\n \"\"\"\n if not isinstance(cmd_param, Argument):\n return False\n current_param_values = current_params[cmd_param.name]\n if current_param_values is None:\n return True\n if cmd_param.nargs == -1:\n return True\n if isinstance(current_param_values, collections.Iterable) \\\n and cmd_param.nargs > 1 and len(current_param_values) < cmd_param.nargs:\n return True\n return False\n\n\ndef get_user_autocompletions(ctx, args, incomplete, cmd_param):\n \"\"\"\n :param ctx: context associated with the parsed command\n :param args: full list of args\n :param incomplete: the incomplete text to autocomplete\n :param cmd_param: command definition\n :return: all the possible user-specified completions for the param\n \"\"\"\n results = []\n if isinstance(cmd_param.type, Choice):\n # Choices don't support descriptions.\n results = [(c, None)\n for c in cmd_param.type.choices if c.startswith(incomplete)]\n elif cmd_param.autocompletion is not None:\n dynamic_completions = cmd_param.autocompletion(ctx=ctx,\n args=args,\n incomplete=incomplete)\n results = [c if isinstance(c, tuple) else (c, None)\n for c in dynamic_completions]\n return results\n\n\ndef add_subcommand_completions(ctx, incomplete, completions_out):\n # Add subcommand completions.\n if isinstance(ctx.command, MultiCommand):\n completions_out.extend(\n [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in ctx.command.list_commands(ctx) if c.startswith(incomplete)])\n\n # Walk up the context list and add any other completion possibilities from chained commands\n while ctx.parent is not None:\n ctx = ctx.parent\n if isinstance(ctx.command, MultiCommand) and ctx.command.chain:\n remaining_commands = sorted(\n set(ctx.command.list_commands(ctx)) - set(ctx.protected_args))\n completions_out.extend(\n [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in remaining_commands if c.startswith(incomplete)])\n\n\ndef get_choices(cli, prog_name, args, incomplete):\n \"\"\"\n :param cli: command definition\n :param prog_name: the program that is running\n :param args: full list of args\n :param incomplete: the incomplete text to autocomplete\n :return: all the possible completions for the incomplete\n \"\"\"\n all_args = copy.deepcopy(args)\n\n ctx = resolve_ctx(cli, prog_name, args)\n if ctx is None:\n return []\n\n # In newer versions of bash long opts with '='s are partitioned, but it's easier to parse\n # without the '='\n if start_of_option(incomplete) and WORDBREAK in incomplete:\n partition_incomplete = incomplete.partition(WORDBREAK)\n all_args.append(partition_incomplete[0])\n incomplete = partition_incomplete[2]\n elif incomplete == WORDBREAK:\n incomplete = ''\n\n completions = []\n 
if start_of_option(incomplete):\n # completions for partial options\n for param in ctx.command.params:\n if isinstance(param, Option):\n param_opts = [param_opt for param_opt in param.opts +\n param.secondary_opts if param_opt not in all_args or param.multiple]\n completions.extend(\n [(o, param.help) for o in param_opts if o.startswith(incomplete)])\n return completions\n # completion for option values from user supplied values\n for param in ctx.command.params:\n if is_incomplete_option(all_args, param):\n return get_user_autocompletions(ctx, all_args, incomplete, param)\n # completion for argument values from user supplied values\n for param in ctx.command.params:\n if is_incomplete_argument(ctx.params, param):\n return get_user_autocompletions(ctx, all_args, incomplete, param)\n\n add_subcommand_completions(ctx, incomplete, completions)\n # Sort before returning so that proper ordering can be enforced in custom types.\n return sorted(completions)\n\n\ndef do_complete(cli, prog_name, include_descriptions):\n cwords = split_arg_string(os.environ['COMP_WORDS'])\n cword = int(os.environ['COMP_CWORD'])\n args = cwords[1:cword]\n try:\n incomplete = cwords[cword]\n except IndexError:\n incomplete = ''\n\n for item in get_choices(cli, prog_name, args, incomplete):\n echo(item[0])\n if include_descriptions:\n # ZSH has trouble dealing with empty array parameters when returned from commands, so use a well defined character '_' to indicate no description is present.\n echo(item[1] if item[1] else '_')\n\n return True\n\n\ndef bashcomplete(cli, prog_name, complete_var, complete_instr):\n if complete_instr.startswith('source'):\n shell = 'zsh' if complete_instr == 'source_zsh' else 'bash'\n echo(get_completion_script(prog_name, complete_var, shell))\n return True\n elif complete_instr == 'complete' or complete_instr == 'complete_zsh':\n return do_complete(cli, prog_name, complete_instr == 'complete_zsh')\n return False\n", "path": "click/_bashcomplete.py"}, {"content": "import click\nimport os\n\n\[email protected]()\ndef cli():\n pass\n\n\ndef get_env_vars(ctx, args, incomplete):\n # Completions returned as strings do not have a description displayed.\n for key in os.environ.keys():\n if incomplete in key:\n yield key\n\n\[email protected](help='A command to print environment variables')\[email protected](\"envvar\", type=click.STRING, autocompletion=get_env_vars)\ndef cmd1(envvar):\n click.echo('Environment variable: %s' % envvar)\n click.echo('Value: %s' % os.environ[envvar])\n\n\[email protected](help='A group that holds a subcommand')\ndef group():\n pass\n\n\ndef list_users(ctx, args, incomplete):\n # You can generate completions with descriptions by returning\n # tuples in the form (completion, description).\n users = [('bob', 'butcher'),\n ('alice', 'baker'),\n ('jerry', 'candlestick maker')]\n # Ths will allow completion matches based on matches within the description string too!\n return [user for user in users if incomplete in user[0] or incomplete in user[1]]\n\n\[email protected](help='Choose a user')\[email protected](\"user\", type=click.STRING, autocompletion=list_users)\ndef subcmd(user):\n click.echo('Chosen user is %s' % user)\n\n\ncli.add_command(group)\n", "path": "examples/bashcompletion/bashcompletion.py"}]}
| 3,589 | 786 |
gh_patches_debug_25746
|
rasdani/github-patches
|
git_diff
|
mito-ds__mito-359
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
spelling mistake during mitoinstaller install
**Describe the bug**
Small issue, very minor: found a spelling mistake when running `mitoinstaller install`:
Starting install...
Create mito user
Upgrade mitoinstaller
Setting up **enviornment** <--- **environment**
Check dependencies
Remove mitosheet3 if present
Install mitosheet
This might take a few moments...
**To Reproduce**
Steps to reproduce the behavior:
1. Run `python -m mitoinstaller install`
Please include the relevant dataset if the bug you encountered is dataset specific. Make sure to anonymize the data properly.
**Expected behavior**
should be corrected to "environment"
**Screenshots**

**Desktop (please complete the following information):**
N/A
**Additional context**
N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitoinstaller/mitoinstaller/__main__.py`
Content:
```
1 """
2 The Mito Installer package contains utils for installing
3 Mito within your Python enviornment.
4
5 Long term, we aim to meet:
6 1. This package has minimal dependencies, both for speed of download and the ultimate portability.
7 2. The installation attempts to fail as early as possible, and to give the user as much help
8 help as possible while doing so.
9 """
10 from colorama import init
11 from termcolor import colored # type: ignore
12
13 from mitoinstaller.install import do_install
14
15
16 def main() -> None:
17 """
18 The main function of the Mito installer, this function is responsible
19 for installing and upgrading the `mitosheet` package.
20
21 To install Mito:
22 python -m mitoinstaller install
23
24 To upgrade Mito:
25 python -m mitoinstaller upgrade
26
27 To install Mito from TestPyPi
28 python -m mitoinstaller install --test-pypi
29 """
30 import sys
31 init()
32
33 if len(sys.argv) > 1:
34 command = sys.argv[1]
35 else:
36 command = ''
37
38 if command == 'install' or command == 'upgrade':
39 do_install()
40 elif command == 'uninstall':
41 print('To uninstall, run,', colored('`pip uninstall mitosheet`', 'green'))
42 else:
43 # NOTE: we don't add upgrade_to_jupyterlab_3 to the help.
44 # We only send this command to the users who need to know this (namely, those that need to upgrade)
45 print('\nProper usage is', colored('`python -m mitoinstaller install`', 'green'), 'or', colored('`python -m mitoinstaller upgrade`', 'green'), '\n\nTry running the command ', colored('`python -m mitoinstaller install`', 'green'), '\n')
46
47
48 if __name__ == '__main__':
49 main()
50
```
Path: `mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py`
Content:
```
1 import importlib
2 import os
3 import sys
4
5 from mitoinstaller import __version__
6 from mitoinstaller.commands import upgrade_mito_installer
7 from mitoinstaller.installer_steps.installer_step import InstallerStep
8 from mitoinstaller.jupyter_utils import set_prefered_jupyter_env_variable
9 from mitoinstaller.log_utils import identify, log
10 from mitoinstaller.user_install import (USER_JSON_PATH, go_pro,
11 try_create_user_json_file)
12
13
14 def initial_install_step_create_user():
15
16 if not os.path.exists(USER_JSON_PATH):
17 try_create_user_json_file(is_pro=('--pro' in sys.argv))
18
19 if not ('--pro' in sys.argv):
20 # Only try and log if we're not pro
21 identify()
22 log('install_started', {
23 'mitoinstaller_version': __version__
24 })
25 else:
26 # If the user is going pro, make sure they are set to pro
27 go_pro()
28
29 def initial_install_step_add_env_for_which_jupyter():
30 """
31 This install steps checks, up front, which very of jupyter we should
32 launch: lab or notebook. It then stores this as an enviornment variable
33 so that the final installer steps can launch it.
34
35 We do this up front, so that we can see which packages that user has
36 installed before installing Mito.
37 """
38 set_prefered_jupyter_env_variable()
39
40
41 INITIAL_INSTALLER_STEPS = [
42 InstallerStep(
43 'Create mito user',
44 initial_install_step_create_user
45 ),
46 InstallerStep(
47 'Upgrade mitoinstaller',
48 upgrade_mito_installer,
49 optional=True
50 ),
51 InstallerStep(
52 'Setting up enviornment',
53 initial_install_step_add_env_for_which_jupyter,
54 ),
55 ]
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitoinstaller/mitoinstaller/__main__.py b/mitoinstaller/mitoinstaller/__main__.py
--- a/mitoinstaller/mitoinstaller/__main__.py
+++ b/mitoinstaller/mitoinstaller/__main__.py
@@ -1,6 +1,6 @@
"""
The Mito Installer package contains utils for installing
-Mito within your Python enviornment.
+Mito within your Python environment.
Long term, we aim to meet:
1. This package has minimal dependencies, both for speed of download and the ultimate portability.
diff --git a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py
--- a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py
+++ b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py
@@ -29,7 +29,7 @@
def initial_install_step_add_env_for_which_jupyter():
"""
This install steps checks, up front, which very of jupyter we should
- launch: lab or notebook. It then stores this as an enviornment variable
+ launch: lab or notebook. It then stores this as an environment variable
so that the final installer steps can launch it.
We do this up front, so that we can see which packages that user has
@@ -49,7 +49,7 @@
optional=True
),
InstallerStep(
- 'Setting up enviornment',
+ 'Setting up environment',
initial_install_step_add_env_for_which_jupyter,
),
]
|
{"golden_diff": "diff --git a/mitoinstaller/mitoinstaller/__main__.py b/mitoinstaller/mitoinstaller/__main__.py\n--- a/mitoinstaller/mitoinstaller/__main__.py\n+++ b/mitoinstaller/mitoinstaller/__main__.py\n@@ -1,6 +1,6 @@\n \"\"\"\n The Mito Installer package contains utils for installing\n-Mito within your Python enviornment.\n+Mito within your Python environment.\n \n Long term, we aim to meet:\n 1. This package has minimal dependencies, both for speed of download and the ultimate portability.\ndiff --git a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py\n--- a/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py\n+++ b/mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py\n@@ -29,7 +29,7 @@\n def initial_install_step_add_env_for_which_jupyter():\n \"\"\"\n This install steps checks, up front, which very of jupyter we should\n- launch: lab or notebook. It then stores this as an enviornment variable\n+ launch: lab or notebook. It then stores this as an environment variable\n so that the final installer steps can launch it. \n \n We do this up front, so that we can see which packages that user has \n@@ -49,7 +49,7 @@\n optional=True\n ),\n InstallerStep(\n- 'Setting up enviornment',\n+ 'Setting up environment',\n initial_install_step_add_env_for_which_jupyter,\n ),\n ]\n", "issue": "spelling mistake during mitoinstaller install\n**Describe the bug**\r\nSmall issue, very minor, found a spelling mistake when running mitoinstaller install, \r\n\r\nStarting install...\r\nCreate mito user\r\nUpgrade mitoinstaller\r\nSetting up **enviornment** <--- **environment**\r\nCheck dependencies\r\nRemove mitosheet3 if present\r\nInstall mitosheet\r\nThis might take a few moments...\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. run python -m mitoinstaller install\r\n\r\nPlease include the relevant dataset if the bug you encountered is dataset specific. Make sure to anonymize the data properly.\r\n\r\n**Expected behavior**\r\nshould be corrected to \"environment\"\r\n\r\n**Screenshots**\r\n\r\n\r\n**Desktop (please complete the following information):**\r\nN/A\r\n\r\n**Additional context**\r\nN/A\r\n\n", "before_files": [{"content": "\"\"\"\nThe Mito Installer package contains utils for installing\nMito within your Python enviornment.\n\nLong term, we aim to meet:\n1. This package has minimal dependencies, both for speed of download and the ultimate portability.\n2. 
The installation attempts to fail as early as possible, and to give the user as much help\n help as possible while doing so.\n\"\"\"\nfrom colorama import init\nfrom termcolor import colored # type: ignore\n\nfrom mitoinstaller.install import do_install\n\n\ndef main() -> None:\n \"\"\"\n The main function of the Mito installer, this function is responsible\n for installing and upgrading the `mitosheet` package.\n\n To install Mito:\n python -m mitoinstaller install\n\n To upgrade Mito:\n python -m mitoinstaller upgrade\n\n To install Mito from TestPyPi\n python -m mitoinstaller install --test-pypi\n \"\"\"\n import sys\n init()\n\n if len(sys.argv) > 1:\n command = sys.argv[1]\n else:\n command = ''\n\n if command == 'install' or command == 'upgrade':\n do_install()\n elif command == 'uninstall':\n print('To uninstall, run,', colored('`pip uninstall mitosheet`', 'green'))\n else:\n # NOTE: we don't add upgrade_to_jupyterlab_3 to the help.\n # We only send this command to the users who need to know this (namely, those that need to upgrade)\n print('\\nProper usage is', colored('`python -m mitoinstaller install`', 'green'), 'or', colored('`python -m mitoinstaller upgrade`', 'green'), '\\n\\nTry running the command ', colored('`python -m mitoinstaller install`', 'green'), '\\n')\n \n\nif __name__ == '__main__':\n main()\n", "path": "mitoinstaller/mitoinstaller/__main__.py"}, {"content": "import importlib\nimport os\nimport sys\n\nfrom mitoinstaller import __version__\nfrom mitoinstaller.commands import upgrade_mito_installer\nfrom mitoinstaller.installer_steps.installer_step import InstallerStep\nfrom mitoinstaller.jupyter_utils import set_prefered_jupyter_env_variable\nfrom mitoinstaller.log_utils import identify, log\nfrom mitoinstaller.user_install import (USER_JSON_PATH, go_pro,\n try_create_user_json_file)\n\n\ndef initial_install_step_create_user():\n\n if not os.path.exists(USER_JSON_PATH):\n try_create_user_json_file(is_pro=('--pro' in sys.argv))\n\n if not ('--pro' in sys.argv):\n # Only try and log if we're not pro\n identify()\n log('install_started', {\n 'mitoinstaller_version': __version__\n })\n else:\n # If the user is going pro, make sure they are set to pro\n go_pro()\n\ndef initial_install_step_add_env_for_which_jupyter():\n \"\"\"\n This install steps checks, up front, which very of jupyter we should\n launch: lab or notebook. It then stores this as an enviornment variable\n so that the final installer steps can launch it. \n\n We do this up front, so that we can see which packages that user has \n installed before installing Mito.\n \"\"\"\n set_prefered_jupyter_env_variable()\n\n\nINITIAL_INSTALLER_STEPS = [\n InstallerStep(\n 'Create mito user',\n initial_install_step_create_user\n ),\n InstallerStep(\n 'Upgrade mitoinstaller',\n upgrade_mito_installer,\n optional=True\n ),\n InstallerStep(\n 'Setting up enviornment',\n initial_install_step_add_env_for_which_jupyter,\n ),\n]\n", "path": "mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py"}], "after_files": [{"content": "\"\"\"\nThe Mito Installer package contains utils for installing\nMito within your Python environment.\n\nLong term, we aim to meet:\n1. This package has minimal dependencies, both for speed of download and the ultimate portability.\n2. 
The installation attempts to fail as early as possible, and to give the user as much help\n help as possible while doing so.\n\"\"\"\nfrom colorama import init\nfrom termcolor import colored # type: ignore\n\nfrom mitoinstaller.install import do_install\n\n\ndef main() -> None:\n \"\"\"\n The main function of the Mito installer, this function is responsible\n for installing and upgrading the `mitosheet` package.\n\n To install Mito:\n python -m mitoinstaller install\n\n To upgrade Mito:\n python -m mitoinstaller upgrade\n\n To install Mito from TestPyPi\n python -m mitoinstaller install --test-pypi\n \"\"\"\n import sys\n init()\n\n if len(sys.argv) > 1:\n command = sys.argv[1]\n else:\n command = ''\n\n if command == 'install' or command == 'upgrade':\n do_install()\n elif command == 'uninstall':\n print('To uninstall, run,', colored('`pip uninstall mitosheet`', 'green'))\n else:\n # NOTE: we don't add upgrade_to_jupyterlab_3 to the help.\n # We only send this command to the users who need to know this (namely, those that need to upgrade)\n print('\\nProper usage is', colored('`python -m mitoinstaller install`', 'green'), 'or', colored('`python -m mitoinstaller upgrade`', 'green'), '\\n\\nTry running the command ', colored('`python -m mitoinstaller install`', 'green'), '\\n')\n \n\nif __name__ == '__main__':\n main()\n", "path": "mitoinstaller/mitoinstaller/__main__.py"}, {"content": "import importlib\nimport os\nimport sys\n\nfrom mitoinstaller import __version__\nfrom mitoinstaller.commands import upgrade_mito_installer\nfrom mitoinstaller.installer_steps.installer_step import InstallerStep\nfrom mitoinstaller.jupyter_utils import set_prefered_jupyter_env_variable\nfrom mitoinstaller.log_utils import identify, log\nfrom mitoinstaller.user_install import (USER_JSON_PATH, go_pro,\n try_create_user_json_file)\n\n\ndef initial_install_step_create_user():\n\n if not os.path.exists(USER_JSON_PATH):\n try_create_user_json_file(is_pro=('--pro' in sys.argv))\n\n if not ('--pro' in sys.argv):\n # Only try and log if we're not pro\n identify()\n log('install_started', {\n 'mitoinstaller_version': __version__\n })\n else:\n # If the user is going pro, make sure they are set to pro\n go_pro()\n\ndef initial_install_step_add_env_for_which_jupyter():\n \"\"\"\n This install steps checks, up front, which very of jupyter we should\n launch: lab or notebook. It then stores this as an environment variable\n so that the final installer steps can launch it. \n\n We do this up front, so that we can see which packages that user has \n installed before installing Mito.\n \"\"\"\n set_prefered_jupyter_env_variable()\n\n\nINITIAL_INSTALLER_STEPS = [\n InstallerStep(\n 'Create mito user',\n initial_install_step_create_user\n ),\n InstallerStep(\n 'Upgrade mitoinstaller',\n upgrade_mito_installer,\n optional=True\n ),\n InstallerStep(\n 'Setting up environment',\n initial_install_step_add_env_for_which_jupyter,\n ),\n]\n", "path": "mitoinstaller/mitoinstaller/installer_steps/initial_installer_steps.py"}]}
| 1,519 | 366 |
gh_patches_debug_55591
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-10633
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for Pillow 10.0.0
### Is your proposal related to a problem?
Pillow 10.0.0 [has just been released.](https://github.com/python-pillow/Pillow/releases/tag/10.0.0) Wagtail 5.0.2 [restricts Pillow support to <10.0.0.](https://github.com/wagtail/wagtail/blob/a68f69f2d7f46943cc23b7f65349448b23044869/setup.py#L30)
Adding support for the new Pillow release is desired.
### Describe the solution you'd like
Add support for Pillow 10.0.0
### Describe alternatives you've considered
Not applicable.
### Additional context
This is a relevant dependency to the project, and to sites running it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from wagtail import __version__
4 from wagtail.utils.setup import assets, check_bdist_egg, sdist
5
6 try:
7 from setuptools import find_packages, setup
8 except ImportError:
9 from distutils.core import setup
10
11
12 # Hack to prevent "TypeError: 'NoneType' object is not callable" error
13 # in multiprocessing/util.py _exit_function when setup.py exits
14 # (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
15 try:
16 import multiprocessing # noqa: F401
17 except ImportError:
18 pass
19
20
21 install_requires = [
22 "Django>=3.2,<4.3",
23 "django-modelcluster>=6.0,<7.0",
24 "django-permissionedforms>=0.1,<1.0",
25 "django-taggit>=2.0,<5.0",
26 "django-treebeard>=4.5.1,<5.0",
27 "djangorestframework>=3.11.1,<4.0",
28 "django-filter>=2.2,<24",
29 "draftjs_exporter>=2.1.5,<3.0",
30 "Pillow>=4.0.0,<10.0.0",
31 "beautifulsoup4>=4.8,<4.12",
32 "html5lib>=0.999,<2",
33 "Willow>=1.5,<1.6",
34 "requests>=2.11.1,<3.0",
35 "l18n>=2018.5",
36 "openpyxl>=3.0.10,<4.0",
37 "anyascii>=0.1.5",
38 "telepath>=0.1.1,<1",
39 ]
40
41 # Testing dependencies
42 testing_extras = [
43 # Required for running the tests
44 "python-dateutil>=2.7",
45 "pytz>=2014.7",
46 "elasticsearch>=5.0,<6.0",
47 "Jinja2>=3.0,<3.2",
48 "boto3>=1.16,<1.17",
49 "freezegun>=0.3.8",
50 "azure-mgmt-cdn>=12.0,<13.0",
51 "azure-mgmt-frontdoor>=1.0,<1.1",
52 "django-pattern-library>=0.7,<0.8",
53 # For coverage and PEP8 linting
54 "coverage>=3.7.0",
55 "black==22.3.0",
56 "doc8==0.8.1",
57 "ruff==0.0.272",
58 # For enforcing string formatting mechanism in source files
59 "semgrep==1.3.0",
60 # For templates linting
61 "curlylint==0.13.1",
62 # For template indenting
63 "djhtml==1.5.2",
64 # for validating string formats in .po translation files
65 "polib>=1.1,<2.0",
66 # For wagtail.test.utils.wagtail_factories (used for streamfield migration toolkit)
67 "factory-boy>=3.2",
68 ]
69
70 # Documentation dependencies
71 documentation_extras = [
72 "pyenchant>=3.1.1,<4",
73 "sphinxcontrib-spelling>=5.4.0,<6",
74 "Sphinx>=1.5.2",
75 "sphinx-autobuild>=0.6.0",
76 "sphinx-wagtail-theme==6.0.0",
77 "myst_parser==0.18.1",
78 "sphinx_copybutton>=0.5,<1.0",
79 ]
80
81 setup(
82 name="wagtail",
83 version=__version__,
84 description="A Django content management system.",
85 author="Wagtail core team + contributors",
86 author_email="[email protected]", # For support queries, please see https://docs.wagtail.org/en/stable/support.html
87 url="https://wagtail.org/",
88 project_urls={
89 "Documentation": "https://docs.wagtail.org",
90 "Source": "https://github.com/wagtail/wagtail",
91 },
92 packages=find_packages(),
93 include_package_data=True,
94 license="BSD",
95 long_description="Wagtail is an open source content management \
96 system built on Django, with a strong community and commercial support. \
97 It’s focused on user experience, and offers precise control for \
98 designers and developers.\n\n\
99 For more details, see https://wagtail.org, https://docs.wagtail.org and \
100 https://github.com/wagtail/wagtail/.",
101 classifiers=[
102 "Development Status :: 5 - Production/Stable",
103 "Environment :: Web Environment",
104 "Intended Audience :: Developers",
105 "License :: OSI Approved :: BSD License",
106 "Operating System :: OS Independent",
107 "Programming Language :: Python",
108 "Programming Language :: Python :: 3",
109 "Programming Language :: Python :: 3.7",
110 "Programming Language :: Python :: 3.8",
111 "Programming Language :: Python :: 3.9",
112 "Programming Language :: Python :: 3.10",
113 "Programming Language :: Python :: 3.11",
114 "Framework :: Django",
115 "Framework :: Django :: 3.2",
116 "Framework :: Django :: 4.1",
117 "Framework :: Django :: 4.2",
118 "Framework :: Wagtail",
119 "Topic :: Internet :: WWW/HTTP :: Site Management",
120 ],
121 python_requires=">=3.7",
122 install_requires=install_requires,
123 extras_require={"testing": testing_extras, "docs": documentation_extras},
124 entry_points="""
125 [console_scripts]
126 wagtail=wagtail.bin.wagtail:main
127 """,
128 zip_safe=False,
129 cmdclass={
130 "sdist": sdist,
131 "bdist_egg": check_bdist_egg,
132 "assets": assets,
133 },
134 )
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -27,7 +27,7 @@
"djangorestframework>=3.11.1,<4.0",
"django-filter>=2.2,<24",
"draftjs_exporter>=2.1.5,<3.0",
- "Pillow>=4.0.0,<10.0.0",
+ "Pillow>=9.1.0,<11.0.0",
"beautifulsoup4>=4.8,<4.12",
"html5lib>=0.999,<2",
"Willow>=1.5,<1.6",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -27,7 +27,7 @@\n \"djangorestframework>=3.11.1,<4.0\",\n \"django-filter>=2.2,<24\",\n \"draftjs_exporter>=2.1.5,<3.0\",\n- \"Pillow>=4.0.0,<10.0.0\",\n+ \"Pillow>=9.1.0,<11.0.0\",\n \"beautifulsoup4>=4.8,<4.12\",\n \"html5lib>=0.999,<2\",\n \"Willow>=1.5,<1.6\",\n", "issue": "Add support for Pillow 10.0.0\n### Is your proposal related to a problem?\r\n\r\nPillow 10.0.0 [has just been released.](https://github.com/python-pillow/Pillow/releases/tag/10.0.0) Wagtail 5.0.2 [restricts Pillow support to <10.0.0.](https://github.com/wagtail/wagtail/blob/a68f69f2d7f46943cc23b7f65349448b23044869/setup.py#L30)\r\n\r\nAdding support for the new Pillow release is desired.\r\n\r\n### Describe the solution you'd like\r\n\r\nAdd support for Pillow 10.0.0\r\n\r\n\r\n### Describe alternatives you've considered\r\n\r\nNot applicable.\r\n\r\n### Additional context\r\n\r\nThis is a relevant dependency to the project, and to sites running it.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom wagtail import __version__\nfrom wagtail.utils.setup import assets, check_bdist_egg, sdist\n\ntry:\n from setuptools import find_packages, setup\nexcept ImportError:\n from distutils.core import setup\n\n\n# Hack to prevent \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when setup.py exits\n# (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing # noqa: F401\nexcept ImportError:\n pass\n\n\ninstall_requires = [\n \"Django>=3.2,<4.3\",\n \"django-modelcluster>=6.0,<7.0\",\n \"django-permissionedforms>=0.1,<1.0\",\n \"django-taggit>=2.0,<5.0\",\n \"django-treebeard>=4.5.1,<5.0\",\n \"djangorestframework>=3.11.1,<4.0\",\n \"django-filter>=2.2,<24\",\n \"draftjs_exporter>=2.1.5,<3.0\",\n \"Pillow>=4.0.0,<10.0.0\",\n \"beautifulsoup4>=4.8,<4.12\",\n \"html5lib>=0.999,<2\",\n \"Willow>=1.5,<1.6\",\n \"requests>=2.11.1,<3.0\",\n \"l18n>=2018.5\",\n \"openpyxl>=3.0.10,<4.0\",\n \"anyascii>=0.1.5\",\n \"telepath>=0.1.1,<1\",\n]\n\n# Testing dependencies\ntesting_extras = [\n # Required for running the tests\n \"python-dateutil>=2.7\",\n \"pytz>=2014.7\",\n \"elasticsearch>=5.0,<6.0\",\n \"Jinja2>=3.0,<3.2\",\n \"boto3>=1.16,<1.17\",\n \"freezegun>=0.3.8\",\n \"azure-mgmt-cdn>=12.0,<13.0\",\n \"azure-mgmt-frontdoor>=1.0,<1.1\",\n \"django-pattern-library>=0.7,<0.8\",\n # For coverage and PEP8 linting\n \"coverage>=3.7.0\",\n \"black==22.3.0\",\n \"doc8==0.8.1\",\n \"ruff==0.0.272\",\n # For enforcing string formatting mechanism in source files\n \"semgrep==1.3.0\",\n # For templates linting\n \"curlylint==0.13.1\",\n # For template indenting\n \"djhtml==1.5.2\",\n # for validating string formats in .po translation files\n \"polib>=1.1,<2.0\",\n # For wagtail.test.utils.wagtail_factories (used for streamfield migration toolkit)\n \"factory-boy>=3.2\",\n]\n\n# Documentation dependencies\ndocumentation_extras = [\n \"pyenchant>=3.1.1,<4\",\n \"sphinxcontrib-spelling>=5.4.0,<6\",\n \"Sphinx>=1.5.2\",\n \"sphinx-autobuild>=0.6.0\",\n \"sphinx-wagtail-theme==6.0.0\",\n \"myst_parser==0.18.1\",\n \"sphinx_copybutton>=0.5,<1.0\",\n]\n\nsetup(\n name=\"wagtail\",\n version=__version__,\n description=\"A Django content management system.\",\n author=\"Wagtail core team + contributors\",\n author_email=\"[email protected]\", # For support queries, please see https://docs.wagtail.org/en/stable/support.html\n url=\"https://wagtail.org/\",\n 
project_urls={\n \"Documentation\": \"https://docs.wagtail.org\",\n \"Source\": \"https://github.com/wagtail/wagtail\",\n },\n packages=find_packages(),\n include_package_data=True,\n license=\"BSD\",\n long_description=\"Wagtail is an open source content management \\\nsystem built on Django, with a strong community and commercial support. \\\nIt\u2019s focused on user experience, and offers precise control for \\\ndesigners and developers.\\n\\n\\\nFor more details, see https://wagtail.org, https://docs.wagtail.org and \\\nhttps://github.com/wagtail/wagtail/.\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Framework :: Django\",\n \"Framework :: Django :: 3.2\",\n \"Framework :: Django :: 4.1\",\n \"Framework :: Django :: 4.2\",\n \"Framework :: Wagtail\",\n \"Topic :: Internet :: WWW/HTTP :: Site Management\",\n ],\n python_requires=\">=3.7\",\n install_requires=install_requires,\n extras_require={\"testing\": testing_extras, \"docs\": documentation_extras},\n entry_points=\"\"\"\n [console_scripts]\n wagtail=wagtail.bin.wagtail:main\n \"\"\",\n zip_safe=False,\n cmdclass={\n \"sdist\": sdist,\n \"bdist_egg\": check_bdist_egg,\n \"assets\": assets,\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom wagtail import __version__\nfrom wagtail.utils.setup import assets, check_bdist_egg, sdist\n\ntry:\n from setuptools import find_packages, setup\nexcept ImportError:\n from distutils.core import setup\n\n\n# Hack to prevent \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when setup.py exits\n# (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing # noqa: F401\nexcept ImportError:\n pass\n\n\ninstall_requires = [\n \"Django>=3.2,<4.3\",\n \"django-modelcluster>=6.0,<7.0\",\n \"django-permissionedforms>=0.1,<1.0\",\n \"django-taggit>=2.0,<5.0\",\n \"django-treebeard>=4.5.1,<5.0\",\n \"djangorestframework>=3.11.1,<4.0\",\n \"django-filter>=2.2,<24\",\n \"draftjs_exporter>=2.1.5,<3.0\",\n \"Pillow>=9.1.0,<11.0.0\",\n \"beautifulsoup4>=4.8,<4.12\",\n \"html5lib>=0.999,<2\",\n \"Willow>=1.5,<1.6\",\n \"requests>=2.11.1,<3.0\",\n \"l18n>=2018.5\",\n \"openpyxl>=3.0.10,<4.0\",\n \"anyascii>=0.1.5\",\n \"telepath>=0.1.1,<1\",\n]\n\n# Testing dependencies\ntesting_extras = [\n # Required for running the tests\n \"python-dateutil>=2.7\",\n \"pytz>=2014.7\",\n \"elasticsearch>=5.0,<6.0\",\n \"Jinja2>=3.0,<3.2\",\n \"boto3>=1.16,<1.17\",\n \"freezegun>=0.3.8\",\n \"azure-mgmt-cdn>=12.0,<13.0\",\n \"azure-mgmt-frontdoor>=1.0,<1.1\",\n \"django-pattern-library>=0.7,<0.8\",\n # For coverage and PEP8 linting\n \"coverage>=3.7.0\",\n \"black==22.3.0\",\n \"doc8==0.8.1\",\n \"ruff==0.0.272\",\n # For enforcing string formatting mechanism in source files\n \"semgrep==1.3.0\",\n # For templates linting\n \"curlylint==0.13.1\",\n # For template indenting\n \"djhtml==1.5.2\",\n # for validating string formats in .po translation files\n \"polib>=1.1,<2.0\",\n # For 
wagtail.test.utils.wagtail_factories (used for streamfield migration toolkit)\n \"factory-boy>=3.2\",\n]\n\n# Documentation dependencies\ndocumentation_extras = [\n \"pyenchant>=3.1.1,<4\",\n \"sphinxcontrib-spelling>=5.4.0,<6\",\n \"Sphinx>=1.5.2\",\n \"sphinx-autobuild>=0.6.0\",\n \"sphinx-wagtail-theme==6.0.0\",\n \"myst_parser==0.18.1\",\n \"sphinx_copybutton>=0.5,<1.0\",\n]\n\nsetup(\n name=\"wagtail\",\n version=__version__,\n description=\"A Django content management system.\",\n author=\"Wagtail core team + contributors\",\n author_email=\"[email protected]\", # For support queries, please see https://docs.wagtail.org/en/stable/support.html\n url=\"https://wagtail.org/\",\n project_urls={\n \"Documentation\": \"https://docs.wagtail.org\",\n \"Source\": \"https://github.com/wagtail/wagtail\",\n },\n packages=find_packages(),\n include_package_data=True,\n license=\"BSD\",\n long_description=\"Wagtail is an open source content management \\\nsystem built on Django, with a strong community and commercial support. \\\nIt\u2019s focused on user experience, and offers precise control for \\\ndesigners and developers.\\n\\n\\\nFor more details, see https://wagtail.org, https://docs.wagtail.org and \\\nhttps://github.com/wagtail/wagtail/.\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Framework :: Django\",\n \"Framework :: Django :: 3.2\",\n \"Framework :: Django :: 4.1\",\n \"Framework :: Django :: 4.2\",\n \"Framework :: Wagtail\",\n \"Topic :: Internet :: WWW/HTTP :: Site Management\",\n ],\n python_requires=\">=3.7\",\n install_requires=install_requires,\n extras_require={\"testing\": testing_extras, \"docs\": documentation_extras},\n entry_points=\"\"\"\n [console_scripts]\n wagtail=wagtail.bin.wagtail:main\n \"\"\",\n zip_safe=False,\n cmdclass={\n \"sdist\": sdist,\n \"bdist_egg\": check_bdist_egg,\n \"assets\": assets,\n },\n)\n", "path": "setup.py"}]}
| 2,067 | 159 |
gh_patches_debug_26313
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-439
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
New Bernoulli parameterization lead to subtle bug in AIR model
When I merged #416 into a local branch it broke the AIR example.
Here's a simplified instance of the problem:
```python
import torch
from torch.autograd import Variable
from pyro.distributions import Bernoulli
from torch.nn.functional import sigmoid
p = Variable(torch.Tensor([0]), requires_grad=True)
b = Bernoulli(sigmoid(p) * 0.0)
log_pdf = b.batch_log_pdf(Variable(torch.Tensor([0])))
log_pdf.sum().backward()
print(p.grad)
```
Prior to #416 this returned a `grad` of zero as expected, but it now returns `nan`:
```
pyro$ git rev-parse --short HEAD
71bca18
pyro$ python3 bern.py
Variable containing:
-0
[torch.FloatTensor of size 1]
pyro$ git rev-parse --short HEAD
a85525a
pyro$ python3 bern.py
Variable containing:
nan
[torch.FloatTensor of size 1]
```
I suspect that the problem is that converting between probabilities and log odds introduces an intemediate `-inf`, which messes up autograd.
I may be able to adjust the model to work around this, but either way, should this be considered a bug? (It seems like it could trip other people up, and chasing down the source of the `nan`s is tricky.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/distributions/util.py`
Content:
```
1 import torch
2 import torch.nn.functional as F
3 from torch.autograd import Variable
4
5
6 def log_gamma(xx):
7 if isinstance(xx, Variable):
8 ttype = xx.data.type()
9 elif isinstance(xx, torch.Tensor):
10 ttype = xx.type()
11 gamma_coeff = [
12 76.18009172947146,
13 -86.50532032941677,
14 24.01409824083091,
15 -1.231739572450155,
16 0.1208650973866179e-2,
17 -0.5395239384953e-5,
18 ]
19 magic1 = 1.000000000190015
20 magic2 = 2.5066282746310005
21 x = xx - 1.0
22 t = x + 5.5
23 t = t - (x + 0.5) * torch.log(t)
24 ser = Variable(torch.ones(x.size()).type(ttype)) * magic1
25 for c in gamma_coeff:
26 x = x + 1.0
27 ser = ser + torch.pow(x / c, -1)
28 return torch.log(ser * magic2) - t
29
30
31 def log_beta(t):
32 """
33 Computes log Beta function.
34
35 :param t:
36 :type t: torch.autograd.Variable of dimension 1 or 2
37 :rtype: torch.autograd.Variable of float (if t.dim() == 1) or torch.Tensor (if t.dim() == 2)
38 """
39 assert t.dim() in (1, 2)
40 if t.dim() == 1:
41 numer = torch.sum(log_gamma(t))
42 denom = log_gamma(torch.sum(t))
43 else:
44 numer = torch.sum(log_gamma(t), 1)
45 denom = log_gamma(torch.sum(t, 1))
46 return numer - denom
47
48
49 def move_to_same_host_as(source, destin):
50 """
51 Returns source or a copy of `source` such that `source.is_cuda == `destin.is_cuda`.
52 """
53 return source.cuda() if destin.is_cuda else source.cpu()
54
55
56 def torch_zeros_like(x):
57 """
58 Polyfill for `torch.zeros_like()`.
59 """
60 # Work around https://github.com/pytorch/pytorch/issues/2906
61 if isinstance(x, Variable):
62 return Variable(torch_zeros_like(x.data))
63 # Support Pytorch before https://github.com/pytorch/pytorch/pull/2489
64 try:
65 return torch.zeros_like(x)
66 except AttributeError:
67 return torch.zeros(x.size()).type_as(x)
68
69
70 def torch_ones_like(x):
71 """
72 Polyfill for `torch.ones_like()`.
73 """
74 # Work around https://github.com/pytorch/pytorch/issues/2906
75 if isinstance(x, Variable):
76 return Variable(torch_ones_like(x.data))
77 # Support Pytorch before https://github.com/pytorch/pytorch/pull/2489
78 try:
79 return torch.ones_like(x)
80 except AttributeError:
81 return torch.ones(x.size()).type_as(x)
82
83
84 def torch_eye(n, m=None, out=None):
85 """
86 Like `torch.eye()`, but works with cuda tensors.
87 """
88 if m is None:
89 m = n
90 try:
91 return torch.eye(n, m, out=out)
92 except TypeError:
93 # Only catch errors due to torch.eye() not being availble for cuda tensors.
94 module = torch.Tensor.__module__ if out is None else type(out).__module__
95 if module != 'torch.cuda':
96 raise
97 Tensor = getattr(torch, torch.Tensor.__name__)
98 cpu_out = Tensor(n, m)
99 cuda_out = torch.eye(m, n, out=cpu_out).cuda()
100 return cuda_out if out is None else out.copy_(cuda_out)
101
102
103 def torch_multinomial(input, num_samples, replacement=False):
104 """
105 Like `torch.multinomial()` but works with cuda tensors.
106 Does not support keyword argument `out`.
107 """
108 if input.is_cuda:
109 return torch_multinomial(input.cpu(), num_samples, replacement).cuda()
110 else:
111 return torch.multinomial(input, num_samples, replacement)
112
113
114 def softmax(x, dim=-1):
115 """
116 TODO: change to use the default pyTorch implementation when available
117 Source: https://discuss.pytorch.org/t/why-softmax-function-cant-specify-the-dimension-to-operate/2637
118 :param x: tensor
119 :param dim: Dimension to apply the softmax function to. The elements of the tensor in this
120 dimension must sum to 1.
121 :return: tensor having the same dimension as `x` rescaled along dim
122 """
123 input_size = x.size()
124
125 trans_input = x.transpose(dim, len(input_size) - 1)
126 trans_size = trans_input.size()
127
128 input_2d = trans_input.contiguous().view(-1, trans_size[-1])
129
130 soft_max_2d = F.softmax(input_2d)
131
132 soft_max_nd = soft_max_2d.view(*trans_size)
133 return soft_max_nd.transpose(dim, len(input_size) - 1)
134
135
136 def get_probs_and_logits(ps=None, logits=None, is_multidimensional=True):
137 """
138 Convert probability values to logits, or vice-versa. Either `ps` or
139 `logits` should be specified, but not both.
140
141 :param ps: tensor of probabilities. Should be in the interval *[0, 1]*.
142 If, `is_multidimensional = True`, then must be normalized along
143 axis -1.
144 :param logits: tensor of logit values.
145 :param is_multidimensional: determines the computation of ps from logits,
146 and vice-versa. For the multi-dimensional case, logit values are
147 assumed to be non-normalized log probabilities, whereas for the uni-
148 dimensional case, it specifically refers to log odds.
149 :return: tuple containing raw probabilities and logits as tensors
150 """
151 assert (ps is None) != (logits is None)
152 if is_multidimensional:
153 if ps is None:
154 ps = softmax(logits, -1)
155 else:
156 logits = torch.log(ps)
157 else:
158 if ps is None:
159 ps = F.sigmoid(logits)
160 else:
161 logits = torch.log(ps) - torch.log1p(-ps)
162 return ps, logits
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyro/distributions/util.py b/pyro/distributions/util.py
--- a/pyro/distributions/util.py
+++ b/pyro/distributions/util.py
@@ -133,6 +133,15 @@
return soft_max_nd.transpose(dim, len(input_size) - 1)
+def _get_clamping_buffer(tensor):
+ clamp_eps = 1e-6
+ if isinstance(tensor, Variable):
+ tensor = tensor.data
+ if isinstance(tensor, (torch.DoubleTensor, torch.cuda.DoubleTensor)):
+ clamp_eps = 1e-15
+ return clamp_eps
+
+
def get_probs_and_logits(ps=None, logits=None, is_multidimensional=True):
"""
Convert probability values to logits, or vice-versa. Either `ps` or
@@ -149,14 +158,17 @@
:return: tuple containing raw probabilities and logits as tensors
"""
assert (ps is None) != (logits is None)
+ if ps is not None:
+ eps = _get_clamping_buffer(ps)
+ ps_clamped = ps.clamp(min=eps, max=1 - eps)
if is_multidimensional:
if ps is None:
ps = softmax(logits, -1)
else:
- logits = torch.log(ps)
+ logits = torch.log(ps_clamped)
else:
if ps is None:
ps = F.sigmoid(logits)
else:
- logits = torch.log(ps) - torch.log1p(-ps)
+ logits = torch.log(ps_clamped) - torch.log1p(-ps_clamped)
return ps, logits
|
{"golden_diff": "diff --git a/pyro/distributions/util.py b/pyro/distributions/util.py\n--- a/pyro/distributions/util.py\n+++ b/pyro/distributions/util.py\n@@ -133,6 +133,15 @@\n return soft_max_nd.transpose(dim, len(input_size) - 1)\n \n \n+def _get_clamping_buffer(tensor):\n+ clamp_eps = 1e-6\n+ if isinstance(tensor, Variable):\n+ tensor = tensor.data\n+ if isinstance(tensor, (torch.DoubleTensor, torch.cuda.DoubleTensor)):\n+ clamp_eps = 1e-15\n+ return clamp_eps\n+\n+\n def get_probs_and_logits(ps=None, logits=None, is_multidimensional=True):\n \"\"\"\n Convert probability values to logits, or vice-versa. Either `ps` or\n@@ -149,14 +158,17 @@\n :return: tuple containing raw probabilities and logits as tensors\n \"\"\"\n assert (ps is None) != (logits is None)\n+ if ps is not None:\n+ eps = _get_clamping_buffer(ps)\n+ ps_clamped = ps.clamp(min=eps, max=1 - eps)\n if is_multidimensional:\n if ps is None:\n ps = softmax(logits, -1)\n else:\n- logits = torch.log(ps)\n+ logits = torch.log(ps_clamped)\n else:\n if ps is None:\n ps = F.sigmoid(logits)\n else:\n- logits = torch.log(ps) - torch.log1p(-ps)\n+ logits = torch.log(ps_clamped) - torch.log1p(-ps_clamped)\n return ps, logits\n", "issue": "New Bernoulli parameterization lead to subtle bug in AIR model\nWhen I merged #416 into a local branch it broke the AIR example.\r\n\r\nHere's a simplified instance of the problem:\r\n\r\n```python\r\nimport torch\r\nfrom torch.autograd import Variable\r\nfrom pyro.distributions import Bernoulli\r\nfrom torch.nn.functional import sigmoid\r\n\r\np = Variable(torch.Tensor([0]), requires_grad=True)\r\nb = Bernoulli(sigmoid(p) * 0.0)\r\nlog_pdf = b.batch_log_pdf(Variable(torch.Tensor([0])))\r\nlog_pdf.sum().backward()\r\nprint(p.grad)\r\n```\r\n\r\nPrior to #416 this returned a `grad` of zero as expected, but it now returns `nan`:\r\n\r\n```\r\npyro$ git rev-parse --short HEAD\r\n71bca18\r\npyro$ python3 bern.py \r\nVariable containing:\r\n-0\r\n[torch.FloatTensor of size 1]\r\n\r\npyro$ git rev-parse --short HEAD\r\na85525a\r\npyro$ python3 bern.py \r\nVariable containing:\r\nnan\r\n[torch.FloatTensor of size 1]\r\n```\r\n\r\nI suspect that the problem is that converting between probabilities and log odds introduces an intemediate `-inf`, which messes up autograd.\r\n\r\nI may be able to adjust the model to work around this, but either way, should this be considered a bug? 
(It seems like it could trip other people up, and chasing down the source of the `nan`s is tricky.)\r\n\n", "before_files": [{"content": "import torch\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\n\ndef log_gamma(xx):\n if isinstance(xx, Variable):\n ttype = xx.data.type()\n elif isinstance(xx, torch.Tensor):\n ttype = xx.type()\n gamma_coeff = [\n 76.18009172947146,\n -86.50532032941677,\n 24.01409824083091,\n -1.231739572450155,\n 0.1208650973866179e-2,\n -0.5395239384953e-5,\n ]\n magic1 = 1.000000000190015\n magic2 = 2.5066282746310005\n x = xx - 1.0\n t = x + 5.5\n t = t - (x + 0.5) * torch.log(t)\n ser = Variable(torch.ones(x.size()).type(ttype)) * magic1\n for c in gamma_coeff:\n x = x + 1.0\n ser = ser + torch.pow(x / c, -1)\n return torch.log(ser * magic2) - t\n\n\ndef log_beta(t):\n \"\"\"\n Computes log Beta function.\n\n :param t:\n :type t: torch.autograd.Variable of dimension 1 or 2\n :rtype: torch.autograd.Variable of float (if t.dim() == 1) or torch.Tensor (if t.dim() == 2)\n \"\"\"\n assert t.dim() in (1, 2)\n if t.dim() == 1:\n numer = torch.sum(log_gamma(t))\n denom = log_gamma(torch.sum(t))\n else:\n numer = torch.sum(log_gamma(t), 1)\n denom = log_gamma(torch.sum(t, 1))\n return numer - denom\n\n\ndef move_to_same_host_as(source, destin):\n \"\"\"\n Returns source or a copy of `source` such that `source.is_cuda == `destin.is_cuda`.\n \"\"\"\n return source.cuda() if destin.is_cuda else source.cpu()\n\n\ndef torch_zeros_like(x):\n \"\"\"\n Polyfill for `torch.zeros_like()`.\n \"\"\"\n # Work around https://github.com/pytorch/pytorch/issues/2906\n if isinstance(x, Variable):\n return Variable(torch_zeros_like(x.data))\n # Support Pytorch before https://github.com/pytorch/pytorch/pull/2489\n try:\n return torch.zeros_like(x)\n except AttributeError:\n return torch.zeros(x.size()).type_as(x)\n\n\ndef torch_ones_like(x):\n \"\"\"\n Polyfill for `torch.ones_like()`.\n \"\"\"\n # Work around https://github.com/pytorch/pytorch/issues/2906\n if isinstance(x, Variable):\n return Variable(torch_ones_like(x.data))\n # Support Pytorch before https://github.com/pytorch/pytorch/pull/2489\n try:\n return torch.ones_like(x)\n except AttributeError:\n return torch.ones(x.size()).type_as(x)\n\n\ndef torch_eye(n, m=None, out=None):\n \"\"\"\n Like `torch.eye()`, but works with cuda tensors.\n \"\"\"\n if m is None:\n m = n\n try:\n return torch.eye(n, m, out=out)\n except TypeError:\n # Only catch errors due to torch.eye() not being availble for cuda tensors.\n module = torch.Tensor.__module__ if out is None else type(out).__module__\n if module != 'torch.cuda':\n raise\n Tensor = getattr(torch, torch.Tensor.__name__)\n cpu_out = Tensor(n, m)\n cuda_out = torch.eye(m, n, out=cpu_out).cuda()\n return cuda_out if out is None else out.copy_(cuda_out)\n\n\ndef torch_multinomial(input, num_samples, replacement=False):\n \"\"\"\n Like `torch.multinomial()` but works with cuda tensors.\n Does not support keyword argument `out`.\n \"\"\"\n if input.is_cuda:\n return torch_multinomial(input.cpu(), num_samples, replacement).cuda()\n else:\n return torch.multinomial(input, num_samples, replacement)\n\n\ndef softmax(x, dim=-1):\n \"\"\"\n TODO: change to use the default pyTorch implementation when available\n Source: https://discuss.pytorch.org/t/why-softmax-function-cant-specify-the-dimension-to-operate/2637\n :param x: tensor\n :param dim: Dimension to apply the softmax function to. 
The elements of the tensor in this\n dimension must sum to 1.\n :return: tensor having the same dimension as `x` rescaled along dim\n \"\"\"\n input_size = x.size()\n\n trans_input = x.transpose(dim, len(input_size) - 1)\n trans_size = trans_input.size()\n\n input_2d = trans_input.contiguous().view(-1, trans_size[-1])\n\n soft_max_2d = F.softmax(input_2d)\n\n soft_max_nd = soft_max_2d.view(*trans_size)\n return soft_max_nd.transpose(dim, len(input_size) - 1)\n\n\ndef get_probs_and_logits(ps=None, logits=None, is_multidimensional=True):\n \"\"\"\n Convert probability values to logits, or vice-versa. Either `ps` or\n `logits` should be specified, but not both.\n\n :param ps: tensor of probabilities. Should be in the interval *[0, 1]*.\n If, `is_multidimensional = True`, then must be normalized along\n axis -1.\n :param logits: tensor of logit values.\n :param is_multidimensional: determines the computation of ps from logits,\n and vice-versa. For the multi-dimensional case, logit values are\n assumed to be non-normalized log probabilities, whereas for the uni-\n dimensional case, it specifically refers to log odds.\n :return: tuple containing raw probabilities and logits as tensors\n \"\"\"\n assert (ps is None) != (logits is None)\n if is_multidimensional:\n if ps is None:\n ps = softmax(logits, -1)\n else:\n logits = torch.log(ps)\n else:\n if ps is None:\n ps = F.sigmoid(logits)\n else:\n logits = torch.log(ps) - torch.log1p(-ps)\n return ps, logits\n", "path": "pyro/distributions/util.py"}], "after_files": [{"content": "import torch\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\n\n\ndef log_gamma(xx):\n if isinstance(xx, Variable):\n ttype = xx.data.type()\n elif isinstance(xx, torch.Tensor):\n ttype = xx.type()\n gamma_coeff = [\n 76.18009172947146,\n -86.50532032941677,\n 24.01409824083091,\n -1.231739572450155,\n 0.1208650973866179e-2,\n -0.5395239384953e-5,\n ]\n magic1 = 1.000000000190015\n magic2 = 2.5066282746310005\n x = xx - 1.0\n t = x + 5.5\n t = t - (x + 0.5) * torch.log(t)\n ser = Variable(torch.ones(x.size()).type(ttype)) * magic1\n for c in gamma_coeff:\n x = x + 1.0\n ser = ser + torch.pow(x / c, -1)\n return torch.log(ser * magic2) - t\n\n\ndef log_beta(t):\n \"\"\"\n Computes log Beta function.\n\n :param t:\n :type t: torch.autograd.Variable of dimension 1 or 2\n :rtype: torch.autograd.Variable of float (if t.dim() == 1) or torch.Tensor (if t.dim() == 2)\n \"\"\"\n assert t.dim() in (1, 2)\n if t.dim() == 1:\n numer = torch.sum(log_gamma(t))\n denom = log_gamma(torch.sum(t))\n else:\n numer = torch.sum(log_gamma(t), 1)\n denom = log_gamma(torch.sum(t, 1))\n return numer - denom\n\n\ndef move_to_same_host_as(source, destin):\n \"\"\"\n Returns source or a copy of `source` such that `source.is_cuda == `destin.is_cuda`.\n \"\"\"\n return source.cuda() if destin.is_cuda else source.cpu()\n\n\ndef torch_zeros_like(x):\n \"\"\"\n Polyfill for `torch.zeros_like()`.\n \"\"\"\n # Work around https://github.com/pytorch/pytorch/issues/2906\n if isinstance(x, Variable):\n return Variable(torch_zeros_like(x.data))\n # Support Pytorch before https://github.com/pytorch/pytorch/pull/2489\n try:\n return torch.zeros_like(x)\n except AttributeError:\n return torch.zeros(x.size()).type_as(x)\n\n\ndef torch_ones_like(x):\n \"\"\"\n Polyfill for `torch.ones_like()`.\n \"\"\"\n # Work around https://github.com/pytorch/pytorch/issues/2906\n if isinstance(x, Variable):\n return Variable(torch_ones_like(x.data))\n # Support Pytorch before 
https://github.com/pytorch/pytorch/pull/2489\n try:\n return torch.ones_like(x)\n except AttributeError:\n return torch.ones(x.size()).type_as(x)\n\n\ndef torch_eye(n, m=None, out=None):\n \"\"\"\n Like `torch.eye()`, but works with cuda tensors.\n \"\"\"\n if m is None:\n m = n\n try:\n return torch.eye(n, m, out=out)\n except TypeError:\n # Only catch errors due to torch.eye() not being availble for cuda tensors.\n module = torch.Tensor.__module__ if out is None else type(out).__module__\n if module != 'torch.cuda':\n raise\n Tensor = getattr(torch, torch.Tensor.__name__)\n cpu_out = Tensor(n, m)\n cuda_out = torch.eye(m, n, out=cpu_out).cuda()\n return cuda_out if out is None else out.copy_(cuda_out)\n\n\ndef torch_multinomial(input, num_samples, replacement=False):\n \"\"\"\n Like `torch.multinomial()` but works with cuda tensors.\n Does not support keyword argument `out`.\n \"\"\"\n if input.is_cuda:\n return torch_multinomial(input.cpu(), num_samples, replacement).cuda()\n else:\n return torch.multinomial(input, num_samples, replacement)\n\n\ndef softmax(x, dim=-1):\n \"\"\"\n TODO: change to use the default pyTorch implementation when available\n Source: https://discuss.pytorch.org/t/why-softmax-function-cant-specify-the-dimension-to-operate/2637\n :param x: tensor\n :param dim: Dimension to apply the softmax function to. The elements of the tensor in this\n dimension must sum to 1.\n :return: tensor having the same dimension as `x` rescaled along dim\n \"\"\"\n input_size = x.size()\n\n trans_input = x.transpose(dim, len(input_size) - 1)\n trans_size = trans_input.size()\n\n input_2d = trans_input.contiguous().view(-1, trans_size[-1])\n\n soft_max_2d = F.softmax(input_2d)\n\n soft_max_nd = soft_max_2d.view(*trans_size)\n return soft_max_nd.transpose(dim, len(input_size) - 1)\n\n\ndef _get_clamping_buffer(tensor):\n clamp_eps = 1e-6\n if isinstance(tensor, Variable):\n tensor = tensor.data\n if isinstance(tensor, (torch.DoubleTensor, torch.cuda.DoubleTensor)):\n clamp_eps = 1e-15\n return clamp_eps\n\n\ndef get_probs_and_logits(ps=None, logits=None, is_multidimensional=True):\n \"\"\"\n Convert probability values to logits, or vice-versa. Either `ps` or\n `logits` should be specified, but not both.\n\n :param ps: tensor of probabilities. Should be in the interval *[0, 1]*.\n If, `is_multidimensional = True`, then must be normalized along\n axis -1.\n :param logits: tensor of logit values.\n :param is_multidimensional: determines the computation of ps from logits,\n and vice-versa. For the multi-dimensional case, logit values are\n assumed to be non-normalized log probabilities, whereas for the uni-\n dimensional case, it specifically refers to log odds.\n :return: tuple containing raw probabilities and logits as tensors\n \"\"\"\n assert (ps is None) != (logits is None)\n if ps is not None:\n eps = _get_clamping_buffer(ps)\n ps_clamped = ps.clamp(min=eps, max=1 - eps)\n if is_multidimensional:\n if ps is None:\n ps = softmax(logits, -1)\n else:\n logits = torch.log(ps_clamped)\n else:\n if ps is None:\n ps = F.sigmoid(logits)\n else:\n logits = torch.log(ps_clamped) - torch.log1p(-ps_clamped)\n return ps, logits\n", "path": "pyro/distributions/util.py"}]}
| 2,407 | 363 |
gh_patches_debug_1231
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-583
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support Python 3.9
Python 3.9 will be released 2020-10-05.
Here are some steps before its release:
* Start testing with prerelease
After release:
* Ensure tests run with released version
* Add 3.9 PyPI classifier
* Enable PYthon wheel building in release
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import os
5 import sys
6
7 from setuptools import Extension, find_packages, setup
8
9 with open("README.md", "r") as fp:
10 long_description = fp.read()
11
12 packages = find_packages("src")
13 if sys.version_info < (3, 6):
14 packages = [p for p in packages if not p.startswith("scout_apm.async_")]
15
16 compile_extensions = (
17 # Python 3+
18 sys.version_info >= (3,)
19 # Not Jython
20 and not sys.platform.startswith("java")
21 # Not PyPy
22 and "__pypy__" not in sys.builtin_module_names
23 # Not explicitly disabled
24 and (os.environ.get("SCOUT_DISABLE_EXTENSIONS", "") == "")
25 )
26 if compile_extensions:
27 ext_modules = [
28 Extension(
29 name=str("scout_apm.core._objtrace"),
30 sources=[str("src/scout_apm/core/_objtrace.c")],
31 optional=True,
32 )
33 ]
34 else:
35 ext_modules = []
36
37 setup(
38 name="scout_apm",
39 version="2.16.2",
40 description="Scout Application Performance Monitoring Agent",
41 long_description=long_description,
42 long_description_content_type="text/markdown",
43 url="https://github.com/scoutapp/scout_apm_python",
44 project_urls={
45 "Documentation": "https://docs.scoutapm.com/#python-agent",
46 "Changelog": (
47 "https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md"
48 ),
49 },
50 author="Scout",
51 author_email="[email protected]",
52 license="MIT",
53 zip_safe=False,
54 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4",
55 packages=packages,
56 package_dir={str(""): str("src")},
57 ext_modules=ext_modules,
58 entry_points={
59 "console_scripts": [
60 "core-agent-manager = scout_apm.core.cli.core_agent_manager:main"
61 ]
62 },
63 install_requires=[
64 'asgiref ; python_version >= "3.5"',
65 'importlib-metadata ; python_version < "3.8"',
66 "psutil>=5,<6",
67 'urllib3[secure] < 1.25 ; python_version < "3.5"',
68 'urllib3[secure] < 2 ; python_version >= "3.5"',
69 "wrapt>=1.10,<2.0",
70 ],
71 keywords="apm performance monitoring development",
72 classifiers=[
73 "Development Status :: 5 - Production/Stable",
74 "Framework :: Bottle",
75 "Framework :: Django",
76 "Framework :: Django :: 1.8",
77 "Framework :: Django :: 1.9",
78 "Framework :: Django :: 1.10",
79 "Framework :: Django :: 1.11",
80 "Framework :: Django :: 2.0",
81 "Framework :: Django :: 2.1",
82 "Framework :: Django :: 2.2",
83 "Framework :: Django :: 3.0",
84 "Framework :: Django :: 3.1",
85 "Framework :: Flask",
86 "Framework :: Pyramid",
87 "Intended Audience :: Developers",
88 "Topic :: System :: Monitoring",
89 "License :: OSI Approved :: MIT License",
90 "Operating System :: MacOS",
91 "Operating System :: POSIX",
92 "Operating System :: POSIX :: Linux",
93 "Programming Language :: Python :: 2",
94 "Programming Language :: Python :: 2.7",
95 "Programming Language :: Python :: 3",
96 "Programming Language :: Python :: 3.4",
97 "Programming Language :: Python :: 3.5",
98 "Programming Language :: Python :: 3.6",
99 "Programming Language :: Python :: 3.7",
100 "Programming Language :: Python :: 3.8",
101 ],
102 )
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -98,5 +98,6 @@
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
],
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -98,5 +98,6 @@\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n+ \"Programming Language :: Python :: 3.9\",\n ],\n )\n", "issue": "Support Python 3.9\nPython 3.9 will be released 2020-10-05.\r\n\r\nHere are some steps before its release:\r\n\r\n* Start testing with prerelease\r\n\r\nAfter release:\r\n* Ensure tests run with released version\r\n* Add 3.9 PyPI classifier\r\n* Enable PYthon wheel building in release\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nimport sys\n\nfrom setuptools import Extension, find_packages, setup\n\nwith open(\"README.md\", \"r\") as fp:\n long_description = fp.read()\n\npackages = find_packages(\"src\")\nif sys.version_info < (3, 6):\n packages = [p for p in packages if not p.startswith(\"scout_apm.async_\")]\n\ncompile_extensions = (\n # Python 3+\n sys.version_info >= (3,)\n # Not Jython\n and not sys.platform.startswith(\"java\")\n # Not PyPy\n and \"__pypy__\" not in sys.builtin_module_names\n # Not explicitly disabled\n and (os.environ.get(\"SCOUT_DISABLE_EXTENSIONS\", \"\") == \"\")\n)\nif compile_extensions:\n ext_modules = [\n Extension(\n name=str(\"scout_apm.core._objtrace\"),\n sources=[str(\"src/scout_apm/core/_objtrace.c\")],\n optional=True,\n )\n ]\nelse:\n ext_modules = []\n\nsetup(\n name=\"scout_apm\",\n version=\"2.16.2\",\n description=\"Scout Application Performance Monitoring Agent\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/scoutapp/scout_apm_python\",\n project_urls={\n \"Documentation\": \"https://docs.scoutapm.com/#python-agent\",\n \"Changelog\": (\n \"https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md\"\n ),\n },\n author=\"Scout\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n zip_safe=False,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4\",\n packages=packages,\n package_dir={str(\"\"): str(\"src\")},\n ext_modules=ext_modules,\n entry_points={\n \"console_scripts\": [\n \"core-agent-manager = scout_apm.core.cli.core_agent_manager:main\"\n ]\n },\n install_requires=[\n 'asgiref ; python_version >= \"3.5\"',\n 'importlib-metadata ; python_version < \"3.8\"',\n \"psutil>=5,<6\",\n 'urllib3[secure] < 1.25 ; python_version < \"3.5\"',\n 'urllib3[secure] < 2 ; python_version >= \"3.5\"',\n \"wrapt>=1.10,<2.0\",\n ],\n keywords=\"apm performance monitoring development\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Bottle\",\n \"Framework :: Django\",\n \"Framework :: Django :: 1.8\",\n \"Framework :: Django :: 1.9\",\n \"Framework :: Django :: 1.10\",\n \"Framework :: Django :: 1.11\",\n \"Framework :: Django :: 2.0\",\n \"Framework :: Django :: 2.1\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Flask\",\n \"Framework :: Pyramid\",\n \"Intended Audience :: Developers\",\n \"Topic :: System :: Monitoring\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 
3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nimport sys\n\nfrom setuptools import Extension, find_packages, setup\n\nwith open(\"README.md\", \"r\") as fp:\n long_description = fp.read()\n\npackages = find_packages(\"src\")\nif sys.version_info < (3, 6):\n packages = [p for p in packages if not p.startswith(\"scout_apm.async_\")]\n\ncompile_extensions = (\n # Python 3+\n sys.version_info >= (3,)\n # Not Jython\n and not sys.platform.startswith(\"java\")\n # Not PyPy\n and \"__pypy__\" not in sys.builtin_module_names\n # Not explicitly disabled\n and (os.environ.get(\"SCOUT_DISABLE_EXTENSIONS\", \"\") == \"\")\n)\nif compile_extensions:\n ext_modules = [\n Extension(\n name=str(\"scout_apm.core._objtrace\"),\n sources=[str(\"src/scout_apm/core/_objtrace.c\")],\n optional=True,\n )\n ]\nelse:\n ext_modules = []\n\nsetup(\n name=\"scout_apm\",\n version=\"2.16.2\",\n description=\"Scout Application Performance Monitoring Agent\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/scoutapp/scout_apm_python\",\n project_urls={\n \"Documentation\": \"https://docs.scoutapm.com/#python-agent\",\n \"Changelog\": (\n \"https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md\"\n ),\n },\n author=\"Scout\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n zip_safe=False,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4\",\n packages=packages,\n package_dir={str(\"\"): str(\"src\")},\n ext_modules=ext_modules,\n entry_points={\n \"console_scripts\": [\n \"core-agent-manager = scout_apm.core.cli.core_agent_manager:main\"\n ]\n },\n install_requires=[\n 'asgiref ; python_version >= \"3.5\"',\n 'importlib-metadata ; python_version < \"3.8\"',\n \"psutil>=5,<6\",\n 'urllib3[secure] < 1.25 ; python_version < \"3.5\"',\n 'urllib3[secure] < 2 ; python_version >= \"3.5\"',\n \"wrapt>=1.10,<2.0\",\n ],\n keywords=\"apm performance monitoring development\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Bottle\",\n \"Framework :: Django\",\n \"Framework :: Django :: 1.8\",\n \"Framework :: Django :: 1.9\",\n \"Framework :: Django :: 1.10\",\n \"Framework :: Django :: 1.11\",\n \"Framework :: Django :: 2.0\",\n \"Framework :: Django :: 2.1\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Flask\",\n \"Framework :: Pyramid\",\n \"Intended Audience :: Developers\",\n \"Topic :: System :: Monitoring\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n)\n", "path": "setup.py"}]}
| 1,387 | 84 |
gh_patches_debug_37571
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-4133
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
What should `pip install dbt` do after v1?
Split off from https://github.com/dbt-labs/dbt-core/issues/3968
Everyone will be able and encouraged to do the following after v1:
- `pip install dbt-core`
- `pip install dbt-<adapter>`
The big question is, what to do with the PyPi package named `dbt`, defined in this repo as a thing _separate_ from `dbt-core`? Note that this is just a cosmetic consideration—it shouldn't change anything about how we're packaging/distributing the underlying components—we just know that a lot of people are still using `pip install dbt`. Starting in v1, this could:
1. **Full backwards compatibility, with warning:** Raise a deprecation warning, then install `dbt-core`, `dbt-postgres`, `dbt-redshift`, `dbt-bigquery`. This might be tricky if we've released a newer patch/prerelease of dbt-core than the others, but I think our compatibility operator (`~=`) could work here
2. **Very limited backwards compatibility, with warning**: Raise a deprecation warning, then install only the code contained in this repository (`dbt-core` + `dbt-postgres`, or `dbt-core` only), knowing that most people will encounter errors and have to switch
3. **Raise an explicit error:** If all of that is too tricky to figure out, we should keep it simple and just raise a good old-fashioned error message when someone tries `pip install dbt` (unqualified) or `pip install dbt==1.0.0`: "Going forward, you must install `dbt-<adapter>`"
I'm leaning toward the first option right now. I'm very open to other opinions.
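For concreteness, here is a minimal sketch of what option 1 could look like in the `dbt` metapackage's `setup.py`, assuming we keep `install_requires` and pin with the `~=` operator so that a newer patch release of `dbt-core` still resolves; the warning text and the exact adapter list below are illustrative, not final:

```python
# Sketch of option 1 only -- illustrative, not the final implementation.
from setuptools import setup

package_version = "1.0.0"

print(
    "WARNING: `pip install dbt` is deprecated. "
    "Please install dbt-core and a dbt-<adapter> package instead."
)

setup(
    name="dbt",
    version=package_version,
    packages=[],
    install_requires=[
        # `~=1.0.0` allows 1.0.x patch releases, e.g. dbt-core 1.0.1 alongside
        # dbt-postgres 1.0.0, which addresses the version-skew concern above.
        "dbt-core~={}".format(package_version),
        "dbt-postgres~={}".format(package_version),
        "dbt-redshift~={}".format(package_version),
        "dbt-bigquery~={}".format(package_version),
    ],
)
```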
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/postgres/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 6):
6 print('Error: dbt does not support this version of Python.')
7 print('Please upgrade to Python 3.6 or higher.')
8 sys.exit(1)
9
10
11 from setuptools import setup
12 try:
13 from setuptools import find_namespace_packages
14 except ImportError:
15 # the user has a downlevel version of setuptools.
16 print('Error: dbt requires setuptools v40.1.0 or higher.')
17 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
18 'and try again')
19 sys.exit(1)
20
21
22 PSYCOPG2_MESSAGE = '''
23 No package name override was set.
24 Using 'psycopg2-binary' package to satisfy 'psycopg2'
25
26 If you experience segmentation faults, silent crashes, or installation errors,
27 consider retrying with the 'DBT_PSYCOPG2_NAME' environment variable set to
28 'psycopg2'. It may require a compiler toolchain and development libraries!
29 '''.strip()
30
31
32 def _dbt_psycopg2_name():
33 # if the user chose something, use that
34 package_name = os.getenv('DBT_PSYCOPG2_NAME', '')
35 if package_name:
36 return package_name
37
38 # default to psycopg2-binary for all OSes/versions
39 print(PSYCOPG2_MESSAGE)
40 return 'psycopg2-binary'
41
42
43 package_name = "dbt-postgres"
44 package_version = "1.0.0b2"
45 description = """The postgres adpter plugin for dbt (data build tool)"""
46
47 this_directory = os.path.abspath(os.path.dirname(__file__))
48 with open(os.path.join(this_directory, 'README.md')) as f:
49 long_description = f.read()
50
51 DBT_PSYCOPG2_NAME = _dbt_psycopg2_name()
52
53 setup(
54 name=package_name,
55 version=package_version,
56 description=description,
57 long_description=description,
58 long_description_content_type='text/markdown',
59 author="dbt Labs",
60 author_email="[email protected]",
61 url="https://github.com/dbt-labs/dbt-core",
62 packages=find_namespace_packages(include=['dbt', 'dbt.*']),
63 package_data={
64 'dbt': [
65 'include/postgres/dbt_project.yml',
66 'include/postgres/sample_profiles.yml',
67 'include/postgres/macros/*.sql',
68 'include/postgres/macros/**/*.sql',
69 ]
70 },
71 install_requires=[
72 'dbt-core=={}'.format(package_version),
73 '{}~=2.8'.format(DBT_PSYCOPG2_NAME),
74 ],
75 zip_safe=False,
76 classifiers=[
77 'Development Status :: 5 - Production/Stable',
78
79 'License :: OSI Approved :: Apache Software License',
80
81 'Operating System :: Microsoft :: Windows',
82 'Operating System :: MacOS :: MacOS X',
83 'Operating System :: POSIX :: Linux',
84
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Programming Language :: Python :: 3.8',
88 'Programming Language :: Python :: 3.9',
89 ],
90 python_requires=">=3.6.2",
91 )
92
```
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 6):
6 print('Error: dbt does not support this version of Python.')
7 print('Please upgrade to Python 3.6 or higher.')
8 sys.exit(1)
9
10
11 from setuptools import setup
12 try:
13 from setuptools import find_namespace_packages
14 except ImportError:
15 # the user has a downlevel version of setuptools.
16 print('Error: dbt requires setuptools v40.1.0 or higher.')
17 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
18 'and try again')
19 sys.exit(1)
20
21 this_directory = os.path.abspath(os.path.dirname(__file__))
22 with open(os.path.join(this_directory, 'README.md')) as f:
23 long_description = f.read()
24
25
26 package_name = "dbt"
27 package_version = "1.0.0b2"
28 description = """With dbt, data analysts and engineers can build analytics \
29 the way engineers build applications."""
30
31
32 setup(
33 name=package_name,
34 version=package_version,
35
36 description=description,
37 long_description=long_description,
38 long_description_content_type='text/markdown',
39
40 author="dbt Labs",
41 author_email="[email protected]",
42 url="https://github.com/dbt-labs/dbt-core",
43 packages=[],
44 install_requires=[
45 'dbt-core=={}'.format(package_version),
46 'dbt-postgres=={}'.format(package_version),
47 ],
48 zip_safe=False,
49 classifiers=[
50 'Development Status :: 5 - Production/Stable',
51
52 'License :: OSI Approved :: Apache Software License',
53
54 'Operating System :: Microsoft :: Windows',
55 'Operating System :: MacOS :: MacOS X',
56 'Operating System :: POSIX :: Linux',
57
58 'Programming Language :: Python :: 3.6',
59 'Programming Language :: Python :: 3.7',
60 'Programming Language :: Python :: 3.8',
61 'Programming Language :: Python :: 3.9',
62 ],
63 python_requires=">=3.6.2",
64 )
65
```
Path: `core/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 6):
6 print('Error: dbt does not support this version of Python.')
7 print('Please upgrade to Python 3.6 or higher.')
8 sys.exit(1)
9
10
11 from setuptools import setup
12 try:
13 from setuptools import find_namespace_packages
14 except ImportError:
15 # the user has a downlevel version of setuptools.
16 print('Error: dbt requires setuptools v40.1.0 or higher.')
17 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
18 'and try again')
19 sys.exit(1)
20
21
22 def read(fname):
23 return open(os.path.join(os.path.dirname(__file__), fname)).read()
24
25
26 package_name = "dbt-core"
27 package_version = "1.0.0b2"
28 description = """dbt (data build tool) is a command line tool that helps \
29 analysts and engineers transform data in their warehouse more effectively"""
30
31
32 setup(
33 name=package_name,
34 version=package_version,
35 description=description,
36 long_description=description,
37 author="dbt Labs",
38 author_email="[email protected]",
39 url="https://github.com/dbt-labs/dbt-core",
40 packages=find_namespace_packages(include=['dbt', 'dbt.*']),
41 include_package_data = True,
42 test_suite='test',
43 entry_points={
44 'console_scripts': [
45 'dbt = dbt.main:main',
46 ],
47 },
48 scripts=[
49 'scripts/dbt',
50 ],
51 install_requires=[
52 'Jinja2==2.11.3',
53 'agate>=1.6,<1.6.2',
54 'click>=8,<9',
55 'colorama>=0.3.9,<0.4.5',
56 'dataclasses>=0.6,<0.9;python_version<"3.7"',
57 'hologram==0.0.14',
58 'isodate>=0.6,<0.7',
59 'logbook>=1.5,<1.6',
60 'mashumaro==2.5',
61 'minimal-snowplow-tracker==0.0.2',
62 'networkx>=2.3,<3',
63 'packaging>=20.9,<22.0',
64 'sqlparse>=0.2.3,<0.5',
65 'dbt-extractor==0.4.0',
66 'typing-extensions>=3.7.4,<3.11',
67 'werkzeug>=1,<3',
68 # the following are all to match snowflake-connector-python
69 'requests<3.0.0',
70 'idna>=2.5,<4',
71 'cffi>=1.9,<2.0.0',
72 ],
73 zip_safe=False,
74 classifiers=[
75 'Development Status :: 5 - Production/Stable',
76
77 'License :: OSI Approved :: Apache Software License',
78
79 'Operating System :: Microsoft :: Windows',
80 'Operating System :: MacOS :: MacOS X',
81 'Operating System :: POSIX :: Linux',
82
83 'Programming Language :: Python :: 3.6',
84 'Programming Language :: Python :: 3.7',
85 'Programming Language :: Python :: 3.8',
86 'Programming Language :: Python :: 3.9',
87 ],
88 python_requires=">=3.6.3",
89 )
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -19,21 +19,23 @@
sys.exit(1)
-def read(fname):
- return open(os.path.join(os.path.dirname(__file__), fname)).read()
+this_directory = os.path.abspath(os.path.dirname(__file__))
+with open(os.path.join(this_directory, 'README.md')) as f:
+ long_description = f.read()
package_name = "dbt-core"
package_version = "1.0.0b2"
-description = """dbt (data build tool) is a command line tool that helps \
-analysts and engineers transform data in their warehouse more effectively"""
+description = """With dbt, data analysts and engineers can build analytics \
+the way engineers build applications."""
setup(
name=package_name,
version=package_version,
description=description,
- long_description=description,
+ long_description=long_description,
+ long_description_content_type='text/markdown',
author="dbt Labs",
author_email="[email protected]",
url="https://github.com/dbt-labs/dbt-core",
diff --git a/plugins/postgres/setup.py b/plugins/postgres/setup.py
--- a/plugins/postgres/setup.py
+++ b/plugins/postgres/setup.py
@@ -54,7 +54,7 @@
name=package_name,
version=package_version,
description=description,
- long_description=description,
+ long_description=long_description,
long_description_content_type='text/markdown',
author="dbt Labs",
author_email="[email protected]",
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -2,6 +2,18 @@
import os
import sys
+
+if 'sdist' not in sys.argv:
+ print('')
+ print('As of v1.0.0, `pip install dbt` is no longer supported.')
+ print('Instead, please use either:')
+ print(' - `pip install dbt-core`, for core functionality')
+ print(' - `pip install dbt-<adapter>`, to use dbt with your database, platform, or query engine')
+ print('See full list: https://docs.getdbt.com/docs/available-adapters')
+ print('')
+ sys.exit(1)
+
+
if sys.version_info < (3, 6):
print('Error: dbt does not support this version of Python.')
print('Please upgrade to Python 3.6 or higher.')
@@ -40,14 +52,9 @@
author="dbt Labs",
author_email="[email protected]",
url="https://github.com/dbt-labs/dbt-core",
- packages=[],
- install_requires=[
- 'dbt-core=={}'.format(package_version),
- 'dbt-postgres=={}'.format(package_version),
- ],
zip_safe=False,
classifiers=[
- 'Development Status :: 5 - Production/Stable',
+ 'Development Status :: 7 - Inactive',
'License :: OSI Approved :: Apache Software License',
|
{"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -19,21 +19,23 @@\n sys.exit(1)\n \n \n-def read(fname):\n- return open(os.path.join(os.path.dirname(__file__), fname)).read()\n+this_directory = os.path.abspath(os.path.dirname(__file__))\n+with open(os.path.join(this_directory, 'README.md')) as f:\n+ long_description = f.read()\n \n \n package_name = \"dbt-core\"\n package_version = \"1.0.0b2\"\n-description = \"\"\"dbt (data build tool) is a command line tool that helps \\\n-analysts and engineers transform data in their warehouse more effectively\"\"\"\n+description = \"\"\"With dbt, data analysts and engineers can build analytics \\\n+the way engineers build applications.\"\"\"\n \n \n setup(\n name=package_name,\n version=package_version,\n description=description,\n- long_description=description,\n+ long_description=long_description,\n+ long_description_content_type='text/markdown',\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\ndiff --git a/plugins/postgres/setup.py b/plugins/postgres/setup.py\n--- a/plugins/postgres/setup.py\n+++ b/plugins/postgres/setup.py\n@@ -54,7 +54,7 @@\n name=package_name,\n version=package_version,\n description=description,\n- long_description=description,\n+ long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -2,6 +2,18 @@\n import os\n import sys\n \n+\n+if 'sdist' not in sys.argv:\n+ print('')\n+ print('As of v1.0.0, `pip install dbt` is no longer supported.')\n+ print('Instead, please use either:')\n+ print(' - `pip install dbt-core`, for core functionality')\n+ print(' - `pip install dbt-<adapter>`, to use dbt with your database, platform, or query engine')\n+ print('See full list: https://docs.getdbt.com/docs/available-adapters')\n+ print('')\n+ sys.exit(1)\n+\n+\n if sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n@@ -40,14 +52,9 @@\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n- packages=[],\n- install_requires=[\n- 'dbt-core=={}'.format(package_version),\n- 'dbt-postgres=={}'.format(package_version),\n- ],\n zip_safe=False,\n classifiers=[\n- 'Development Status :: 5 - Production/Stable',\n+ 'Development Status :: 7 - Inactive',\n \n 'License :: OSI Approved :: Apache Software License',\n", "issue": "What should `pip install dbt` do after v1?\nSplit off from https://github.com/dbt-labs/dbt-core/issues/3968\r\n\r\nEveryone will be able and encouraged to do the following after v1:\r\n- `pip install dbt-core`\r\n- `pip install dbt-<adapter>`\r\n\r\nThe big question is, what to do with the PyPi package named `dbt`, defined in this repo as a thing _separate_ from `dbt-core`? Note that this is just a cosmetic consideration\u2014it shouldn't change anything about how we're packaging/distributing the underlying components\u2014we just know that a lot of people are still using `pip install dbt`. Starting in v1, this could:\r\n\r\n1. **Full backwards compatibility, with warning:** Raise a deprecation warning, then install `dbt-core`, `dbt-postgres`, `dbt-redshift`, `dbt-bigquery`. 
This might be tricky if we've released a newer patch/prerelease of dbt-core than the others, but I think our compatibility operator (`~=`) could work here\r\n2. **Very limited backwards compatibility, with warning**: Raise a deprecation warning, then install only the code contained in this repository (`dbt-core` + `dbt-postgres`, or `dbt-core` only), knowing that most people will encounter errors and have to switch)\r\n3. **Raise an explicit error:** If all of that is too tricky to figure out, we should keep it simple and just raise a good old fashioned error message when someone tries `pip install dbt` (unqualified) or `pip install dbt==1.0.0`: \"Going forward, you must install dbt-<adapter`\"\r\n\r\nI'm leaning toward the first option right now. I'm very open to other opinions.\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\nPSYCOPG2_MESSAGE = '''\nNo package name override was set.\nUsing 'psycopg2-binary' package to satisfy 'psycopg2'\n\nIf you experience segmentation faults, silent crashes, or installation errors,\nconsider retrying with the 'DBT_PSYCOPG2_NAME' environment variable set to\n'psycopg2'. It may require a compiler toolchain and development libraries!\n'''.strip()\n\n\ndef _dbt_psycopg2_name():\n # if the user chose something, use that\n package_name = os.getenv('DBT_PSYCOPG2_NAME', '')\n if package_name:\n return package_name\n\n # default to psycopg2-binary for all OSes/versions\n print(PSYCOPG2_MESSAGE)\n return 'psycopg2-binary'\n\n\npackage_name = \"dbt-postgres\"\npackage_version = \"1.0.0b2\"\ndescription = \"\"\"The postgres adpter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nDBT_PSYCOPG2_NAME = _dbt_psycopg2_name()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n long_description_content_type='text/markdown',\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/postgres/dbt_project.yml',\n 'include/postgres/sample_profiles.yml',\n 'include/postgres/macros/*.sql',\n 'include/postgres/macros/**/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n '{}~=2.8'.format(DBT_PSYCOPG2_NAME),\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "plugins/postgres/setup.py"}, {"content": "#!/usr/bin/env python\nimport 
os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt\"\npackage_version = \"1.0.0b2\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=[],\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n 'dbt-postgres=={}'.format(package_version),\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "setup.py"}, {"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\ndef read(fname):\n return open(os.path.join(os.path.dirname(__file__), fname)).read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.0.0b2\"\ndescription = \"\"\"dbt (data build tool) is a command line tool that helps \\\nanalysts and engineers transform data in their warehouse more effectively\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n include_package_data = True,\n test_suite='test',\n entry_points={\n 'console_scripts': [\n 'dbt = dbt.main:main',\n ],\n },\n scripts=[\n 'scripts/dbt',\n ],\n install_requires=[\n 'Jinja2==2.11.3',\n 'agate>=1.6,<1.6.2',\n 'click>=8,<9',\n 'colorama>=0.3.9,<0.4.5',\n 'dataclasses>=0.6,<0.9;python_version<\"3.7\"',\n 'hologram==0.0.14',\n 'isodate>=0.6,<0.7',\n 'logbook>=1.5,<1.6',\n 'mashumaro==2.5',\n 'minimal-snowplow-tracker==0.0.2',\n 'networkx>=2.3,<3',\n 'packaging>=20.9,<22.0',\n 'sqlparse>=0.2.3,<0.5',\n 'dbt-extractor==0.4.0',\n 'typing-extensions>=3.7.4,<3.11',\n 'werkzeug>=1,<3',\n # the following 
are all to match snowflake-connector-python\n 'requests<3.0.0',\n 'idna>=2.5,<4',\n 'cffi>=1.9,<2.0.0',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.3\",\n)\n", "path": "core/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\nPSYCOPG2_MESSAGE = '''\nNo package name override was set.\nUsing 'psycopg2-binary' package to satisfy 'psycopg2'\n\nIf you experience segmentation faults, silent crashes, or installation errors,\nconsider retrying with the 'DBT_PSYCOPG2_NAME' environment variable set to\n'psycopg2'. It may require a compiler toolchain and development libraries!\n'''.strip()\n\n\ndef _dbt_psycopg2_name():\n # if the user chose something, use that\n package_name = os.getenv('DBT_PSYCOPG2_NAME', '')\n if package_name:\n return package_name\n\n # default to psycopg2-binary for all OSes/versions\n print(PSYCOPG2_MESSAGE)\n return 'psycopg2-binary'\n\n\npackage_name = \"dbt-postgres\"\npackage_version = \"1.0.0b2\"\ndescription = \"\"\"The postgres adpter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nDBT_PSYCOPG2_NAME = _dbt_psycopg2_name()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/postgres/dbt_project.yml',\n 'include/postgres/sample_profiles.yml',\n 'include/postgres/macros/*.sql',\n 'include/postgres/macros/**/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n '{}~=2.8'.format(DBT_PSYCOPG2_NAME),\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "plugins/postgres/setup.py"}, {"content": "#!/usr/bin/env python\nimport os\nimport sys\n\n\nif 'sdist' not in sys.argv:\n print('')\n print('As of v1.0.0, `pip install dbt` is no longer supported.')\n print('Instead, please use either:')\n 
print(' - `pip install dbt-core`, for core functionality')\n print(' - `pip install dbt-<adapter>`, to use dbt with your database, platform, or query engine')\n print('See full list: https://docs.getdbt.com/docs/available-adapters')\n print('')\n sys.exit(1)\n\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt\"\npackage_version = \"1.0.0b2\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n zip_safe=False,\n classifiers=[\n 'Development Status :: 7 - Inactive',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "setup.py"}, {"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.0.0b2\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n include_package_data = True,\n test_suite='test',\n entry_points={\n 'console_scripts': [\n 'dbt = dbt.main:main',\n ],\n },\n scripts=[\n 'scripts/dbt',\n ],\n install_requires=[\n 'Jinja2==2.11.3',\n 'agate>=1.6,<1.6.2',\n 'click>=8,<9',\n 'colorama>=0.3.9,<0.4.5',\n 'dataclasses>=0.6,<0.9;python_version<\"3.7\"',\n 'hologram==0.0.14',\n 'isodate>=0.6,<0.7',\n 'logbook>=1.5,<1.6',\n 'mashumaro==2.5',\n 
'minimal-snowplow-tracker==0.0.2',\n 'networkx>=2.3,<3',\n 'packaging>=20.9,<22.0',\n 'sqlparse>=0.2.3,<0.5',\n 'dbt-extractor==0.4.0',\n 'typing-extensions>=3.7.4,<3.11',\n 'werkzeug>=1,<3',\n # the following are all to match snowflake-connector-python\n 'requests<3.0.0',\n 'idna>=2.5,<4',\n 'cffi>=1.9,<2.0.0',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.3\",\n)\n", "path": "core/setup.py"}]}
| 3,007 | 698 |
gh_patches_debug_34386
|
rasdani/github-patches
|
git_diff
|
optuna__optuna-1678
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use function annotation syntax for Type Hints.
After dropping Python 2.7 support at #710, we can define type hints with function annotation syntax.
~~Do you have a plan to update the coding style guideline?~~
https://github.com/optuna/optuna/wiki/Coding-Style-Conventions
## Progress
- [x] `optuna/integration/sklearn.py` (#1735)
- [x] `optuna/study.py` - assigned to harpy
## Note to the questioner
We still cannot use variable annotation syntax introduced by [PEP 526](https://www.python.org/dev/peps/pep-0526/) because we still support Python 3.5.
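To make the change concrete, here is a before/after sketch of a constructor written both ways; the class is a generic stand-in rather than code from any particular Optuna module:

```python
from typing import Optional


class ExampleSampler:
    # Before: comment-style hints, needed while Python 2.7 was supported.
    #
    #     def __init__(self, seed=None):
    #         # type: (Optional[int]) -> None
    #         self._seed = seed
    #
    # After: function annotation syntax (PEP 3107 / PEP 484).
    def __init__(self, seed: Optional[int] = None) -> None:
        self._seed = seed
```

Variable annotations such as `self._seed: Optional[int] = seed` stay out of scope until Python 3.5 support is dropped.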
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optuna/samplers/_random.py`
Content:
```
1 import numpy
2
3 from optuna import distributions
4 from optuna.samplers import BaseSampler
5 from optuna import type_checking
6
7 if type_checking.TYPE_CHECKING:
8 from typing import Any # NOQA
9 from typing import Dict # NOQA
10 from typing import Optional # NOQA
11
12 from optuna.distributions import BaseDistribution # NOQA
13 from optuna.study import Study # NOQA
14 from optuna.trial import FrozenTrial # NOQA
15
16
17 class RandomSampler(BaseSampler):
18 """Sampler using random sampling.
19
20 This sampler is based on *independent sampling*.
21 See also :class:`~optuna.samplers.BaseSampler` for more details of 'independent sampling'.
22
23 Example:
24
25 .. testcode::
26
27 import optuna
28 from optuna.samplers import RandomSampler
29
30 def objective(trial):
31 x = trial.suggest_uniform('x', -5, 5)
32 return x**2
33
34 study = optuna.create_study(sampler=RandomSampler())
35 study.optimize(objective, n_trials=10)
36
37 Args:
38 seed: Seed for random number generator.
39 """
40
41 def __init__(self, seed=None):
42 # type: (Optional[int]) -> None
43
44 self._rng = numpy.random.RandomState(seed)
45
46 def reseed_rng(self) -> None:
47
48 self._rng = numpy.random.RandomState()
49
50 def infer_relative_search_space(self, study, trial):
51 # type: (Study, FrozenTrial) -> Dict[str, BaseDistribution]
52
53 return {}
54
55 def sample_relative(self, study, trial, search_space):
56 # type: (Study, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]
57
58 return {}
59
60 def sample_independent(self, study, trial, param_name, param_distribution):
61 # type: (Study, FrozenTrial, str, distributions.BaseDistribution) -> Any
62
63 if isinstance(param_distribution, distributions.UniformDistribution):
64 return self._rng.uniform(param_distribution.low, param_distribution.high)
65 elif isinstance(param_distribution, distributions.LogUniformDistribution):
66 log_low = numpy.log(param_distribution.low)
67 log_high = numpy.log(param_distribution.high)
68 return float(numpy.exp(self._rng.uniform(log_low, log_high)))
69 elif isinstance(param_distribution, distributions.DiscreteUniformDistribution):
70 q = param_distribution.q
71 r = param_distribution.high - param_distribution.low
72 # [low, high] is shifted to [0, r] to align sampled values at regular intervals.
73 low = 0 - 0.5 * q
74 high = r + 0.5 * q
75 s = self._rng.uniform(low, high)
76 v = numpy.round(s / q) * q + param_distribution.low
77 # v may slightly exceed range due to round-off errors.
78 return float(min(max(v, param_distribution.low), param_distribution.high))
79 elif isinstance(param_distribution, distributions.IntUniformDistribution):
80 # [low, high] is shifted to [0, r] to align sampled values at regular intervals.
81 r = (param_distribution.high - param_distribution.low) / param_distribution.step
82 # numpy.random.randint includes low but excludes high.
83 s = self._rng.randint(0, r + 1)
84 v = s * param_distribution.step + param_distribution.low
85 return int(v)
86 elif isinstance(param_distribution, distributions.IntLogUniformDistribution):
87 log_low = numpy.log(param_distribution.low - 0.5)
88 log_high = numpy.log(param_distribution.high + 0.5)
89 s = numpy.exp(self._rng.uniform(log_low, log_high))
90 v = numpy.round(s)
91 return int(min(max(v, param_distribution.low), param_distribution.high))
92 elif isinstance(param_distribution, distributions.CategoricalDistribution):
93 choices = param_distribution.choices
94 index = self._rng.randint(0, len(choices))
95 return choices[index]
96 else:
97 raise NotImplementedError
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/optuna/samplers/_random.py b/optuna/samplers/_random.py
--- a/optuna/samplers/_random.py
+++ b/optuna/samplers/_random.py
@@ -1,17 +1,14 @@
+from typing import Any
+from typing import Dict
+from typing import Optional
+
import numpy
from optuna import distributions
+from optuna.distributions import BaseDistribution
from optuna.samplers import BaseSampler
-from optuna import type_checking
-
-if type_checking.TYPE_CHECKING:
- from typing import Any # NOQA
- from typing import Dict # NOQA
- from typing import Optional # NOQA
-
- from optuna.distributions import BaseDistribution # NOQA
- from optuna.study import Study # NOQA
- from optuna.trial import FrozenTrial # NOQA
+from optuna.study import Study
+from optuna.trial import FrozenTrial
class RandomSampler(BaseSampler):
@@ -38,8 +35,7 @@
seed: Seed for random number generator.
"""
- def __init__(self, seed=None):
- # type: (Optional[int]) -> None
+ def __init__(self, seed: Optional[int] = None) -> None:
self._rng = numpy.random.RandomState(seed)
@@ -47,18 +43,25 @@
self._rng = numpy.random.RandomState()
- def infer_relative_search_space(self, study, trial):
- # type: (Study, FrozenTrial) -> Dict[str, BaseDistribution]
+ def infer_relative_search_space(
+ self, study: Study, trial: FrozenTrial
+ ) -> Dict[str, BaseDistribution]:
return {}
- def sample_relative(self, study, trial, search_space):
- # type: (Study, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]
+ def sample_relative(
+ self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]
+ ) -> Dict[str, Any]:
return {}
- def sample_independent(self, study, trial, param_name, param_distribution):
- # type: (Study, FrozenTrial, str, distributions.BaseDistribution) -> Any
+ def sample_independent(
+ self,
+ study: Study,
+ trial: FrozenTrial,
+ param_name: str,
+ param_distribution: distributions.BaseDistribution,
+ ) -> Any:
if isinstance(param_distribution, distributions.UniformDistribution):
return self._rng.uniform(param_distribution.low, param_distribution.high)
|
{"golden_diff": "diff --git a/optuna/samplers/_random.py b/optuna/samplers/_random.py\n--- a/optuna/samplers/_random.py\n+++ b/optuna/samplers/_random.py\n@@ -1,17 +1,14 @@\n+from typing import Any\n+from typing import Dict\n+from typing import Optional\n+\n import numpy\n \n from optuna import distributions\n+from optuna.distributions import BaseDistribution\n from optuna.samplers import BaseSampler\n-from optuna import type_checking\n-\n-if type_checking.TYPE_CHECKING:\n- from typing import Any # NOQA\n- from typing import Dict # NOQA\n- from typing import Optional # NOQA\n-\n- from optuna.distributions import BaseDistribution # NOQA\n- from optuna.study import Study # NOQA\n- from optuna.trial import FrozenTrial # NOQA\n+from optuna.study import Study\n+from optuna.trial import FrozenTrial\n \n \n class RandomSampler(BaseSampler):\n@@ -38,8 +35,7 @@\n seed: Seed for random number generator.\n \"\"\"\n \n- def __init__(self, seed=None):\n- # type: (Optional[int]) -> None\n+ def __init__(self, seed: Optional[int] = None) -> None:\n \n self._rng = numpy.random.RandomState(seed)\n \n@@ -47,18 +43,25 @@\n \n self._rng = numpy.random.RandomState()\n \n- def infer_relative_search_space(self, study, trial):\n- # type: (Study, FrozenTrial) -> Dict[str, BaseDistribution]\n+ def infer_relative_search_space(\n+ self, study: Study, trial: FrozenTrial\n+ ) -> Dict[str, BaseDistribution]:\n \n return {}\n \n- def sample_relative(self, study, trial, search_space):\n- # type: (Study, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]\n+ def sample_relative(\n+ self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]\n+ ) -> Dict[str, Any]:\n \n return {}\n \n- def sample_independent(self, study, trial, param_name, param_distribution):\n- # type: (Study, FrozenTrial, str, distributions.BaseDistribution) -> Any\n+ def sample_independent(\n+ self,\n+ study: Study,\n+ trial: FrozenTrial,\n+ param_name: str,\n+ param_distribution: distributions.BaseDistribution,\n+ ) -> Any:\n \n if isinstance(param_distribution, distributions.UniformDistribution):\n return self._rng.uniform(param_distribution.low, param_distribution.high)\n", "issue": "Use function annotation syntax for Type Hints.\nAfter dropping Python 2.7 support at #710, we can define type hints with function annotation syntax. \r\n~~Do you have a plan to update the coding style guideline?~~\r\nhttps://github.com/optuna/optuna/wiki/Coding-Style-Conventions\r\n\r\n## Progress\r\n\r\n- [x] `optuna/integration/sklearn.py` (#1735)\r\n- [x] `optuna/study.py` - assigned to harpy\r\n\r\n## Note to the questioner\r\n\r\nWe still cannot use variable annotation syntax introduced by [PEP 526](https://www.python.org/dev/peps/pep-0526/) because we supports Python 3.5.\n", "before_files": [{"content": "import numpy\n\nfrom optuna import distributions\nfrom optuna.samplers import BaseSampler\nfrom optuna import type_checking\n\nif type_checking.TYPE_CHECKING:\n from typing import Any # NOQA\n from typing import Dict # NOQA\n from typing import Optional # NOQA\n\n from optuna.distributions import BaseDistribution # NOQA\n from optuna.study import Study # NOQA\n from optuna.trial import FrozenTrial # NOQA\n\n\nclass RandomSampler(BaseSampler):\n \"\"\"Sampler using random sampling.\n\n This sampler is based on *independent sampling*.\n See also :class:`~optuna.samplers.BaseSampler` for more details of 'independent sampling'.\n\n Example:\n\n .. 
testcode::\n\n import optuna\n from optuna.samplers import RandomSampler\n\n def objective(trial):\n x = trial.suggest_uniform('x', -5, 5)\n return x**2\n\n study = optuna.create_study(sampler=RandomSampler())\n study.optimize(objective, n_trials=10)\n\n Args:\n seed: Seed for random number generator.\n \"\"\"\n\n def __init__(self, seed=None):\n # type: (Optional[int]) -> None\n\n self._rng = numpy.random.RandomState(seed)\n\n def reseed_rng(self) -> None:\n\n self._rng = numpy.random.RandomState()\n\n def infer_relative_search_space(self, study, trial):\n # type: (Study, FrozenTrial) -> Dict[str, BaseDistribution]\n\n return {}\n\n def sample_relative(self, study, trial, search_space):\n # type: (Study, FrozenTrial, Dict[str, BaseDistribution]) -> Dict[str, Any]\n\n return {}\n\n def sample_independent(self, study, trial, param_name, param_distribution):\n # type: (Study, FrozenTrial, str, distributions.BaseDistribution) -> Any\n\n if isinstance(param_distribution, distributions.UniformDistribution):\n return self._rng.uniform(param_distribution.low, param_distribution.high)\n elif isinstance(param_distribution, distributions.LogUniformDistribution):\n log_low = numpy.log(param_distribution.low)\n log_high = numpy.log(param_distribution.high)\n return float(numpy.exp(self._rng.uniform(log_low, log_high)))\n elif isinstance(param_distribution, distributions.DiscreteUniformDistribution):\n q = param_distribution.q\n r = param_distribution.high - param_distribution.low\n # [low, high] is shifted to [0, r] to align sampled values at regular intervals.\n low = 0 - 0.5 * q\n high = r + 0.5 * q\n s = self._rng.uniform(low, high)\n v = numpy.round(s / q) * q + param_distribution.low\n # v may slightly exceed range due to round-off errors.\n return float(min(max(v, param_distribution.low), param_distribution.high))\n elif isinstance(param_distribution, distributions.IntUniformDistribution):\n # [low, high] is shifted to [0, r] to align sampled values at regular intervals.\n r = (param_distribution.high - param_distribution.low) / param_distribution.step\n # numpy.random.randint includes low but excludes high.\n s = self._rng.randint(0, r + 1)\n v = s * param_distribution.step + param_distribution.low\n return int(v)\n elif isinstance(param_distribution, distributions.IntLogUniformDistribution):\n log_low = numpy.log(param_distribution.low - 0.5)\n log_high = numpy.log(param_distribution.high + 0.5)\n s = numpy.exp(self._rng.uniform(log_low, log_high))\n v = numpy.round(s)\n return int(min(max(v, param_distribution.low), param_distribution.high))\n elif isinstance(param_distribution, distributions.CategoricalDistribution):\n choices = param_distribution.choices\n index = self._rng.randint(0, len(choices))\n return choices[index]\n else:\n raise NotImplementedError\n", "path": "optuna/samplers/_random.py"}], "after_files": [{"content": "from typing import Any\nfrom typing import Dict\nfrom typing import Optional\n\nimport numpy\n\nfrom optuna import distributions\nfrom optuna.distributions import BaseDistribution\nfrom optuna.samplers import BaseSampler\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\n\n\nclass RandomSampler(BaseSampler):\n \"\"\"Sampler using random sampling.\n\n This sampler is based on *independent sampling*.\n See also :class:`~optuna.samplers.BaseSampler` for more details of 'independent sampling'.\n\n Example:\n\n .. 
testcode::\n\n import optuna\n from optuna.samplers import RandomSampler\n\n def objective(trial):\n x = trial.suggest_uniform('x', -5, 5)\n return x**2\n\n study = optuna.create_study(sampler=RandomSampler())\n study.optimize(objective, n_trials=10)\n\n Args:\n seed: Seed for random number generator.\n \"\"\"\n\n def __init__(self, seed: Optional[int] = None) -> None:\n\n self._rng = numpy.random.RandomState(seed)\n\n def reseed_rng(self) -> None:\n\n self._rng = numpy.random.RandomState()\n\n def infer_relative_search_space(\n self, study: Study, trial: FrozenTrial\n ) -> Dict[str, BaseDistribution]:\n\n return {}\n\n def sample_relative(\n self, study: Study, trial: FrozenTrial, search_space: Dict[str, BaseDistribution]\n ) -> Dict[str, Any]:\n\n return {}\n\n def sample_independent(\n self,\n study: Study,\n trial: FrozenTrial,\n param_name: str,\n param_distribution: distributions.BaseDistribution,\n ) -> Any:\n\n if isinstance(param_distribution, distributions.UniformDistribution):\n return self._rng.uniform(param_distribution.low, param_distribution.high)\n elif isinstance(param_distribution, distributions.LogUniformDistribution):\n log_low = numpy.log(param_distribution.low)\n log_high = numpy.log(param_distribution.high)\n return float(numpy.exp(self._rng.uniform(log_low, log_high)))\n elif isinstance(param_distribution, distributions.DiscreteUniformDistribution):\n q = param_distribution.q\n r = param_distribution.high - param_distribution.low\n # [low, high] is shifted to [0, r] to align sampled values at regular intervals.\n low = 0 - 0.5 * q\n high = r + 0.5 * q\n s = self._rng.uniform(low, high)\n v = numpy.round(s / q) * q + param_distribution.low\n # v may slightly exceed range due to round-off errors.\n return float(min(max(v, param_distribution.low), param_distribution.high))\n elif isinstance(param_distribution, distributions.IntUniformDistribution):\n # [low, high] is shifted to [0, r] to align sampled values at regular intervals.\n r = (param_distribution.high - param_distribution.low) / param_distribution.step\n # numpy.random.randint includes low but excludes high.\n s = self._rng.randint(0, r + 1)\n v = s * param_distribution.step + param_distribution.low\n return int(v)\n elif isinstance(param_distribution, distributions.IntLogUniformDistribution):\n log_low = numpy.log(param_distribution.low - 0.5)\n log_high = numpy.log(param_distribution.high + 0.5)\n s = numpy.exp(self._rng.uniform(log_low, log_high))\n v = numpy.round(s)\n return int(min(max(v, param_distribution.low), param_distribution.high))\n elif isinstance(param_distribution, distributions.CategoricalDistribution):\n choices = param_distribution.choices\n index = self._rng.randint(0, len(choices))\n return choices[index]\n else:\n raise NotImplementedError\n", "path": "optuna/samplers/_random.py"}]}
| 1,447 | 583 |
gh_patches_debug_1324
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6609
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug: `meltano state list` with pattern - no such option
### Meltano Version
2.3.0
### Python Version
3.8
### Bug scope
CLI (options, error messages, logging, etc.)
### Operating System
Mac
### Description
It looks like the `--pattern` argument that's in the docs https://docs.meltano.com/reference/command-line-interface#list isn't available on the CLI.
```
(meltano) Patricks-MBP:data pnadolny$ meltano --version
meltano, version 2.3.0
(meltano) Patricks-MBP:data pnadolny$ meltano state list --pattern '*tap-gitlab*'
2022-07-25T21:31:25.438941Z [info ] Environment 'userdev' is active
Usage: meltano state list [OPTIONS] [PATTERN]
Try 'meltano state list --help' for help.
Error: No such option: --pattern
```
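
The mismatch is that the command declares `pattern` as a positional argument, while the docs describe a named `--pattern` option. A minimal, self-contained Click sketch (not the Meltano source) of the two spellings:

```python
from __future__ import annotations

import click


@click.group()
def state() -> None:
    """Stand-in for the `meltano state` command group."""


# Current behaviour (positional argument): `--pattern` is rejected.
#
#     @state.command(name="list")
#     @click.argument("pattern", required=False)
#     def list_state(pattern): ...
#
# Documented behaviour (named option): `state list --pattern '*tap-gitlab*'` works.
@state.command(name="list")
@click.option("--pattern", type=str, default=None, help="Filter state IDs by pattern.")
def list_state(pattern: str | None) -> None:
    click.echo(f"would filter state IDs with pattern={pattern!r}")


if __name__ == "__main__":
    state()
```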
### Code
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/meltano/cli/state.py`
Content:
```
1 """State management in CLI."""
2 from __future__ import annotations
3
4 import json
5 import re
6 from datetime import datetime as dt
7 from functools import partial, reduce, wraps
8 from operator import xor
9
10 import click
11 import structlog
12
13 from meltano.cli.params import pass_project
14 from meltano.core.block.parser import BlockParser
15 from meltano.core.db import project_engine
16 from meltano.core.job import Payload
17 from meltano.core.project import Project
18 from meltano.core.state_service import InvalidJobStateError, StateService
19
20 from . import cli
21 from .utils import InstrumentedCmd, InstrumentedGroup
22
23 STATE_SERVICE_KEY = "state_service"
24
25 logger = structlog.getLogger(__name__)
26
27
28 class MutuallyExclusiveOptionsError(Exception):
29 """Occurs when mutually exclusive options are provided incorrectly."""
30
31 def __init__(self, *options: str) -> None:
32 """Instantiate the error.
33
34 Args:
35 options: the mutually exclusive options that were incorrectly provided.
36 """
37 super().__init__(*options)
38 self.options = options
39
40 def __str__(self) -> str:
41 """Represent the error as a string."""
42 return f"Must provide exactly one of: {','.join(self.options)}"
43
44
45 def _prompt_for_confirmation(prompt):
46 """Wrap destructive CLI commands which should prompt the user for confirmation."""
47
48 def wrapper(func):
49 fun = click.option(
50 "--force", is_flag=True, help="Don't prompt for confirmation."
51 )(func)
52
53 @wraps(func)
54 def _wrapper(force=False, *args, **kwargs):
55 if force or click.confirm(prompt):
56 return fun(*args, **kwargs, force=force)
57 else:
58 click.secho("Aborting.", fg="red")
59
60 return _wrapper
61
62 return wrapper
63
64
65 prompt_for_confirmation = partial(
66 _prompt_for_confirmation, prompt="This is a destructive command. Continue?"
67 )
68
69
70 def state_service_from_state_id(project: Project, state_id: str) -> StateService | None:
71 """Instantiate by parsing a state_id."""
72 state_id_re = re.compile(r"^(?P<env>.+)\:(?P<tap>.+)-to-(?P<target>.+)$")
73 match = state_id_re.match(state_id)
74 if match:
75 # If the state_id matches convention (i.e., job has been run via "meltano run"),
76 # try parsing into BlockSet.
77 # This way, we get BlockSet validation and raise an error if no
78 # plugin in the BlockSet has "state" capability
79 try:
80 if not project.active_environment:
81 logger.warn(
82 f"Running state operation for environment '{match.group('env')}' outside of an environment"
83 )
84 elif project.active_environment.name != match.group("env"):
85 logger.warn(
86 f"Environment '{match.group('env')}' used in state operation does not match current environment '{project.active_environment.name}'."
87 )
88 project.activate_environment(match.group("env"))
89 blocks = [match.group("tap"), match.group("target")]
90 parser = BlockParser(logger, project, blocks)
91 return next(parser.find_blocks()).state_service
92 except Exception:
93 logger.warn("No plugins found for provided state_id.")
94 # If provided state_id does not match convention (i.e., run via "meltano elt"),
95 # use the standalone StateService in the CLI context.
96 return None
97
98
99 @cli.group(cls=InstrumentedGroup, name="state", short_help="Manage Singer state.")
100 @click.pass_context
101 @pass_project(migrate=True)
102 def meltano_state(project: Project, ctx: click.Context):
103 """
104 Manage state.
105
106 \b\nRead more at https://docs.meltano.com/reference/command-line-interface#state
107 """
108 _, sessionmaker = project_engine(project)
109 session = sessionmaker()
110 ctx.obj[STATE_SERVICE_KEY] = StateService(session) # noqa: WPS204
111
112
113 @meltano_state.command(cls=InstrumentedCmd, name="list")
114 @click.argument("pattern", required=False)
115 @click.pass_context
116 @pass_project()
117 def list_state(
118 project: Project, ctx: click.Context, pattern: str | None
119 ): # noqa: WPS125
120 """List all state_ids for this project.
121
122 Optionally pass a glob-style pattern to filter state_ids by.
123 """
124 state_service = ctx.obj[STATE_SERVICE_KEY]
125 ctx.obj["legacy_tracker"].track_meltano_state("list")
126 states = state_service.list_state(pattern)
127 if states:
128 for state_id, state in states.items():
129 if state:
130 try:
131 state_service.validate_state(json.dumps(state))
132 except (InvalidJobStateError, json.decoder.JSONDecodeError):
133 click.secho(state_id, fg="red")
134 else:
135 click.secho(state_id, fg="green")
136 else:
137 click.secho(state_id, fg="yellow")
138 else:
139 logger.info("No state IDs found.")
140
141
142 @meltano_state.command(cls=InstrumentedCmd, name="copy")
143 @prompt_for_confirmation(
144 prompt="This will overwrite state for the destination. Continue?"
145 )
146 @click.argument("src-state-id", type=str)
147 @click.argument("dst-state-id", type=str)
148 @pass_project(migrate=True)
149 @click.pass_context
150 def copy_state(
151 ctx: click.Context,
152 project: Project,
153 src_state_id: str,
154 dst_state_id: str,
155 force: bool,
156 ):
157 """Copy state to another job id."""
158 # Retrieve state for copying
159 state_service = (
160 state_service_from_state_id(project, src_state_id) or ctx.obj[STATE_SERVICE_KEY]
161 )
162 ctx.obj["legacy_tracker"].track_meltano_state("copy", dst_state_id)
163
164 state_service.copy_state(src_state_id, dst_state_id)
165
166 logger.info(
167 f"State for {dst_state_id} was successfully copied from {src_state_id} at {dt.utcnow():%Y-%m-%d %H:%M:%S}." # noqa: WPS323
168 )
169
170
171 @meltano_state.command(cls=InstrumentedCmd, name="move")
172 @prompt_for_confirmation(
173 prompt="This will clear the source state and overwrite destination state. Continue?"
174 )
175 @click.argument("src-state-id", type=str)
176 @click.argument("dst-state-id", type=str)
177 @pass_project(migrate=True)
178 @click.pass_context
179 def move_state(
180 ctx: click.Context,
181 project: Project,
182 src_state_id: str,
183 dst_state_id: str,
184 force: bool,
185 ):
186 """Move state to another job id, clearing the original."""
187 # Retrieve state for moveing
188 state_service = (
189 state_service_from_state_id(project, dst_state_id) or ctx.obj[STATE_SERVICE_KEY]
190 )
191 ctx.obj["legacy_tracker"].track_meltano_state("move", dst_state_id)
192
193 state_service.move_state(src_state_id, dst_state_id)
194
195 logger.info(
196 f"State for {src_state_id} was successfully moved to {dst_state_id} at {dt.utcnow():%Y-%m-%d %H:%M:%S}." # noqa: WPS323
197 )
198
199
200 @meltano_state.command(cls=InstrumentedCmd, name="merge")
201 @click.option(
202 "--from-state-id",
203 type=str,
204 help="Merge state from an existing state ID.",
205 )
206 @click.option(
207 "--input-file",
208 type=click.Path(exists=True),
209 help="Merge state from a JSON file containing Singer state.",
210 )
211 @click.argument("state-id", type=str)
212 @click.argument("state", type=str, required=False)
213 @pass_project(migrate=True)
214 @click.pass_context
215 def merge_state(
216 ctx: click.Context,
217 project: Project,
218 state_id: str,
219 state: str | None,
220 input_file: click.Path | None,
221 from_state_id: str | None,
222 ):
223 """Add bookmarks to existing state."""
224 state_service = (
225 state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]
226 )
227 ctx.obj["legacy_tracker"].track_meltano_state("merge", state_id)
228 mutually_exclusive_options = ["--input-file", "STATE", "--from-state-id"]
229 if not reduce(xor, map(bool, [state, input_file, from_state_id])):
230 raise MutuallyExclusiveOptionsError(*mutually_exclusive_options)
231 elif input_file:
232 with open(input_file) as state_f:
233 state_service.add_state(
234 state_id, state_f.read(), payload_flags=Payload.INCOMPLETE_STATE
235 )
236 elif state:
237 state_service.add_state(state_id, state, payload_flags=Payload.INCOMPLETE_STATE)
238 elif from_state_id:
239 state_service.merge_state(from_state_id, state_id)
240 logger.info(
241 f"State for {state_id} was successfully merged at {dt.utcnow():%Y-%m-%d %H:%M:%S}." # noqa: WPS323
242 )
243
244
245 @meltano_state.command(cls=InstrumentedCmd, name="set")
246 @prompt_for_confirmation(
247 prompt="This will overwrite the state's current value. Continue?"
248 )
249 @click.option(
250 "--input-file",
251 type=click.Path(exists=True),
252 help="Set state from json file containing Singer state.",
253 )
254 @click.argument("state-id")
255 @click.argument("state", type=str, required=False)
256 @pass_project(migrate=True)
257 @click.pass_context
258 def set_state(
259 ctx: click.Context,
260 project: Project,
261 state_id: str,
262 state: str | None,
263 input_file: click.Path | None,
264 force: bool,
265 ):
266 """Set state."""
267 state_service = (
268 state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]
269 )
270 ctx.obj["legacy_tracker"].track_meltano_state("set", state_id)
271 if not reduce(xor, map(bool, [state, input_file])):
272 raise MutuallyExclusiveOptionsError("--input-file", "STATE")
273 elif input_file:
274 with open(input_file) as state_f:
275 state_service.set_state(state_id, state_f.read())
276 elif state:
277 state_service.set_state(state_id, state)
278 logger.info(
279 f"State for {state_id} was successfully set at {dt.utcnow():%Y-%m-%d %H:%M:%S}." # noqa: WPS323
280 )
281
282
283 @meltano_state.command(cls=InstrumentedCmd, name="get") # noqa: WPS46
284 @click.argument("state-id")
285 @pass_project(migrate=True)
286 @click.pass_context
287 def get_state(ctx: click.Context, project: Project, state_id: str): # noqa: WPS463
288 """Get state."""
289 state_service = (
290 state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]
291 )
292 ctx.obj["legacy_tracker"].track_meltano_state("get", state_id)
293 retrieved_state = state_service.get_state(state_id)
294 click.echo(json.dumps(retrieved_state))
295
296
297 @meltano_state.command(cls=InstrumentedCmd, name="clear")
298 @prompt_for_confirmation(prompt="This will clear state for the job. Continue?")
299 @click.argument("state-id")
300 @pass_project(migrate=True)
301 @click.pass_context
302 def clear_state(ctx: click.Context, project: Project, state_id: str, force: bool):
303 """Clear state."""
304 state_service = (
305 state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]
306 )
307 ctx.obj["legacy_tracker"].track_meltano_state("clear", state_id)
308 state_service.clear_state(state_id)
309
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/meltano/cli/state.py b/src/meltano/cli/state.py
--- a/src/meltano/cli/state.py
+++ b/src/meltano/cli/state.py
@@ -111,7 +111,7 @@
@meltano_state.command(cls=InstrumentedCmd, name="list")
[email protected]("pattern", required=False)
[email protected]("--pattern", type=str, help="Filter state IDs by pattern.")
@click.pass_context
@pass_project()
def list_state(
|
{"golden_diff": "diff --git a/src/meltano/cli/state.py b/src/meltano/cli/state.py\n--- a/src/meltano/cli/state.py\n+++ b/src/meltano/cli/state.py\n@@ -111,7 +111,7 @@\n \n \n @meltano_state.command(cls=InstrumentedCmd, name=\"list\")\[email protected](\"pattern\", required=False)\[email protected](\"--pattern\", type=str, help=\"Filter state IDs by pattern.\")\n @click.pass_context\n @pass_project()\n def list_state(\n", "issue": "bug: `meltano state list` with pattern - no such option\n### Meltano Version\r\n\r\n2.3.0\r\n\r\n### Python Version\r\n\r\n3.8\r\n\r\n### Bug scope\r\n\r\nCLI (options, error messages, logging, etc.)\r\n\r\n### Operating System\r\n\r\nMac\r\n\r\n### Description\r\n\r\nIt looks like the `--pattern` argument thats in the docs https://docs.meltano.com/reference/command-line-interface#list isnt available on the CLI.\r\n\r\n```\r\n(meltano) Patricks-MBP:data pnadolny$ meltano --version\r\nmeltano, version 2.3.0\r\n\r\n(meltano) Patricks-MBP:data pnadolny$ meltano state list --pattern '*tap-gitlab*'\r\n2022-07-25T21:31:25.438941Z [info ] Environment 'userdev' is active\r\nUsage: meltano state list [OPTIONS] [PATTERN]\r\nTry 'meltano state list --help' for help.\r\n\r\nError: No such option: --pattern\r\n```\r\n\r\n### Code\r\n\r\n_No response_\n", "before_files": [{"content": "\"\"\"State management in CLI.\"\"\"\nfrom __future__ import annotations\n\nimport json\nimport re\nfrom datetime import datetime as dt\nfrom functools import partial, reduce, wraps\nfrom operator import xor\n\nimport click\nimport structlog\n\nfrom meltano.cli.params import pass_project\nfrom meltano.core.block.parser import BlockParser\nfrom meltano.core.db import project_engine\nfrom meltano.core.job import Payload\nfrom meltano.core.project import Project\nfrom meltano.core.state_service import InvalidJobStateError, StateService\n\nfrom . import cli\nfrom .utils import InstrumentedCmd, InstrumentedGroup\n\nSTATE_SERVICE_KEY = \"state_service\"\n\nlogger = structlog.getLogger(__name__)\n\n\nclass MutuallyExclusiveOptionsError(Exception):\n \"\"\"Occurs when mutually exclusive options are provided incorrectly.\"\"\"\n\n def __init__(self, *options: str) -> None:\n \"\"\"Instantiate the error.\n\n Args:\n options: the mutually exclusive options that were incorrectly provided.\n \"\"\"\n super().__init__(*options)\n self.options = options\n\n def __str__(self) -> str:\n \"\"\"Represent the error as a string.\"\"\"\n return f\"Must provide exactly one of: {','.join(self.options)}\"\n\n\ndef _prompt_for_confirmation(prompt):\n \"\"\"Wrap destructive CLI commands which should prompt the user for confirmation.\"\"\"\n\n def wrapper(func):\n fun = click.option(\n \"--force\", is_flag=True, help=\"Don't prompt for confirmation.\"\n )(func)\n\n @wraps(func)\n def _wrapper(force=False, *args, **kwargs):\n if force or click.confirm(prompt):\n return fun(*args, **kwargs, force=force)\n else:\n click.secho(\"Aborting.\", fg=\"red\")\n\n return _wrapper\n\n return wrapper\n\n\nprompt_for_confirmation = partial(\n _prompt_for_confirmation, prompt=\"This is a destructive command. 
Continue?\"\n)\n\n\ndef state_service_from_state_id(project: Project, state_id: str) -> StateService | None:\n \"\"\"Instantiate by parsing a state_id.\"\"\"\n state_id_re = re.compile(r\"^(?P<env>.+)\\:(?P<tap>.+)-to-(?P<target>.+)$\")\n match = state_id_re.match(state_id)\n if match:\n # If the state_id matches convention (i.e., job has been run via \"meltano run\"),\n # try parsing into BlockSet.\n # This way, we get BlockSet validation and raise an error if no\n # plugin in the BlockSet has \"state\" capability\n try:\n if not project.active_environment:\n logger.warn(\n f\"Running state operation for environment '{match.group('env')}' outside of an environment\"\n )\n elif project.active_environment.name != match.group(\"env\"):\n logger.warn(\n f\"Environment '{match.group('env')}' used in state operation does not match current environment '{project.active_environment.name}'.\"\n )\n project.activate_environment(match.group(\"env\"))\n blocks = [match.group(\"tap\"), match.group(\"target\")]\n parser = BlockParser(logger, project, blocks)\n return next(parser.find_blocks()).state_service\n except Exception:\n logger.warn(\"No plugins found for provided state_id.\")\n # If provided state_id does not match convention (i.e., run via \"meltano elt\"),\n # use the standalone StateService in the CLI context.\n return None\n\n\[email protected](cls=InstrumentedGroup, name=\"state\", short_help=\"Manage Singer state.\")\[email protected]_context\n@pass_project(migrate=True)\ndef meltano_state(project: Project, ctx: click.Context):\n \"\"\"\n Manage state.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#state\n \"\"\"\n _, sessionmaker = project_engine(project)\n session = sessionmaker()\n ctx.obj[STATE_SERVICE_KEY] = StateService(session) # noqa: WPS204\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"list\")\[email protected](\"pattern\", required=False)\[email protected]_context\n@pass_project()\ndef list_state(\n project: Project, ctx: click.Context, pattern: str | None\n): # noqa: WPS125\n \"\"\"List all state_ids for this project.\n\n Optionally pass a glob-style pattern to filter state_ids by.\n \"\"\"\n state_service = ctx.obj[STATE_SERVICE_KEY]\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"list\")\n states = state_service.list_state(pattern)\n if states:\n for state_id, state in states.items():\n if state:\n try:\n state_service.validate_state(json.dumps(state))\n except (InvalidJobStateError, json.decoder.JSONDecodeError):\n click.secho(state_id, fg=\"red\")\n else:\n click.secho(state_id, fg=\"green\")\n else:\n click.secho(state_id, fg=\"yellow\")\n else:\n logger.info(\"No state IDs found.\")\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"copy\")\n@prompt_for_confirmation(\n prompt=\"This will overwrite state for the destination. 
Continue?\"\n)\[email protected](\"src-state-id\", type=str)\[email protected](\"dst-state-id\", type=str)\n@pass_project(migrate=True)\[email protected]_context\ndef copy_state(\n ctx: click.Context,\n project: Project,\n src_state_id: str,\n dst_state_id: str,\n force: bool,\n):\n \"\"\"Copy state to another job id.\"\"\"\n # Retrieve state for copying\n state_service = (\n state_service_from_state_id(project, src_state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"copy\", dst_state_id)\n\n state_service.copy_state(src_state_id, dst_state_id)\n\n logger.info(\n f\"State for {dst_state_id} was successfully copied from {src_state_id} at {dt.utcnow():%Y-%m-%d %H:%M:%S}.\" # noqa: WPS323\n )\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"move\")\n@prompt_for_confirmation(\n prompt=\"This will clear the source state and overwrite destination state. Continue?\"\n)\[email protected](\"src-state-id\", type=str)\[email protected](\"dst-state-id\", type=str)\n@pass_project(migrate=True)\[email protected]_context\ndef move_state(\n ctx: click.Context,\n project: Project,\n src_state_id: str,\n dst_state_id: str,\n force: bool,\n):\n \"\"\"Move state to another job id, clearing the original.\"\"\"\n # Retrieve state for moveing\n state_service = (\n state_service_from_state_id(project, dst_state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"move\", dst_state_id)\n\n state_service.move_state(src_state_id, dst_state_id)\n\n logger.info(\n f\"State for {src_state_id} was successfully moved to {dst_state_id} at {dt.utcnow():%Y-%m-%d %H:%M:%S}.\" # noqa: WPS323\n )\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"merge\")\[email protected](\n \"--from-state-id\",\n type=str,\n help=\"Merge state from an existing state ID.\",\n)\[email protected](\n \"--input-file\",\n type=click.Path(exists=True),\n help=\"Merge state from a JSON file containing Singer state.\",\n)\[email protected](\"state-id\", type=str)\[email protected](\"state\", type=str, required=False)\n@pass_project(migrate=True)\[email protected]_context\ndef merge_state(\n ctx: click.Context,\n project: Project,\n state_id: str,\n state: str | None,\n input_file: click.Path | None,\n from_state_id: str | None,\n):\n \"\"\"Add bookmarks to existing state.\"\"\"\n state_service = (\n state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"merge\", state_id)\n mutually_exclusive_options = [\"--input-file\", \"STATE\", \"--from-state-id\"]\n if not reduce(xor, map(bool, [state, input_file, from_state_id])):\n raise MutuallyExclusiveOptionsError(*mutually_exclusive_options)\n elif input_file:\n with open(input_file) as state_f:\n state_service.add_state(\n state_id, state_f.read(), payload_flags=Payload.INCOMPLETE_STATE\n )\n elif state:\n state_service.add_state(state_id, state, payload_flags=Payload.INCOMPLETE_STATE)\n elif from_state_id:\n state_service.merge_state(from_state_id, state_id)\n logger.info(\n f\"State for {state_id} was successfully merged at {dt.utcnow():%Y-%m-%d %H:%M:%S}.\" # noqa: WPS323\n )\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"set\")\n@prompt_for_confirmation(\n prompt=\"This will overwrite the state's current value. 
Continue?\"\n)\[email protected](\n \"--input-file\",\n type=click.Path(exists=True),\n help=\"Set state from json file containing Singer state.\",\n)\[email protected](\"state-id\")\[email protected](\"state\", type=str, required=False)\n@pass_project(migrate=True)\[email protected]_context\ndef set_state(\n ctx: click.Context,\n project: Project,\n state_id: str,\n state: str | None,\n input_file: click.Path | None,\n force: bool,\n):\n \"\"\"Set state.\"\"\"\n state_service = (\n state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"set\", state_id)\n if not reduce(xor, map(bool, [state, input_file])):\n raise MutuallyExclusiveOptionsError(\"--input-file\", \"STATE\")\n elif input_file:\n with open(input_file) as state_f:\n state_service.set_state(state_id, state_f.read())\n elif state:\n state_service.set_state(state_id, state)\n logger.info(\n f\"State for {state_id} was successfully set at {dt.utcnow():%Y-%m-%d %H:%M:%S}.\" # noqa: WPS323\n )\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"get\") # noqa: WPS46\[email protected](\"state-id\")\n@pass_project(migrate=True)\[email protected]_context\ndef get_state(ctx: click.Context, project: Project, state_id: str): # noqa: WPS463\n \"\"\"Get state.\"\"\"\n state_service = (\n state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"get\", state_id)\n retrieved_state = state_service.get_state(state_id)\n click.echo(json.dumps(retrieved_state))\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"clear\")\n@prompt_for_confirmation(prompt=\"This will clear state for the job. Continue?\")\[email protected](\"state-id\")\n@pass_project(migrate=True)\[email protected]_context\ndef clear_state(ctx: click.Context, project: Project, state_id: str, force: bool):\n \"\"\"Clear state.\"\"\"\n state_service = (\n state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"clear\", state_id)\n state_service.clear_state(state_id)\n", "path": "src/meltano/cli/state.py"}], "after_files": [{"content": "\"\"\"State management in CLI.\"\"\"\nfrom __future__ import annotations\n\nimport json\nimport re\nfrom datetime import datetime as dt\nfrom functools import partial, reduce, wraps\nfrom operator import xor\n\nimport click\nimport structlog\n\nfrom meltano.cli.params import pass_project\nfrom meltano.core.block.parser import BlockParser\nfrom meltano.core.db import project_engine\nfrom meltano.core.job import Payload\nfrom meltano.core.project import Project\nfrom meltano.core.state_service import InvalidJobStateError, StateService\n\nfrom . 
import cli\nfrom .utils import InstrumentedCmd, InstrumentedGroup\n\nSTATE_SERVICE_KEY = \"state_service\"\n\nlogger = structlog.getLogger(__name__)\n\n\nclass MutuallyExclusiveOptionsError(Exception):\n \"\"\"Occurs when mutually exclusive options are provided incorrectly.\"\"\"\n\n def __init__(self, *options: str) -> None:\n \"\"\"Instantiate the error.\n\n Args:\n options: the mutually exclusive options that were incorrectly provided.\n \"\"\"\n super().__init__(*options)\n self.options = options\n\n def __str__(self) -> str:\n \"\"\"Represent the error as a string.\"\"\"\n return f\"Must provide exactly one of: {','.join(self.options)}\"\n\n\ndef _prompt_for_confirmation(prompt):\n \"\"\"Wrap destructive CLI commands which should prompt the user for confirmation.\"\"\"\n\n def wrapper(func):\n fun = click.option(\n \"--force\", is_flag=True, help=\"Don't prompt for confirmation.\"\n )(func)\n\n @wraps(func)\n def _wrapper(force=False, *args, **kwargs):\n if force or click.confirm(prompt):\n return fun(*args, **kwargs, force=force)\n else:\n click.secho(\"Aborting.\", fg=\"red\")\n\n return _wrapper\n\n return wrapper\n\n\nprompt_for_confirmation = partial(\n _prompt_for_confirmation, prompt=\"This is a destructive command. Continue?\"\n)\n\n\ndef state_service_from_state_id(project: Project, state_id: str) -> StateService | None:\n \"\"\"Instantiate by parsing a state_id.\"\"\"\n state_id_re = re.compile(r\"^(?P<env>.+)\\:(?P<tap>.+)-to-(?P<target>.+)$\")\n match = state_id_re.match(state_id)\n if match:\n # If the state_id matches convention (i.e., job has been run via \"meltano run\"),\n # try parsing into BlockSet.\n # This way, we get BlockSet validation and raise an error if no\n # plugin in the BlockSet has \"state\" capability\n try:\n if not project.active_environment:\n logger.warn(\n f\"Running state operation for environment '{match.group('env')}' outside of an environment\"\n )\n elif project.active_environment.name != match.group(\"env\"):\n logger.warn(\n f\"Environment '{match.group('env')}' used in state operation does not match current environment '{project.active_environment.name}'.\"\n )\n project.activate_environment(match.group(\"env\"))\n blocks = [match.group(\"tap\"), match.group(\"target\")]\n parser = BlockParser(logger, project, blocks)\n return next(parser.find_blocks()).state_service\n except Exception:\n logger.warn(\"No plugins found for provided state_id.\")\n # If provided state_id does not match convention (i.e., run via \"meltano elt\"),\n # use the standalone StateService in the CLI context.\n return None\n\n\[email protected](cls=InstrumentedGroup, name=\"state\", short_help=\"Manage Singer state.\")\[email protected]_context\n@pass_project(migrate=True)\ndef meltano_state(project: Project, ctx: click.Context):\n \"\"\"\n Manage state.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#state\n \"\"\"\n _, sessionmaker = project_engine(project)\n session = sessionmaker()\n ctx.obj[STATE_SERVICE_KEY] = StateService(session) # noqa: WPS204\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"list\")\[email protected](\"--pattern\", type=str, help=\"Filter state IDs by pattern.\")\[email protected]_context\n@pass_project()\ndef list_state(\n project: Project, ctx: click.Context, pattern: str | None\n): # noqa: WPS125\n \"\"\"List all state_ids for this project.\n\n Optionally pass a glob-style pattern to filter state_ids by.\n \"\"\"\n state_service = ctx.obj[STATE_SERVICE_KEY]\n 
ctx.obj[\"legacy_tracker\"].track_meltano_state(\"list\")\n states = state_service.list_state(pattern)\n if states:\n for state_id, state in states.items():\n if state:\n try:\n state_service.validate_state(json.dumps(state))\n except (InvalidJobStateError, json.decoder.JSONDecodeError):\n click.secho(state_id, fg=\"red\")\n else:\n click.secho(state_id, fg=\"green\")\n else:\n click.secho(state_id, fg=\"yellow\")\n else:\n logger.info(\"No state IDs found.\")\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"copy\")\n@prompt_for_confirmation(\n prompt=\"This will overwrite state for the destination. Continue?\"\n)\[email protected](\"src-state-id\", type=str)\[email protected](\"dst-state-id\", type=str)\n@pass_project(migrate=True)\[email protected]_context\ndef copy_state(\n ctx: click.Context,\n project: Project,\n src_state_id: str,\n dst_state_id: str,\n force: bool,\n):\n \"\"\"Copy state to another job id.\"\"\"\n # Retrieve state for copying\n state_service = (\n state_service_from_state_id(project, src_state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"copy\", dst_state_id)\n\n state_service.copy_state(src_state_id, dst_state_id)\n\n logger.info(\n f\"State for {dst_state_id} was successfully copied from {src_state_id} at {dt.utcnow():%Y-%m-%d %H:%M:%S}.\" # noqa: WPS323\n )\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"move\")\n@prompt_for_confirmation(\n prompt=\"This will clear the source state and overwrite destination state. Continue?\"\n)\[email protected](\"src-state-id\", type=str)\[email protected](\"dst-state-id\", type=str)\n@pass_project(migrate=True)\[email protected]_context\ndef move_state(\n ctx: click.Context,\n project: Project,\n src_state_id: str,\n dst_state_id: str,\n force: bool,\n):\n \"\"\"Move state to another job id, clearing the original.\"\"\"\n # Retrieve state for moveing\n state_service = (\n state_service_from_state_id(project, dst_state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"move\", dst_state_id)\n\n state_service.move_state(src_state_id, dst_state_id)\n\n logger.info(\n f\"State for {src_state_id} was successfully moved to {dst_state_id} at {dt.utcnow():%Y-%m-%d %H:%M:%S}.\" # noqa: WPS323\n )\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"merge\")\[email protected](\n \"--from-state-id\",\n type=str,\n help=\"Merge state from an existing state ID.\",\n)\[email protected](\n \"--input-file\",\n type=click.Path(exists=True),\n help=\"Merge state from a JSON file containing Singer state.\",\n)\[email protected](\"state-id\", type=str)\[email protected](\"state\", type=str, required=False)\n@pass_project(migrate=True)\[email protected]_context\ndef merge_state(\n ctx: click.Context,\n project: Project,\n state_id: str,\n state: str | None,\n input_file: click.Path | None,\n from_state_id: str | None,\n):\n \"\"\"Add bookmarks to existing state.\"\"\"\n state_service = (\n state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"merge\", state_id)\n mutually_exclusive_options = [\"--input-file\", \"STATE\", \"--from-state-id\"]\n if not reduce(xor, map(bool, [state, input_file, from_state_id])):\n raise MutuallyExclusiveOptionsError(*mutually_exclusive_options)\n elif input_file:\n with open(input_file) as state_f:\n state_service.add_state(\n state_id, state_f.read(), payload_flags=Payload.INCOMPLETE_STATE\n )\n elif state:\n 
state_service.add_state(state_id, state, payload_flags=Payload.INCOMPLETE_STATE)\n elif from_state_id:\n state_service.merge_state(from_state_id, state_id)\n logger.info(\n f\"State for {state_id} was successfully merged at {dt.utcnow():%Y-%m-%d %H:%M:%S}.\" # noqa: WPS323\n )\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"set\")\n@prompt_for_confirmation(\n prompt=\"This will overwrite the state's current value. Continue?\"\n)\[email protected](\n \"--input-file\",\n type=click.Path(exists=True),\n help=\"Set state from json file containing Singer state.\",\n)\[email protected](\"state-id\")\[email protected](\"state\", type=str, required=False)\n@pass_project(migrate=True)\[email protected]_context\ndef set_state(\n ctx: click.Context,\n project: Project,\n state_id: str,\n state: str | None,\n input_file: click.Path | None,\n force: bool,\n):\n \"\"\"Set state.\"\"\"\n state_service = (\n state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"set\", state_id)\n if not reduce(xor, map(bool, [state, input_file])):\n raise MutuallyExclusiveOptionsError(\"--input-file\", \"STATE\")\n elif input_file:\n with open(input_file) as state_f:\n state_service.set_state(state_id, state_f.read())\n elif state:\n state_service.set_state(state_id, state)\n logger.info(\n f\"State for {state_id} was successfully set at {dt.utcnow():%Y-%m-%d %H:%M:%S}.\" # noqa: WPS323\n )\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"get\") # noqa: WPS46\[email protected](\"state-id\")\n@pass_project(migrate=True)\[email protected]_context\ndef get_state(ctx: click.Context, project: Project, state_id: str): # noqa: WPS463\n \"\"\"Get state.\"\"\"\n state_service = (\n state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"get\", state_id)\n retrieved_state = state_service.get_state(state_id)\n click.echo(json.dumps(retrieved_state))\n\n\n@meltano_state.command(cls=InstrumentedCmd, name=\"clear\")\n@prompt_for_confirmation(prompt=\"This will clear state for the job. Continue?\")\[email protected](\"state-id\")\n@pass_project(migrate=True)\[email protected]_context\ndef clear_state(ctx: click.Context, project: Project, state_id: str, force: bool):\n \"\"\"Clear state.\"\"\"\n state_service = (\n state_service_from_state_id(project, state_id) or ctx.obj[STATE_SERVICE_KEY]\n )\n ctx.obj[\"legacy_tracker\"].track_meltano_state(\"clear\", state_id)\n state_service.clear_state(state_id)\n", "path": "src/meltano/cli/state.py"}]}
| 3,826 | 112 |
gh_patches_debug_41921
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-1862
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
get_url_for_time_range function in stereo.py in dataretriever not working correctly.
The following query:-
``` python
from sunpy.time.timerange import TimeRange
from sunpy.net.vso.attrs import Time, Instrument
from sunpy.net.dataretriever.client import QueryResponse
import sunpy.net.dataretriever.sources.stereo as stereo
LCClient = stereo.HETClient()
urls = LCClient._get_url_for_timerange(TimeRange('2008/12/01','2010/12/01'),'ahead', 15*u.min)
```
Should return a non-empty list of urls but instead returns an empty list. Possible problem stems from the implementation of scraper.py in sunpy.util. The scraper doesn't work as intended on http://www.srl.caltech.edu/STEREO/DATA/HET.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/util/scraper.py`
Content:
```
1 from __future__ import absolute_import, division, print_function
2
3 import os
4 import datetime
5 import re
6
7 from bs4 import BeautifulSoup
8 from sunpy.extern import six
9 from sunpy.extern.six.moves import range, zip
10 from sunpy.extern.six.moves.urllib.request import urlopen
11
12 __all__ = ['Scraper']
13
14 # regular expressions to convert datetime format
15 TIME_CONVERSIONS = {'%Y': '\d{4}', '%y': '\d{2}',
16 '%b': '[A-Z]..', '%B': '\W', '%m': '\d{2}',
17 '%d': '\d{2}', '%j': '\d{3}',
18 '%H': '\d{2}', '%I': '\d{2}',
19 '%M': '\d{2}',
20 '%S': '\d{2}'}
21
22 class Scraper(object):
23 """
24 A Scraper to scrap web data archives based on dates.
25
26 Parameters
27 ----------
28 pattern : string
29 A string containing the url with the date encoded as
30 datetime formats, and any other parameter as kwargs
31 as string format.
32
33 Attributes
34 ----------
35 pattern : string
36 A converted string with the kwargs.
37 now : datetime.datetime
38 The pattern with the actual date.
39
40 Examples
41 --------
42 >>> # Downloading data from SolarMonitor.org
43 >>> from sunpy.util.scraper import Scraper
44 >>> solmon_pattern = ('http://solarmonitor.org/data/'
45 '%Y/%m/%d/fits/{instrument}/'
46 '{instrument}_{wave:05d}_fd_%Y%m%d_%H%M%S.fts.gz')
47 >>> solmon = Scraper(solmon_pattern, instrument = 'swap', wave = 174)
48 >>> print(solmon.pattern)
49 http://solarmonitor.org/data/%Y/%m/%d/fits/swap/swap_00174_fd_%Y%m%d_%H%M%S.fts.gz
50 >>> print(solmon.now)
51 http://solarmonitor.org/data/2012/01/25/fits/swap/swap_00174_fd_20120125_173301.fts.gz
52
53 Notes
54 -----
55 The now attribute does not return an existent file, but just how the
56 pattern looks with the actual time.
57 """
58 def __init__(self, pattern, **kwargs):
59 self.pattern = pattern.format(**kwargs)
60 self.now = datetime.datetime.now().strftime(self.pattern)
61
62 def matches(self, filepath, date):
63 return date.strftime(self.pattern) == filepath
64
65 def range(self, timerange):
66 """
67 Gets the directories for a certain range of time
68 (i.e. using `~sunpy.time.TimeRange`).
69
70 Parameters
71 ----------
72
73 timerange : `~sunpy.time.timerange.TimeRange`
74 Time interval where to find the directories for a given
75 pattern.
76
77 Returns
78 -------
79
80 directories : list of strings
81 List of all the possible directories valid for the time
82 range given. Notice that these directories may not exist
83 in the archive.
84 """
85 #find directory structure - without file names
86 directorypattern = os.path.dirname(self.pattern) + '/'
87 #TODO what if there's not slashes?
88 rangedelta = timerange.dt
89 timestep = self._smallerPattern(directorypattern)
90 if timestep is None:
91 return [directorypattern]
92 else:
93 # Number of elements in the time range (including end)
94 n_steps = rangedelta.total_seconds()/timestep.total_seconds()
95 TotalTimeElements = int(round(n_steps)) + 1
96 directories = [(timerange.start + n * timestep).strftime(directorypattern)
97 for n in range(TotalTimeElements)] #todo if date <= endate
98 return directories
99
100 def _URL_followsPattern(self, url):
101 """Check whether the url provided follows the pattern"""
102 pattern = self.pattern
103 for k,v in six.iteritems(TIME_CONVERSIONS):
104 pattern = pattern.replace(k, v)
105 matches = re.match(pattern, url)
106 if matches:
107 return matches.end() == matches.endpos == len(self.now)
108 return False
109
110 def _extractDateURL(self, url):
111 """Extracts the date from a particular url following the pattern"""
112 # url_to_list substitutes '.' and '_' for '/' to then create
113 # a list of all the blocks in times - assuming they are all
114 # separated with either '.', '_' or '/'
115 url_to_list = lambda txt: re.sub(r'\.|_', '/', txt).split('/')
116 pattern_list = url_to_list(self.pattern)
117 url_list = url_to_list(url)
118
119 time_order = ['%Y', '%y', '%b', '%B', '%m', '%d', '%j',
120 '%H', '%I', '%M', '%S']
121 final_date = []
122 final_pattern = []
123 # Find in directory and filename
124 for pattern_elem, url_elem in zip(pattern_list, url_list):
125 time_formats = [x for x in time_order if x in pattern_elem]
126 if len(time_formats) > 0:
127 final_date.append(url_elem)
128 final_pattern.append(pattern_elem)
129 for time_bit in time_formats:
130 time_order.remove(time_bit)
131 # Find and remove repeated elements eg: %Y in ['%Y', '%Y%m%d']
132 # Make all as single strings
133 date_together = ''.join(final_date)
134 pattern_together = ''.join(final_pattern)
135 re_together = pattern_together
136 for k, v in six.iteritems(TIME_CONVERSIONS):
137 re_together = re_together.replace(k, v)
138
139 # Create new empty lists
140 final_date = list()
141 final_pattern = list()
142 for p,r in zip(pattern_together.split('%')[1:], re_together.split('\\')[1:]):
143 regexp = '\\{}'.format(r)
144 pattern = '%{}'.format(p)
145 date_part = re.match(regexp, date_together)
146 date_together = date_together[:date_part.start()] + \
147 date_together[date_part.end():]
148 if pattern not in final_pattern:
149 final_pattern.append('%{}'.format(p))
150 final_date.append(date_part.group())
151 return datetime.datetime.strptime(' '.join(final_date),
152 ' '.join(final_pattern))
153
154 def filelist(self, timerange):
155 """
156 Returns the list of existent files in the archive for the
157 given time range.
158
159 Parameters
160 ----------
161
162 timerange : `~sunpy.time.TimeRange`
163 Time interval where to find the directories for a given
164 pattern.
165
166 Returns
167 -------
168
169 filesurls : list of strings
170 List of all the files found between the time range given.
171
172 Examples
173 --------
174 >>> from sunpy.time import TimeRange
175 >>> timerange = TimeRange('2015-01-01','2015-01-01T16:00:00')
176 >>> print(solmon.filelist(timerange))
177 ['http://solarmonitor.org/data/2015/01/01/fits/swap/swap_00174_fd_20150101_025423.fts.gz']
178 """
179 directories = self.range(timerange)
180 filesurls = []
181 for directory in directories:
182 try:
183 opn = urlopen(directory)
184 try:
185 soup = BeautifulSoup(opn)
186 for link in soup.find_all("a"):
187 href = link.get("href")
188 if href.endswith(self.pattern.split('.')[-1]):
189 fullpath = directory + href
190 if self._URL_followsPattern(fullpath):
191 datehref = self._extractDateURL(fullpath)
192 if (datehref >= timerange.start and
193 datehref <= timerange.end):
194 filesurls.append(fullpath)
195 finally:
196 opn.close()
197 except:
198 pass
199 return filesurls
200
201 def _smallerPattern(self, directoryPattern):
202 """Obtain the smaller time step for the given pattern"""
203 try:
204 if "%S" in directoryPattern:
205 return datetime.timedelta(seconds=1)
206 elif "%M" in directoryPattern:
207 return datetime.timedelta(minutes=1)
208 elif any(hour in directoryPattern for hour in ["%H", "%I"]):
209 return datetime.timedelta(hours=1)
210 elif any(day in directoryPattern for day in ["%d", "%j"]):
211 return datetime.timedelta(days=1)
212 elif any(month in directoryPattern for month in ["%b","%B","%m"]):
213 return datetime.timedelta(days=31)
214 elif any(year in directoryPattern for year in ["%Y", "%y"]):
215 return datetime.timedelta(days=365)
216 else:
217 return None
218 except:
219 raise
220
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sunpy/util/scraper.py b/sunpy/util/scraper.py
--- a/sunpy/util/scraper.py
+++ b/sunpy/util/scraper.py
@@ -12,12 +12,13 @@
__all__ = ['Scraper']
# regular expressions to convert datetime format
+# added `%e` as for milliseconds `%f/1000`
TIME_CONVERSIONS = {'%Y': '\d{4}', '%y': '\d{2}',
- '%b': '[A-Z]..', '%B': '\W', '%m': '\d{2}',
+ '%b': '[A-Z][a-z]{2}', '%B': '\W', '%m': '\d{2}',
'%d': '\d{2}', '%j': '\d{3}',
'%H': '\d{2}', '%I': '\d{2}',
'%M': '\d{2}',
- '%S': '\d{2}'}
+ '%S': '\d{2}', '%e': '\d{3}', '%f': '\d{6}'}
class Scraper(object):
"""
@@ -57,7 +58,13 @@
"""
def __init__(self, pattern, **kwargs):
self.pattern = pattern.format(**kwargs)
- self.now = datetime.datetime.now().strftime(self.pattern)
+ milliseconds = re.search('\%e', self.pattern)
+ if not milliseconds:
+ self.now = datetime.datetime.now().strftime(self.pattern)
+ else:
+ now = datetime.datetime.now()
+ milliseconds_ = int(now.microsecond / 1000.)
+ self.now = now.strftime(self.pattern[0:milliseconds.start()] + str(milliseconds_) + self.pattern[milliseconds.end():])
def matches(self, filepath, date):
return date.strftime(self.pattern) == filepath
@@ -115,9 +122,8 @@
url_to_list = lambda txt: re.sub(r'\.|_', '/', txt).split('/')
pattern_list = url_to_list(self.pattern)
url_list = url_to_list(url)
-
time_order = ['%Y', '%y', '%b', '%B', '%m', '%d', '%j',
- '%H', '%I', '%M', '%S']
+ '%H', '%I', '%M', '%S', '%e', '%f']
final_date = []
final_pattern = []
# Find in directory and filename
@@ -139,10 +145,13 @@
# Create new empty lists
final_date = list()
final_pattern = list()
+ re_together = re_together.replace('[A-Z]', '\\[A-Z]')
for p,r in zip(pattern_together.split('%')[1:], re_together.split('\\')[1:]):
- regexp = '\\{}'.format(r)
+ if p == 'e':
+ continue
+ regexp = '\\{}'.format(r) if not r.startswith('[') else r
pattern = '%{}'.format(p)
- date_part = re.match(regexp, date_together)
+ date_part = re.search(regexp, date_together)
date_together = date_together[:date_part.start()] + \
date_together[date_part.end():]
if pattern not in final_pattern:
@@ -182,7 +191,7 @@
try:
opn = urlopen(directory)
try:
- soup = BeautifulSoup(opn)
+ soup = BeautifulSoup(opn, "lxml")
for link in soup.find_all("a"):
href = link.get("href")
if href.endswith(self.pattern.split('.')[-1]):
|
{"golden_diff": "diff --git a/sunpy/util/scraper.py b/sunpy/util/scraper.py\n--- a/sunpy/util/scraper.py\n+++ b/sunpy/util/scraper.py\n@@ -12,12 +12,13 @@\n __all__ = ['Scraper']\n \n # regular expressions to convert datetime format\n+# added `%e` as for milliseconds `%f/1000`\n TIME_CONVERSIONS = {'%Y': '\\d{4}', '%y': '\\d{2}',\n- '%b': '[A-Z]..', '%B': '\\W', '%m': '\\d{2}',\n+ '%b': '[A-Z][a-z]{2}', '%B': '\\W', '%m': '\\d{2}',\n '%d': '\\d{2}', '%j': '\\d{3}',\n '%H': '\\d{2}', '%I': '\\d{2}',\n '%M': '\\d{2}',\n- '%S': '\\d{2}'}\n+ '%S': '\\d{2}', '%e': '\\d{3}', '%f': '\\d{6}'}\n \n class Scraper(object):\n \"\"\"\n@@ -57,7 +58,13 @@\n \"\"\"\n def __init__(self, pattern, **kwargs):\n self.pattern = pattern.format(**kwargs)\n- self.now = datetime.datetime.now().strftime(self.pattern)\n+ milliseconds = re.search('\\%e', self.pattern)\n+ if not milliseconds:\n+ self.now = datetime.datetime.now().strftime(self.pattern)\n+ else:\n+ now = datetime.datetime.now()\n+ milliseconds_ = int(now.microsecond / 1000.)\n+ self.now = now.strftime(self.pattern[0:milliseconds.start()] + str(milliseconds_) + self.pattern[milliseconds.end():])\n \n def matches(self, filepath, date):\n return date.strftime(self.pattern) == filepath\n@@ -115,9 +122,8 @@\n url_to_list = lambda txt: re.sub(r'\\.|_', '/', txt).split('/')\n pattern_list = url_to_list(self.pattern)\n url_list = url_to_list(url)\n-\n time_order = ['%Y', '%y', '%b', '%B', '%m', '%d', '%j',\n- '%H', '%I', '%M', '%S']\n+ '%H', '%I', '%M', '%S', '%e', '%f']\n final_date = []\n final_pattern = []\n # Find in directory and filename\n@@ -139,10 +145,13 @@\n # Create new empty lists\n final_date = list()\n final_pattern = list()\n+ re_together = re_together.replace('[A-Z]', '\\\\[A-Z]')\n for p,r in zip(pattern_together.split('%')[1:], re_together.split('\\\\')[1:]):\n- regexp = '\\\\{}'.format(r)\n+ if p == 'e':\n+ continue\n+ regexp = '\\\\{}'.format(r) if not r.startswith('[') else r\n pattern = '%{}'.format(p)\n- date_part = re.match(regexp, date_together)\n+ date_part = re.search(regexp, date_together)\n date_together = date_together[:date_part.start()] + \\\n date_together[date_part.end():]\n if pattern not in final_pattern:\n@@ -182,7 +191,7 @@\n try:\n opn = urlopen(directory)\n try:\n- soup = BeautifulSoup(opn)\n+ soup = BeautifulSoup(opn, \"lxml\")\n for link in soup.find_all(\"a\"):\n href = link.get(\"href\")\n if href.endswith(self.pattern.split('.')[-1]):\n", "issue": "get_url_for_time_range function in stereo.py in dataretriever not working correctly.\nThe following query:-\n\n``` python\nfrom sunpy.time.timerange import TimeRange\nfrom sunpy.net.vso.attrs import Time, Instrument\nfrom sunpy.net.dataretriever.client import QueryResponse\nimport sunpy.net.dataretriever.sources.stereo as stereo\n\nLCClient = stereo.HETClient()\nurls = LCClient._get_url_for_timerange(TimeRange('2008/12/01','2010/12/01'),'ahead', 15*u.min)\n\n```\n\nShould return a non-empty list of urls but instead returns an empty list. Possible problem stems from the implementation of scraper.py in sunpy.util. 
The scraper doesn't work as intended on http://www.srl.caltech.edu/STEREO/DATA/HET.\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport os\nimport datetime\nimport re\n\nfrom bs4 import BeautifulSoup\nfrom sunpy.extern import six\nfrom sunpy.extern.six.moves import range, zip\nfrom sunpy.extern.six.moves.urllib.request import urlopen\n\n__all__ = ['Scraper']\n\n# regular expressions to convert datetime format\nTIME_CONVERSIONS = {'%Y': '\\d{4}', '%y': '\\d{2}',\n '%b': '[A-Z]..', '%B': '\\W', '%m': '\\d{2}',\n '%d': '\\d{2}', '%j': '\\d{3}',\n '%H': '\\d{2}', '%I': '\\d{2}',\n '%M': '\\d{2}',\n '%S': '\\d{2}'}\n\nclass Scraper(object):\n \"\"\"\n A Scraper to scrap web data archives based on dates.\n\n Parameters\n ----------\n pattern : string\n A string containing the url with the date encoded as\n datetime formats, and any other parameter as kwargs\n as string format.\n\n Attributes\n ----------\n pattern : string\n A converted string with the kwargs.\n now : datetime.datetime\n The pattern with the actual date.\n\n Examples\n --------\n >>> # Downloading data from SolarMonitor.org\n >>> from sunpy.util.scraper import Scraper\n >>> solmon_pattern = ('http://solarmonitor.org/data/'\n '%Y/%m/%d/fits/{instrument}/'\n '{instrument}_{wave:05d}_fd_%Y%m%d_%H%M%S.fts.gz')\n >>> solmon = Scraper(solmon_pattern, instrument = 'swap', wave = 174)\n >>> print(solmon.pattern)\n http://solarmonitor.org/data/%Y/%m/%d/fits/swap/swap_00174_fd_%Y%m%d_%H%M%S.fts.gz\n >>> print(solmon.now)\n http://solarmonitor.org/data/2012/01/25/fits/swap/swap_00174_fd_20120125_173301.fts.gz\n\n Notes\n -----\n The now attribute does not return an existent file, but just how the\n pattern looks with the actual time.\n \"\"\"\n def __init__(self, pattern, **kwargs):\n self.pattern = pattern.format(**kwargs)\n self.now = datetime.datetime.now().strftime(self.pattern)\n\n def matches(self, filepath, date):\n return date.strftime(self.pattern) == filepath\n\n def range(self, timerange):\n \"\"\"\n Gets the directories for a certain range of time\n (i.e. using `~sunpy.time.TimeRange`).\n\n Parameters\n ----------\n\n timerange : `~sunpy.time.timerange.TimeRange`\n Time interval where to find the directories for a given\n pattern.\n\n Returns\n -------\n\n directories : list of strings\n List of all the possible directories valid for the time\n range given. 
Notice that these directories may not exist\n in the archive.\n \"\"\"\n #find directory structure - without file names\n directorypattern = os.path.dirname(self.pattern) + '/'\n #TODO what if there's not slashes?\n rangedelta = timerange.dt\n timestep = self._smallerPattern(directorypattern)\n if timestep is None:\n return [directorypattern]\n else:\n # Number of elements in the time range (including end)\n n_steps = rangedelta.total_seconds()/timestep.total_seconds()\n TotalTimeElements = int(round(n_steps)) + 1\n directories = [(timerange.start + n * timestep).strftime(directorypattern)\n for n in range(TotalTimeElements)] #todo if date <= endate\n return directories\n\n def _URL_followsPattern(self, url):\n \"\"\"Check whether the url provided follows the pattern\"\"\"\n pattern = self.pattern\n for k,v in six.iteritems(TIME_CONVERSIONS):\n pattern = pattern.replace(k, v)\n matches = re.match(pattern, url)\n if matches:\n return matches.end() == matches.endpos == len(self.now)\n return False\n\n def _extractDateURL(self, url):\n \"\"\"Extracts the date from a particular url following the pattern\"\"\"\n # url_to_list substitutes '.' and '_' for '/' to then create\n # a list of all the blocks in times - assuming they are all\n # separated with either '.', '_' or '/'\n url_to_list = lambda txt: re.sub(r'\\.|_', '/', txt).split('/')\n pattern_list = url_to_list(self.pattern)\n url_list = url_to_list(url)\n\n time_order = ['%Y', '%y', '%b', '%B', '%m', '%d', '%j',\n '%H', '%I', '%M', '%S']\n final_date = []\n final_pattern = []\n # Find in directory and filename\n for pattern_elem, url_elem in zip(pattern_list, url_list):\n time_formats = [x for x in time_order if x in pattern_elem]\n if len(time_formats) > 0:\n final_date.append(url_elem)\n final_pattern.append(pattern_elem)\n for time_bit in time_formats:\n time_order.remove(time_bit)\n # Find and remove repeated elements eg: %Y in ['%Y', '%Y%m%d']\n # Make all as single strings\n date_together = ''.join(final_date)\n pattern_together = ''.join(final_pattern)\n re_together = pattern_together\n for k, v in six.iteritems(TIME_CONVERSIONS):\n re_together = re_together.replace(k, v)\n\n # Create new empty lists\n final_date = list()\n final_pattern = list()\n for p,r in zip(pattern_together.split('%')[1:], re_together.split('\\\\')[1:]):\n regexp = '\\\\{}'.format(r)\n pattern = '%{}'.format(p)\n date_part = re.match(regexp, date_together)\n date_together = date_together[:date_part.start()] + \\\n date_together[date_part.end():]\n if pattern not in final_pattern:\n final_pattern.append('%{}'.format(p))\n final_date.append(date_part.group())\n return datetime.datetime.strptime(' '.join(final_date),\n ' '.join(final_pattern))\n\n def filelist(self, timerange):\n \"\"\"\n Returns the list of existent files in the archive for the\n given time range.\n\n Parameters\n ----------\n\n timerange : `~sunpy.time.TimeRange`\n Time interval where to find the directories for a given\n pattern.\n\n Returns\n -------\n\n filesurls : list of strings\n List of all the files found between the time range given.\n\n Examples\n --------\n >>> from sunpy.time import TimeRange\n >>> timerange = TimeRange('2015-01-01','2015-01-01T16:00:00')\n >>> print(solmon.filelist(timerange))\n ['http://solarmonitor.org/data/2015/01/01/fits/swap/swap_00174_fd_20150101_025423.fts.gz']\n \"\"\"\n directories = self.range(timerange)\n filesurls = []\n for directory in directories:\n try:\n opn = urlopen(directory)\n try:\n soup = BeautifulSoup(opn)\n for link in 
soup.find_all(\"a\"):\n href = link.get(\"href\")\n if href.endswith(self.pattern.split('.')[-1]):\n fullpath = directory + href\n if self._URL_followsPattern(fullpath):\n datehref = self._extractDateURL(fullpath)\n if (datehref >= timerange.start and\n datehref <= timerange.end):\n filesurls.append(fullpath)\n finally:\n opn.close()\n except:\n pass\n return filesurls\n\n def _smallerPattern(self, directoryPattern):\n \"\"\"Obtain the smaller time step for the given pattern\"\"\"\n try:\n if \"%S\" in directoryPattern:\n return datetime.timedelta(seconds=1)\n elif \"%M\" in directoryPattern:\n return datetime.timedelta(minutes=1)\n elif any(hour in directoryPattern for hour in [\"%H\", \"%I\"]):\n return datetime.timedelta(hours=1)\n elif any(day in directoryPattern for day in [\"%d\", \"%j\"]):\n return datetime.timedelta(days=1)\n elif any(month in directoryPattern for month in [\"%b\",\"%B\",\"%m\"]):\n return datetime.timedelta(days=31)\n elif any(year in directoryPattern for year in [\"%Y\", \"%y\"]):\n return datetime.timedelta(days=365)\n else:\n return None\n except:\n raise\n\n", "path": "sunpy/util/scraper.py"}], "after_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport os\nimport datetime\nimport re\n\nfrom bs4 import BeautifulSoup\nfrom sunpy.extern import six\nfrom sunpy.extern.six.moves import range, zip\nfrom sunpy.extern.six.moves.urllib.request import urlopen\n\n__all__ = ['Scraper']\n\n# regular expressions to convert datetime format\n# added `%e` as for milliseconds `%f/1000`\nTIME_CONVERSIONS = {'%Y': '\\d{4}', '%y': '\\d{2}',\n '%b': '[A-Z][a-z]{2}', '%B': '\\W', '%m': '\\d{2}',\n '%d': '\\d{2}', '%j': '\\d{3}',\n '%H': '\\d{2}', '%I': '\\d{2}',\n '%M': '\\d{2}',\n '%S': '\\d{2}', '%e': '\\d{3}', '%f': '\\d{6}'}\n\nclass Scraper(object):\n \"\"\"\n A Scraper to scrap web data archives based on dates.\n\n Parameters\n ----------\n pattern : string\n A string containing the url with the date encoded as\n datetime formats, and any other parameter as kwargs\n as string format.\n\n Attributes\n ----------\n pattern : string\n A converted string with the kwargs.\n now : datetime.datetime\n The pattern with the actual date.\n\n Examples\n --------\n >>> # Downloading data from SolarMonitor.org\n >>> from sunpy.util.scraper import Scraper\n >>> solmon_pattern = ('http://solarmonitor.org/data/'\n '%Y/%m/%d/fits/{instrument}/'\n '{instrument}_{wave:05d}_fd_%Y%m%d_%H%M%S.fts.gz')\n >>> solmon = Scraper(solmon_pattern, instrument = 'swap', wave = 174)\n >>> print(solmon.pattern)\n http://solarmonitor.org/data/%Y/%m/%d/fits/swap/swap_00174_fd_%Y%m%d_%H%M%S.fts.gz\n >>> print(solmon.now)\n http://solarmonitor.org/data/2012/01/25/fits/swap/swap_00174_fd_20120125_173301.fts.gz\n\n Notes\n -----\n The now attribute does not return an existent file, but just how the\n pattern looks with the actual time.\n \"\"\"\n def __init__(self, pattern, **kwargs):\n self.pattern = pattern.format(**kwargs)\n milliseconds = re.search('\\%e', self.pattern)\n if not milliseconds:\n self.now = datetime.datetime.now().strftime(self.pattern)\n else:\n now = datetime.datetime.now()\n milliseconds_ = int(now.microsecond / 1000.)\n self.now = now.strftime(self.pattern[0:milliseconds.start()] + str(milliseconds_) + self.pattern[milliseconds.end():])\n\n def matches(self, filepath, date):\n return date.strftime(self.pattern) == filepath\n\n def range(self, timerange):\n \"\"\"\n Gets the directories for a certain range of time\n (i.e. 
using `~sunpy.time.TimeRange`).\n\n Parameters\n ----------\n\n timerange : `~sunpy.time.timerange.TimeRange`\n Time interval where to find the directories for a given\n pattern.\n\n Returns\n -------\n\n directories : list of strings\n List of all the possible directories valid for the time\n range given. Notice that these directories may not exist\n in the archive.\n \"\"\"\n #find directory structure - without file names\n directorypattern = os.path.dirname(self.pattern) + '/'\n #TODO what if there's not slashes?\n rangedelta = timerange.dt\n timestep = self._smallerPattern(directorypattern)\n if timestep is None:\n return [directorypattern]\n else:\n # Number of elements in the time range (including end)\n n_steps = rangedelta.total_seconds()/timestep.total_seconds()\n TotalTimeElements = int(round(n_steps)) + 1\n directories = [(timerange.start + n * timestep).strftime(directorypattern)\n for n in range(TotalTimeElements)] #todo if date <= endate\n return directories\n\n def _URL_followsPattern(self, url):\n \"\"\"Check whether the url provided follows the pattern\"\"\"\n pattern = self.pattern\n for k,v in six.iteritems(TIME_CONVERSIONS):\n pattern = pattern.replace(k, v)\n matches = re.match(pattern, url)\n if matches:\n return matches.end() == matches.endpos == len(self.now)\n return False\n\n def _extractDateURL(self, url):\n \"\"\"Extracts the date from a particular url following the pattern\"\"\"\n # url_to_list substitutes '.' and '_' for '/' to then create\n # a list of all the blocks in times - assuming they are all\n # separated with either '.', '_' or '/'\n url_to_list = lambda txt: re.sub(r'\\.|_', '/', txt).split('/')\n pattern_list = url_to_list(self.pattern)\n url_list = url_to_list(url)\n time_order = ['%Y', '%y', '%b', '%B', '%m', '%d', '%j',\n '%H', '%I', '%M', '%S', '%e', '%f']\n final_date = []\n final_pattern = []\n # Find in directory and filename\n for pattern_elem, url_elem in zip(pattern_list, url_list):\n time_formats = [x for x in time_order if x in pattern_elem]\n if len(time_formats) > 0:\n final_date.append(url_elem)\n final_pattern.append(pattern_elem)\n for time_bit in time_formats:\n time_order.remove(time_bit)\n # Find and remove repeated elements eg: %Y in ['%Y', '%Y%m%d']\n # Make all as single strings\n date_together = ''.join(final_date)\n pattern_together = ''.join(final_pattern)\n re_together = pattern_together\n for k, v in six.iteritems(TIME_CONVERSIONS):\n re_together = re_together.replace(k, v)\n\n # Create new empty lists\n final_date = list()\n final_pattern = list()\n re_together = re_together.replace('[A-Z]', '\\\\[A-Z]')\n for p,r in zip(pattern_together.split('%')[1:], re_together.split('\\\\')[1:]):\n if p == 'e':\n continue\n regexp = '\\\\{}'.format(r) if not r.startswith('[') else r\n pattern = '%{}'.format(p)\n date_part = re.search(regexp, date_together)\n date_together = date_together[:date_part.start()] + \\\n date_together[date_part.end():]\n if pattern not in final_pattern:\n final_pattern.append('%{}'.format(p))\n final_date.append(date_part.group())\n return datetime.datetime.strptime(' '.join(final_date),\n ' '.join(final_pattern))\n\n def filelist(self, timerange):\n \"\"\"\n Returns the list of existent files in the archive for the\n given time range.\n\n Parameters\n ----------\n\n timerange : `~sunpy.time.TimeRange`\n Time interval where to find the directories for a given\n pattern.\n\n Returns\n -------\n\n filesurls : list of strings\n List of all the files found between the time range given.\n\n Examples\n 
--------\n >>> from sunpy.time import TimeRange\n >>> timerange = TimeRange('2015-01-01','2015-01-01T16:00:00')\n >>> print(solmon.filelist(timerange))\n ['http://solarmonitor.org/data/2015/01/01/fits/swap/swap_00174_fd_20150101_025423.fts.gz']\n \"\"\"\n directories = self.range(timerange)\n filesurls = []\n for directory in directories:\n try:\n opn = urlopen(directory)\n try:\n soup = BeautifulSoup(opn, \"lxml\")\n for link in soup.find_all(\"a\"):\n href = link.get(\"href\")\n if href.endswith(self.pattern.split('.')[-1]):\n fullpath = directory + href\n if self._URL_followsPattern(fullpath):\n datehref = self._extractDateURL(fullpath)\n if (datehref >= timerange.start and\n datehref <= timerange.end):\n filesurls.append(fullpath)\n finally:\n opn.close()\n except:\n pass\n return filesurls\n\n def _smallerPattern(self, directoryPattern):\n \"\"\"Obtain the smaller time step for the given pattern\"\"\"\n try:\n if \"%S\" in directoryPattern:\n return datetime.timedelta(seconds=1)\n elif \"%M\" in directoryPattern:\n return datetime.timedelta(minutes=1)\n elif any(hour in directoryPattern for hour in [\"%H\", \"%I\"]):\n return datetime.timedelta(hours=1)\n elif any(day in directoryPattern for day in [\"%d\", \"%j\"]):\n return datetime.timedelta(days=1)\n elif any(month in directoryPattern for month in [\"%b\",\"%B\",\"%m\"]):\n return datetime.timedelta(days=31)\n elif any(year in directoryPattern for year in [\"%Y\", \"%y\"]):\n return datetime.timedelta(days=365)\n else:\n return None\n except:\n raise\n\n", "path": "sunpy/util/scraper.py"}]}
| 2,911 | 805 |
gh_patches_debug_3298
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-887
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
utils.convert_tensor considers `device = 0` to be no device
## 🐛 Bug description
In utils.convert_tensor, this line appears:
`return tensor.to(device=device, non_blocking=non_blocking) if device else tensor`
This means that for `device = 0` (as returned from `torch.cuda.current_device`) no conversion is applied, which can be very confusing. I might add a PR for that tomorrow, unless people tell me there's a reason to leave that line as it is.
For reproduction:
```python
import torch
from ignite.utils import convert_tensor
mytens = torch.zeros(2)
device = torch.cuda.current_device()
converted_tens = convert_tensor(mytens, device)
assert converted_tens.device == device
```
## Environment
- PyTorch Version (e.g., 1.4): 1.4
- Ignite Version (e.g., 0.3.0): 0.3
- OS (e.g., Linux): Windows 10
- How you installed Ignite (`conda`, `pip`, source): conda
- Python version: 3.7.6
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/utils.py`
Content:
```
1 import collections.abc as collections
2 import logging
3 from typing import Union, Optional, Callable, Any, Type, Tuple
4
5 import torch
6
7 __all__ = ["convert_tensor", "apply_to_tensor", "apply_to_type", "to_onehot", "setup_logger"]
8
9
10 def convert_tensor(
11 input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes],
12 device: Optional[Union[str, torch.device]] = None,
13 non_blocking: bool = False,
14 ) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:
15 """Move tensors to relevant device."""
16
17 def _func(tensor: torch.Tensor) -> torch.Tensor:
18 return tensor.to(device=device, non_blocking=non_blocking) if device else tensor
19
20 return apply_to_tensor(input_, _func)
21
22
23 def apply_to_tensor(
24 input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes], func: Callable
25 ) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:
26 """Apply a function on a tensor or mapping, or sequence of tensors.
27 """
28 return apply_to_type(input_, torch.Tensor, func)
29
30
31 def apply_to_type(
32 input_: Union[Any, collections.Sequence, collections.Mapping, str, bytes],
33 input_type: Union[Type, Tuple[Type[Any], Any]],
34 func: Callable,
35 ) -> Union[Any, collections.Sequence, collections.Mapping, str, bytes]:
36 """Apply a function on a object of `input_type` or mapping, or sequence of objects of `input_type`.
37 """
38 if isinstance(input_, input_type):
39 return func(input_)
40 elif isinstance(input_, (str, bytes)):
41 return input_
42 elif isinstance(input_, collections.Mapping):
43 return type(input_)({k: apply_to_type(sample, input_type, func) for k, sample in input_.items()})
44 elif isinstance(input_, tuple) and hasattr(input_, "_fields"): # namedtuple
45 return type(input_)(*(apply_to_type(sample, input_type, func) for sample in input_))
46 elif isinstance(input_, collections.Sequence):
47 return type(input_)([apply_to_type(sample, input_type, func) for sample in input_])
48 else:
49 raise TypeError(("input must contain {}, dicts or lists; found {}".format(input_type, type(input_))))
50
51
52 def to_onehot(indices: torch.Tensor, num_classes: int) -> torch.Tensor:
53 """Convert a tensor of indices of any shape `(N, ...)` to a
54 tensor of one-hot indicators of shape `(N, num_classes, ...) and of type uint8. Output's device is equal to the
55 input's device`.
56 """
57 onehot = torch.zeros(indices.shape[0], num_classes, *indices.shape[1:], dtype=torch.uint8, device=indices.device)
58 return onehot.scatter_(1, indices.unsqueeze(1), 1)
59
60
61 def setup_logger(
62 name: str,
63 level: int = logging.INFO,
64 format: str = "%(asctime)s %(name)s %(levelname)s: %(message)s",
65 filepath: Optional[str] = None,
66 distributed_rank: int = 0,
67 ) -> logging.Logger:
68 """Setups logger: name, level, format etc.
69
70 Args:
71 name (str): new name for the logger.
72 level (int): logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG
73 format (str): logging format. By default, `%(asctime)s %(name)s %(levelname)s: %(message)s`
74 filepath (str, optional): Optional logging file path. If not None, logs are written to the file.
75 distributed_rank (int, optional): Optional, rank in distributed configuration to avoid logger setup for workers.
76
77 Returns:
78 logging.Logger
79
80 For example, to improve logs readability when training with a trainer and evaluator:
81
82 .. code-block:: python
83
84 from ignite.utils import setup_logger
85
86 trainer = ...
87 evaluator = ...
88
89 trainer.logger = setup_logger("trainer")
90 evaluator.logger = setup_logger("evaluator")
91
92 trainer.run(data, max_epochs=10)
93
94 # Logs will look like
95 # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.
96 # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23
97 # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.
98 # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02
99 # ...
100
101 """
102 logger = logging.getLogger(name)
103
104 if distributed_rank > 0:
105 return logger
106
107 logger.setLevel(level)
108
109 # Remove previous handlers
110 if logger.hasHandlers():
111 for h in list(logger.handlers):
112 logger.removeHandler(h)
113
114 formatter = logging.Formatter(format)
115
116 ch = logging.StreamHandler()
117 ch.setLevel(level)
118 ch.setFormatter(formatter)
119 logger.addHandler(ch)
120
121 if filepath is not None:
122 fh = logging.FileHandler(filepath)
123 fh.setLevel(level)
124 fh.setFormatter(formatter)
125 logger.addHandler(fh)
126
127 return logger
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/utils.py b/ignite/utils.py
--- a/ignite/utils.py
+++ b/ignite/utils.py
@@ -15,7 +15,7 @@
"""Move tensors to relevant device."""
def _func(tensor: torch.Tensor) -> torch.Tensor:
- return tensor.to(device=device, non_blocking=non_blocking) if device else tensor
+ return tensor.to(device=device, non_blocking=non_blocking) if device is not None else tensor
return apply_to_tensor(input_, _func)
|
{"golden_diff": "diff --git a/ignite/utils.py b/ignite/utils.py\n--- a/ignite/utils.py\n+++ b/ignite/utils.py\n@@ -15,7 +15,7 @@\n \"\"\"Move tensors to relevant device.\"\"\"\n \n def _func(tensor: torch.Tensor) -> torch.Tensor:\n- return tensor.to(device=device, non_blocking=non_blocking) if device else tensor\n+ return tensor.to(device=device, non_blocking=non_blocking) if device is not None else tensor\n \n return apply_to_tensor(input_, _func)\n", "issue": "utils.convert_tensor considers `device = 0` to be no device\n## \ud83d\udc1b Bug description\r\nIn utils.convert_tensor, this line appears:\r\n`return tensor.to(device=device, non_blocking=non_blocking) if device else tensor`\r\n\r\nThis means that for `device = 0` (as returned from `torch.cuda.current_device`) no conversion is applied, which can be very confusing. I might add a PR for that tomorrow, unless people tell me there's a reason to leave that line as it is.\r\n\r\nFor reproduction:\r\n```python\r\nimport torch\r\nfrom ignite.utils import convert_tensor\r\n\r\nmytens = torch.zeros(2)\r\ndevice = torch.cuda.current_device()\r\nconverted_tens = convert_tensor(mytens, device)\r\nassert converted_tens.device == device\r\n```\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4): 1.4\r\n - Ignite Version (e.g., 0.3.0): 0.3\r\n - OS (e.g., Linux): Windows 10\r\n - How you installed Ignite (`conda`, `pip`, source): conda\r\n - Python version: 3.7.6\r\n\r\n\n", "before_files": [{"content": "import collections.abc as collections\nimport logging\nfrom typing import Union, Optional, Callable, Any, Type, Tuple\n\nimport torch\n\n__all__ = [\"convert_tensor\", \"apply_to_tensor\", \"apply_to_type\", \"to_onehot\", \"setup_logger\"]\n\n\ndef convert_tensor(\n input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes],\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Move tensors to relevant device.\"\"\"\n\n def _func(tensor: torch.Tensor) -> torch.Tensor:\n return tensor.to(device=device, non_blocking=non_blocking) if device else tensor\n\n return apply_to_tensor(input_, _func)\n\n\ndef apply_to_tensor(\n input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes], func: Callable\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a tensor or mapping, or sequence of tensors.\n \"\"\"\n return apply_to_type(input_, torch.Tensor, func)\n\n\ndef apply_to_type(\n input_: Union[Any, collections.Sequence, collections.Mapping, str, bytes],\n input_type: Union[Type, Tuple[Type[Any], Any]],\n func: Callable,\n) -> Union[Any, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a object of `input_type` or mapping, or sequence of objects of `input_type`.\n \"\"\"\n if isinstance(input_, input_type):\n return func(input_)\n elif isinstance(input_, (str, bytes)):\n return input_\n elif isinstance(input_, collections.Mapping):\n return type(input_)({k: apply_to_type(sample, input_type, func) for k, sample in input_.items()})\n elif isinstance(input_, tuple) and hasattr(input_, \"_fields\"): # namedtuple\n return type(input_)(*(apply_to_type(sample, input_type, func) for sample in input_))\n elif isinstance(input_, collections.Sequence):\n return type(input_)([apply_to_type(sample, input_type, func) for sample in input_])\n else:\n raise TypeError((\"input must contain {}, dicts or lists; 
found {}\".format(input_type, type(input_))))\n\n\ndef to_onehot(indices: torch.Tensor, num_classes: int) -> torch.Tensor:\n \"\"\"Convert a tensor of indices of any shape `(N, ...)` to a\n tensor of one-hot indicators of shape `(N, num_classes, ...) and of type uint8. Output's device is equal to the\n input's device`.\n \"\"\"\n onehot = torch.zeros(indices.shape[0], num_classes, *indices.shape[1:], dtype=torch.uint8, device=indices.device)\n return onehot.scatter_(1, indices.unsqueeze(1), 1)\n\n\ndef setup_logger(\n name: str,\n level: int = logging.INFO,\n format: str = \"%(asctime)s %(name)s %(levelname)s: %(message)s\",\n filepath: Optional[str] = None,\n distributed_rank: int = 0,\n) -> logging.Logger:\n \"\"\"Setups logger: name, level, format etc.\n\n Args:\n name (str): new name for the logger.\n level (int): logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG\n format (str): logging format. By default, `%(asctime)s %(name)s %(levelname)s: %(message)s`\n filepath (str, optional): Optional logging file path. If not None, logs are written to the file.\n distributed_rank (int, optional): Optional, rank in distributed configuration to avoid logger setup for workers.\n\n Returns:\n logging.Logger\n\n For example, to improve logs readability when training with a trainer and evaluator:\n\n .. code-block:: python\n\n from ignite.utils import setup_logger\n\n trainer = ...\n evaluator = ...\n\n trainer.logger = setup_logger(\"trainer\")\n evaluator.logger = setup_logger(\"evaluator\")\n\n trainer.run(data, max_epochs=10)\n\n # Logs will look like\n # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.\n # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23\n # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.\n # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. 
Time taken: 00:01:02\n # ...\n\n \"\"\"\n logger = logging.getLogger(name)\n\n if distributed_rank > 0:\n return logger\n\n logger.setLevel(level)\n\n # Remove previous handlers\n if logger.hasHandlers():\n for h in list(logger.handlers):\n logger.removeHandler(h)\n\n formatter = logging.Formatter(format)\n\n ch = logging.StreamHandler()\n ch.setLevel(level)\n ch.setFormatter(formatter)\n logger.addHandler(ch)\n\n if filepath is not None:\n fh = logging.FileHandler(filepath)\n fh.setLevel(level)\n fh.setFormatter(formatter)\n logger.addHandler(fh)\n\n return logger\n", "path": "ignite/utils.py"}], "after_files": [{"content": "import collections.abc as collections\nimport logging\nfrom typing import Union, Optional, Callable, Any, Type, Tuple\n\nimport torch\n\n__all__ = [\"convert_tensor\", \"apply_to_tensor\", \"apply_to_type\", \"to_onehot\", \"setup_logger\"]\n\n\ndef convert_tensor(\n input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes],\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Move tensors to relevant device.\"\"\"\n\n def _func(tensor: torch.Tensor) -> torch.Tensor:\n return tensor.to(device=device, non_blocking=non_blocking) if device is not None else tensor\n\n return apply_to_tensor(input_, _func)\n\n\ndef apply_to_tensor(\n input_: Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes], func: Callable\n) -> Union[torch.Tensor, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a tensor or mapping, or sequence of tensors.\n \"\"\"\n return apply_to_type(input_, torch.Tensor, func)\n\n\ndef apply_to_type(\n input_: Union[Any, collections.Sequence, collections.Mapping, str, bytes],\n input_type: Union[Type, Tuple[Type[Any], Any]],\n func: Callable,\n) -> Union[Any, collections.Sequence, collections.Mapping, str, bytes]:\n \"\"\"Apply a function on a object of `input_type` or mapping, or sequence of objects of `input_type`.\n \"\"\"\n if isinstance(input_, input_type):\n return func(input_)\n elif isinstance(input_, (str, bytes)):\n return input_\n elif isinstance(input_, collections.Mapping):\n return type(input_)({k: apply_to_type(sample, input_type, func) for k, sample in input_.items()})\n elif isinstance(input_, tuple) and hasattr(input_, \"_fields\"): # namedtuple\n return type(input_)(*(apply_to_type(sample, input_type, func) for sample in input_))\n elif isinstance(input_, collections.Sequence):\n return type(input_)([apply_to_type(sample, input_type, func) for sample in input_])\n else:\n raise TypeError((\"input must contain {}, dicts or lists; found {}\".format(input_type, type(input_))))\n\n\ndef to_onehot(indices: torch.Tensor, num_classes: int) -> torch.Tensor:\n \"\"\"Convert a tensor of indices of any shape `(N, ...)` to a\n tensor of one-hot indicators of shape `(N, num_classes, ...) and of type uint8. 
Output's device is equal to the\n input's device`.\n \"\"\"\n onehot = torch.zeros(indices.shape[0], num_classes, *indices.shape[1:], dtype=torch.uint8, device=indices.device)\n return onehot.scatter_(1, indices.unsqueeze(1), 1)\n\n\ndef setup_logger(\n name: str,\n level: int = logging.INFO,\n format: str = \"%(asctime)s %(name)s %(levelname)s: %(message)s\",\n filepath: Optional[str] = None,\n distributed_rank: int = 0,\n) -> logging.Logger:\n \"\"\"Setups logger: name, level, format etc.\n\n Args:\n name (str): new name for the logger.\n level (int): logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG\n format (str): logging format. By default, `%(asctime)s %(name)s %(levelname)s: %(message)s`\n filepath (str, optional): Optional logging file path. If not None, logs are written to the file.\n distributed_rank (int, optional): Optional, rank in distributed configuration to avoid logger setup for workers.\n\n Returns:\n logging.Logger\n\n For example, to improve logs readability when training with a trainer and evaluator:\n\n .. code-block:: python\n\n from ignite.utils import setup_logger\n\n trainer = ...\n evaluator = ...\n\n trainer.logger = setup_logger(\"trainer\")\n evaluator.logger = setup_logger(\"evaluator\")\n\n trainer.run(data, max_epochs=10)\n\n # Logs will look like\n # 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=5.\n # 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:5:23\n # 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.\n # 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02\n # ...\n\n \"\"\"\n logger = logging.getLogger(name)\n\n if distributed_rank > 0:\n return logger\n\n logger.setLevel(level)\n\n # Remove previous handlers\n if logger.hasHandlers():\n for h in list(logger.handlers):\n logger.removeHandler(h)\n\n formatter = logging.Formatter(format)\n\n ch = logging.StreamHandler()\n ch.setLevel(level)\n ch.setFormatter(formatter)\n logger.addHandler(ch)\n\n if filepath is not None:\n fh = logging.FileHandler(filepath)\n fh.setLevel(level)\n fh.setFormatter(formatter)\n logger.addHandler(fh)\n\n return logger\n", "path": "ignite/utils.py"}]}
| 1,961 | 113 |
gh_patches_debug_27528
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__pytorch-lightning-2356
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Trainer(precision=16) fails with optim.lr_scheduler.ReduceLROnPlateau
<!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
Steps to reproduce the behavior:
1. Create a `pl.LightningModule` that returns your optimizer along with a `optim.lr_scheduler.ReduceLROnPlateau` scheduler from `configure_optimizers`
2. Create a `pl.Trainer` with `precision=16`
3. Run your training (i.e., `trainer.fit(model)`)
4. See error
```console
Traceback (most recent call last):
File "main.py", line 65, in <module>
main()
File "main.py", line 61, in main
trainer.fit(model)
File "/workspace/pytorch-lightning/pytorch_lightning/trainer/trainer.py", line 889, in fit
self.dp_train(model)
File "/workspace/pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py", line 223, in dp_train
self.reinit_scheduler_properties(optimizers, self.lr_schedulers)
File "/workspace/pytorch-lightning/pytorch_lightning/trainer/optimizers.py", line 122, in reinit_scheduler_properties
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
UnboundLocalError: local variable 'idx' referenced before assignment
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
<!-- #### Code sample -->
<!-- Ideally attach a minimal code sample to reproduce the decried issue.
Minimal means having the shortest code but still preserving the bug. -->
<!-- ### Expected behavior -->
<!-- A clear and concise description of what you expected to happen. -->
<!-- ### Environment
Please copy and paste the output from our
[environment collection script](https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py)
(or fill out the checklist below manually).
You can get the script and run it with:
```
wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
# For security purposes, please check the contents of collect_env_details.py before running it.
python collect_env_details.py
```
- PyTorch Version (1.5):
- OS (Linux):
### Additional context
-->
<!-- Add any other context about the problem here. -->
The error occurs in `pytorch-lightning/pytorch_lightning/trainer/optimizers.py", line 122`.
```python
def reinit_scheduler_properties(self, optimizers: list, schedulers: list):
# Reinitialize optimizer.step properties added by schedulers
for scheduler in schedulers:
for optimizer in optimizers:
scheduler = scheduler['scheduler']
# check that we dont mix users optimizers and schedulers
if scheduler.optimizer == optimizer:
# Find the mro belonging to the base lr scheduler class
for i, mro in enumerate(scheduler.__class__.__mro__):
if mro == optim.lr_scheduler._LRScheduler:
idx = i
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
```
The `idx` local variable is unassigned because `optim.lr_scheduler.ReduceLROnPlateau` is not a subclass of `optim.lr_scheduler._LRScheduler`.
I could work around the error by adding a specific check for `optim.lr_scheduler.ReduceLROnPlateau` but I'm not sure if this is a good solution.
```python
def reinit_scheduler_properties(self, optimizers: list, schedulers: list):
# Reinitialize optimizer.step properties added by schedulers
for scheduler in schedulers:
for optimizer in optimizers:
scheduler = scheduler['scheduler']
# check that we dont mix users optimizers and schedulers
if scheduler.optimizer == optimizer:
# Find the mro belonging to the base lr scheduler class
for i, mro in enumerate(scheduler.__class__.__mro__):
if mro == optim.lr_scheduler._LRScheduler:
idx = i
elif mro == optim.lr_scheduler.ReduceLROnPlateau:
idx = i
scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
```
### Related issue in PyTorch:
ReduceLROnPlateau parent class is not _LRScheduler #21981
https://github.com/pytorch/pytorch/issues/21981
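
For illustration, a torch-free sketch of why `idx` is never bound (the classes below are stand-ins, not the real torch schedulers; the point is that `ReduceLROnPlateau` has no `_LRScheduler` in its MRO in this PyTorch version):

```python
class _LRScheduler: ...
class StepLR(_LRScheduler): ...        # ordinary schedulers inherit from _LRScheduler
class ReduceLROnPlateau: ...           # this one does not (it only subclasses object)

def find_base_index(scheduler_cls):
    for i, mro in enumerate(scheduler_cls.__mro__):
        if mro is _LRScheduler:
            idx = i
    return idx                         # unbound when the loop never matched

print(find_base_index(StepLR))         # 1
try:
    find_base_index(ReduceLROnPlateau)
except UnboundLocalError as exc:
    print(exc)                         # same UnboundLocalError as in the traceback above
```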
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/trainer/optimizers.py`
Content:
```
1 from abc import ABC
2 from typing import List, Tuple
3
4 import torch
5 from torch import optim
6 from torch.optim.optimizer import Optimizer
7
8 from pytorch_lightning.core.lightning import LightningModule
9 from pytorch_lightning.utilities import rank_zero_warn
10
11
12 class TrainerOptimizersMixin(ABC):
13
14 def init_optimizers(
15 self,
16 model: LightningModule
17 ) -> Tuple[List, List, List]:
18 optim_conf = model.configure_optimizers()
19
20 if optim_conf is None:
21 rank_zero_warn('`LightningModule.configure_optimizers` returned `None`, '
22 'this fit will run with no optimizer', UserWarning)
23 optim_conf = _MockOptimizer()
24
25 # single output, single optimizer
26 if isinstance(optim_conf, Optimizer):
27 return [optim_conf], [], []
28
29 # two lists, optimizer + lr schedulers
30 elif isinstance(optim_conf, (list, tuple)) and len(optim_conf) == 2 \
31 and isinstance(optim_conf[0], list):
32 optimizers, lr_schedulers = optim_conf
33 lr_schedulers = self.configure_schedulers(lr_schedulers)
34 return optimizers, lr_schedulers, []
35
36 # single dictionary
37 elif isinstance(optim_conf, dict):
38 optimizer = optim_conf["optimizer"]
39 lr_scheduler = optim_conf.get("lr_scheduler", [])
40 if lr_scheduler:
41 lr_schedulers = self.configure_schedulers([lr_scheduler])
42 else:
43 lr_schedulers = []
44 return [optimizer], lr_schedulers, []
45
46 # multiple dictionaries
47 elif isinstance(optim_conf, (list, tuple)) and isinstance(optim_conf[0], dict):
48 optimizers = [opt_dict["optimizer"] for opt_dict in optim_conf]
49 # take only lr wif exists and ot they are defined - not None
50 lr_schedulers = [
51 opt_dict["lr_scheduler"] for opt_dict in optim_conf if opt_dict.get("lr_scheduler")
52 ]
53 # take only freq wif exists and ot they are defined - not None
54 optimizer_frequencies = [
55 opt_dict["frequency"] for opt_dict in optim_conf if opt_dict.get("frequency") is not None
56 ]
57
58 # clean scheduler list
59 if lr_schedulers:
60 lr_schedulers = self.configure_schedulers(lr_schedulers)
61 # assert that if frequencies are present, they are given for all optimizers
62 if optimizer_frequencies and len(optimizer_frequencies) != len(optimizers):
63 raise ValueError("A frequency must be given to each optimizer.")
64 return optimizers, lr_schedulers, optimizer_frequencies
65
66 # single list or tuple, multiple optimizer
67 elif isinstance(optim_conf, (list, tuple)):
68 return list(optim_conf), [], []
69
70 # unknown configuration
71 else:
72 raise ValueError(
73 'Unknown configuration for model optimizers.'
74 ' Output from `model.configure_optimizers()` should either be:'
75 ' * single output, single `torch.optim.Optimizer`'
76 ' * single output, list of `torch.optim.Optimizer`'
77 ' * single output, a dictionary with `optimizer` key (`torch.optim.Optimizer`)'
78 ' and an optional `lr_scheduler` key (`torch.optim.lr_scheduler`)'
79 ' * two outputs, first being a list of `torch.optim.Optimizer` second being'
80 ' a list of `torch.optim.lr_scheduler`'
81 ' * multiple outputs, dictionaries as described with an optional `frequency` key (int)')
82
83 def configure_schedulers(self, schedulers: list):
84 # Convert each scheduler into dict structure with relevant information
85 lr_schedulers = []
86 default_config = {'interval': 'epoch', # default every epoch
87 'frequency': 1, # default every epoch/batch
88 'reduce_on_plateau': False, # most often not ReduceLROnPlateau scheduler
89 'monitor': 'val_loss'} # default value to monitor for ReduceLROnPlateau
90 for scheduler in schedulers:
91 if isinstance(scheduler, dict):
92 if 'scheduler' not in scheduler:
93 raise ValueError('Lr scheduler should have key `scheduler`',
94 ' with item being a lr scheduler')
95 scheduler['reduce_on_plateau'] = isinstance(
96 scheduler['scheduler'], optim.lr_scheduler.ReduceLROnPlateau)
97
98 lr_schedulers.append({**default_config, **scheduler})
99
100 elif isinstance(scheduler, optim.lr_scheduler.ReduceLROnPlateau):
101 lr_schedulers.append({**default_config, 'scheduler': scheduler,
102 'reduce_on_plateau': True})
103
104 elif isinstance(scheduler, optim.lr_scheduler._LRScheduler):
105 lr_schedulers.append({**default_config, 'scheduler': scheduler})
106 else:
107 raise ValueError(f'Input {scheduler} to lr schedulers '
108 'is a invalid input.')
109 return lr_schedulers
110
111 def reinit_scheduler_properties(self, optimizers: list, schedulers: list):
112 # Reinitialize optimizer.step properties added by schedulers
113 for scheduler in schedulers:
114 for optimizer in optimizers:
115 scheduler = scheduler['scheduler']
116 # check that we dont mix users optimizers and schedulers
117 if scheduler.optimizer == optimizer:
118 # Find the mro belonging to the base lr scheduler class
119 for i, mro in enumerate(scheduler.__class__.__mro__):
120 if mro == optim.lr_scheduler._LRScheduler:
121 idx = i
122 scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
123
124
125 class _MockOptimizer(Optimizer):
126 """The `_MockOptimizer` will be used inplace of an optimizer in the event that `None`
127 is returned from `configure_optimizers`.
128 """
129
130 def __init__(self):
131 super().__init__([torch.zeros(1)], {})
132
133 def add_param_group(self, param_group):
134 pass # Do Nothing
135
136 def load_state_dict(self, state_dict):
137 pass # Do Nothing
138
139 def state_dict(self):
140 return {} # Return Empty
141
142 def step(self, closure=None):
143 if closure is not None:
144 closure()
145
146 def zero_grad(self):
147 pass # Do Nothing
148
149 def __repr__(self):
150 return 'No Optimizer'
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pytorch_lightning/trainer/optimizers.py b/pytorch_lightning/trainer/optimizers.py
--- a/pytorch_lightning/trainer/optimizers.py
+++ b/pytorch_lightning/trainer/optimizers.py
@@ -111,15 +111,25 @@
def reinit_scheduler_properties(self, optimizers: list, schedulers: list):
# Reinitialize optimizer.step properties added by schedulers
for scheduler in schedulers:
+ scheduler = scheduler['scheduler']
+
for optimizer in optimizers:
- scheduler = scheduler['scheduler']
# check that we dont mix users optimizers and schedulers
if scheduler.optimizer == optimizer:
# Find the mro belonging to the base lr scheduler class
for i, mro in enumerate(scheduler.__class__.__mro__):
- if mro == optim.lr_scheduler._LRScheduler:
+ if (
+ mro == optim.lr_scheduler._LRScheduler
+ or mro == optim.lr_scheduler.ReduceLROnPlateau
+ ):
idx = i
- scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
+ state = scheduler.state_dict()
+ else:
+ state = None
+
+ scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)
+ if state is not None:
+ scheduler.load_state_dict(state)
class _MockOptimizer(Optimizer):
|
{"golden_diff": "diff --git a/pytorch_lightning/trainer/optimizers.py b/pytorch_lightning/trainer/optimizers.py\n--- a/pytorch_lightning/trainer/optimizers.py\n+++ b/pytorch_lightning/trainer/optimizers.py\n@@ -111,15 +111,25 @@\n def reinit_scheduler_properties(self, optimizers: list, schedulers: list):\n # Reinitialize optimizer.step properties added by schedulers\n for scheduler in schedulers:\n+ scheduler = scheduler['scheduler']\n+\n for optimizer in optimizers:\n- scheduler = scheduler['scheduler']\n # check that we dont mix users optimizers and schedulers\n if scheduler.optimizer == optimizer:\n # Find the mro belonging to the base lr scheduler class\n for i, mro in enumerate(scheduler.__class__.__mro__):\n- if mro == optim.lr_scheduler._LRScheduler:\n+ if (\n+ mro == optim.lr_scheduler._LRScheduler\n+ or mro == optim.lr_scheduler.ReduceLROnPlateau\n+ ):\n idx = i\n- scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)\n+ state = scheduler.state_dict()\n+ else:\n+ state = None\n+\n+ scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)\n+ if state is not None:\n+ scheduler.load_state_dict(state)\n \n \n class _MockOptimizer(Optimizer):\n", "issue": "Trainer(precision=16) fails with optim.lr_scheduler.ReduceLROnPlateau\n<!-- \r\n### Common bugs:\r\n1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79). \r\n2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq) \r\n-->\r\n\r\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n### To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Create a `pl.LightningModule` that returns your optimizer along with a `optim.lr_scheduler.ReduceLROnPlateau` scheduler from `configure_optimizers`\r\n2. Create a `pl.Trainer` wit `precision=16`\r\n3. Run your training (i.e., `trainer.fit(model)`)\r\n4. See error\r\n\r\n```console\r\nTraceback (most recent call last): \r\n File \"main.py\", line 65, in <module> \r\n main() \r\n File \"main.py\", line 61, in main \r\n trainer.fit(model) \r\n File \"/workspace/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 889, in fit \r\n self.dp_train(model) \r\n File \"/workspace/pytorch-lightning/pytorch_lightning/trainer/distrib_parts.py\", line 223, in dp_train \r\n self.reinit_scheduler_properties(optimizers, self.lr_schedulers) \r\n File \"/workspace/pytorch-lightning/pytorch_lightning/trainer/optimizers.py\", line 122, in reinit_scheduler_properties \r\n scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer) \r\nUnboundLocalError: local variable 'idx' referenced before assignment \r\n```\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n\r\n<!-- #### Code sample -->\r\n<!-- Ideally attach a minimal code sample to reproduce the decried issue. \r\nMinimal means having the shortest code but still preserving the bug. -->\r\n\r\n<!-- ### Expected behavior -->\r\n\r\n<!-- A clear and concise description of what you expected to happen. 
-->\r\n\r\n<!-- ### Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py\r\n# For security purposes, please check the contents of collect_env_details.py before running it.\r\npython collect_env_details.py\r\n```\r\n - PyTorch Version (1.5):\r\n - OS (Linux):\r\n\r\n### Additional context\r\n-->\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\r\nThe error occurs in `pytorch-lightning/pytorch_lightning/trainer/optimizers.py\", line 122`.\r\n\r\n```python\r\ndef reinit_scheduler_properties(self, optimizers: list, schedulers: list):\r\n # Reinitialize optimizer.step properties added by schedulers\r\n for scheduler in schedulers:\r\n for optimizer in optimizers:\r\n scheduler = scheduler['scheduler']\r\n # check that we dont mix users optimizers and schedulers\r\n if scheduler.optimizer == optimizer:\r\n # Find the mro belonging to the base lr scheduler class\r\n for i, mro in enumerate(scheduler.__class__.__mro__):\r\n if mro == optim.lr_scheduler._LRScheduler:\r\n idx = i\r\n scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)\r\n```\r\nThe `idx` local variable is unassigned because `optim.lr_scheduler.ReduceLROnPlateau` is not a subclass of `optim.lr_scheduler._LRScheduler`.\r\n\r\nI could work around the error by adding a specific check for `optim.lr_scheduler.ReduceLROnPlateau` but I'm not sure if this is a good solution.\r\n\r\n```python\r\ndef reinit_scheduler_properties(self, optimizers: list, schedulers: list):\r\n # Reinitialize optimizer.step properties added by schedulers\r\n for scheduler in schedulers:\r\n for optimizer in optimizers:\r\n scheduler = scheduler['scheduler']\r\n # check that we dont mix users optimizers and schedulers\r\n if scheduler.optimizer == optimizer:\r\n # Find the mro belonging to the base lr scheduler class\r\n for i, mro in enumerate(scheduler.__class__.__mro__):\r\n if mro == optim.lr_scheduler._LRScheduler:\r\n idx = i\r\n elif mro == optim.lr_scheduler.ReduceLROnPlateau:\r\n idx = i\r\n scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)\r\n```\r\n\r\n### Related issue in PyTorch:\r\nReduceLROnPlateau parent class is not _LRScheduler #21981\r\nhttps://github.com/pytorch/pytorch/issues/21981\n", "before_files": [{"content": "from abc import ABC\nfrom typing import List, Tuple\n\nimport torch\nfrom torch import optim\nfrom torch.optim.optimizer import Optimizer\n\nfrom pytorch_lightning.core.lightning import LightningModule\nfrom pytorch_lightning.utilities import rank_zero_warn\n\n\nclass TrainerOptimizersMixin(ABC):\n\n def init_optimizers(\n self,\n model: LightningModule\n ) -> Tuple[List, List, List]:\n optim_conf = model.configure_optimizers()\n\n if optim_conf is None:\n rank_zero_warn('`LightningModule.configure_optimizers` returned `None`, '\n 'this fit will run with no optimizer', UserWarning)\n optim_conf = _MockOptimizer()\n\n # single output, single optimizer\n if isinstance(optim_conf, Optimizer):\n return [optim_conf], [], []\n\n # two lists, optimizer + lr schedulers\n elif isinstance(optim_conf, (list, tuple)) and len(optim_conf) == 2 \\\n and isinstance(optim_conf[0], list):\n optimizers, lr_schedulers = optim_conf\n lr_schedulers = 
self.configure_schedulers(lr_schedulers)\n return optimizers, lr_schedulers, []\n\n # single dictionary\n elif isinstance(optim_conf, dict):\n optimizer = optim_conf[\"optimizer\"]\n lr_scheduler = optim_conf.get(\"lr_scheduler\", [])\n if lr_scheduler:\n lr_schedulers = self.configure_schedulers([lr_scheduler])\n else:\n lr_schedulers = []\n return [optimizer], lr_schedulers, []\n\n # multiple dictionaries\n elif isinstance(optim_conf, (list, tuple)) and isinstance(optim_conf[0], dict):\n optimizers = [opt_dict[\"optimizer\"] for opt_dict in optim_conf]\n # take only lr wif exists and ot they are defined - not None\n lr_schedulers = [\n opt_dict[\"lr_scheduler\"] for opt_dict in optim_conf if opt_dict.get(\"lr_scheduler\")\n ]\n # take only freq wif exists and ot they are defined - not None\n optimizer_frequencies = [\n opt_dict[\"frequency\"] for opt_dict in optim_conf if opt_dict.get(\"frequency\") is not None\n ]\n\n # clean scheduler list\n if lr_schedulers:\n lr_schedulers = self.configure_schedulers(lr_schedulers)\n # assert that if frequencies are present, they are given for all optimizers\n if optimizer_frequencies and len(optimizer_frequencies) != len(optimizers):\n raise ValueError(\"A frequency must be given to each optimizer.\")\n return optimizers, lr_schedulers, optimizer_frequencies\n\n # single list or tuple, multiple optimizer\n elif isinstance(optim_conf, (list, tuple)):\n return list(optim_conf), [], []\n\n # unknown configuration\n else:\n raise ValueError(\n 'Unknown configuration for model optimizers.'\n ' Output from `model.configure_optimizers()` should either be:'\n ' * single output, single `torch.optim.Optimizer`'\n ' * single output, list of `torch.optim.Optimizer`'\n ' * single output, a dictionary with `optimizer` key (`torch.optim.Optimizer`)'\n ' and an optional `lr_scheduler` key (`torch.optim.lr_scheduler`)'\n ' * two outputs, first being a list of `torch.optim.Optimizer` second being'\n ' a list of `torch.optim.lr_scheduler`'\n ' * multiple outputs, dictionaries as described with an optional `frequency` key (int)')\n\n def configure_schedulers(self, schedulers: list):\n # Convert each scheduler into dict structure with relevant information\n lr_schedulers = []\n default_config = {'interval': 'epoch', # default every epoch\n 'frequency': 1, # default every epoch/batch\n 'reduce_on_plateau': False, # most often not ReduceLROnPlateau scheduler\n 'monitor': 'val_loss'} # default value to monitor for ReduceLROnPlateau\n for scheduler in schedulers:\n if isinstance(scheduler, dict):\n if 'scheduler' not in scheduler:\n raise ValueError('Lr scheduler should have key `scheduler`',\n ' with item being a lr scheduler')\n scheduler['reduce_on_plateau'] = isinstance(\n scheduler['scheduler'], optim.lr_scheduler.ReduceLROnPlateau)\n\n lr_schedulers.append({**default_config, **scheduler})\n\n elif isinstance(scheduler, optim.lr_scheduler.ReduceLROnPlateau):\n lr_schedulers.append({**default_config, 'scheduler': scheduler,\n 'reduce_on_plateau': True})\n\n elif isinstance(scheduler, optim.lr_scheduler._LRScheduler):\n lr_schedulers.append({**default_config, 'scheduler': scheduler})\n else:\n raise ValueError(f'Input {scheduler} to lr schedulers '\n 'is a invalid input.')\n return lr_schedulers\n\n def reinit_scheduler_properties(self, optimizers: list, schedulers: list):\n # Reinitialize optimizer.step properties added by schedulers\n for scheduler in schedulers:\n for optimizer in optimizers:\n scheduler = scheduler['scheduler']\n # check that we dont mix users 
optimizers and schedulers\n if scheduler.optimizer == optimizer:\n # Find the mro belonging to the base lr scheduler class\n for i, mro in enumerate(scheduler.__class__.__mro__):\n if mro == optim.lr_scheduler._LRScheduler:\n idx = i\n scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)\n\n\nclass _MockOptimizer(Optimizer):\n \"\"\"The `_MockOptimizer` will be used inplace of an optimizer in the event that `None`\n is returned from `configure_optimizers`.\n \"\"\"\n\n def __init__(self):\n super().__init__([torch.zeros(1)], {})\n\n def add_param_group(self, param_group):\n pass # Do Nothing\n\n def load_state_dict(self, state_dict):\n pass # Do Nothing\n\n def state_dict(self):\n return {} # Return Empty\n\n def step(self, closure=None):\n if closure is not None:\n closure()\n\n def zero_grad(self):\n pass # Do Nothing\n\n def __repr__(self):\n return 'No Optimizer'\n", "path": "pytorch_lightning/trainer/optimizers.py"}], "after_files": [{"content": "from abc import ABC\nfrom typing import List, Tuple\n\nimport torch\nfrom torch import optim\nfrom torch.optim.optimizer import Optimizer\n\nfrom pytorch_lightning.core.lightning import LightningModule\nfrom pytorch_lightning.utilities import rank_zero_warn\n\n\nclass TrainerOptimizersMixin(ABC):\n\n def init_optimizers(\n self,\n model: LightningModule\n ) -> Tuple[List, List, List]:\n optim_conf = model.configure_optimizers()\n\n if optim_conf is None:\n rank_zero_warn('`LightningModule.configure_optimizers` returned `None`, '\n 'this fit will run with no optimizer', UserWarning)\n optim_conf = _MockOptimizer()\n\n # single output, single optimizer\n if isinstance(optim_conf, Optimizer):\n return [optim_conf], [], []\n\n # two lists, optimizer + lr schedulers\n elif isinstance(optim_conf, (list, tuple)) and len(optim_conf) == 2 \\\n and isinstance(optim_conf[0], list):\n optimizers, lr_schedulers = optim_conf\n lr_schedulers = self.configure_schedulers(lr_schedulers)\n return optimizers, lr_schedulers, []\n\n # single dictionary\n elif isinstance(optim_conf, dict):\n optimizer = optim_conf[\"optimizer\"]\n lr_scheduler = optim_conf.get(\"lr_scheduler\", [])\n if lr_scheduler:\n lr_schedulers = self.configure_schedulers([lr_scheduler])\n else:\n lr_schedulers = []\n return [optimizer], lr_schedulers, []\n\n # multiple dictionaries\n elif isinstance(optim_conf, (list, tuple)) and isinstance(optim_conf[0], dict):\n optimizers = [opt_dict[\"optimizer\"] for opt_dict in optim_conf]\n # take only lr wif exists and ot they are defined - not None\n lr_schedulers = [\n opt_dict[\"lr_scheduler\"] for opt_dict in optim_conf if opt_dict.get(\"lr_scheduler\")\n ]\n # take only freq wif exists and ot they are defined - not None\n optimizer_frequencies = [\n opt_dict[\"frequency\"] for opt_dict in optim_conf if opt_dict.get(\"frequency\") is not None\n ]\n\n # clean scheduler list\n if lr_schedulers:\n lr_schedulers = self.configure_schedulers(lr_schedulers)\n # assert that if frequencies are present, they are given for all optimizers\n if optimizer_frequencies and len(optimizer_frequencies) != len(optimizers):\n raise ValueError(\"A frequency must be given to each optimizer.\")\n return optimizers, lr_schedulers, optimizer_frequencies\n\n # single list or tuple, multiple optimizer\n elif isinstance(optim_conf, (list, tuple)):\n return list(optim_conf), [], []\n\n # unknown configuration\n else:\n raise ValueError(\n 'Unknown configuration for model optimizers.'\n ' Output from `model.configure_optimizers()` should either be:'\n ' * single 
output, single `torch.optim.Optimizer`'\n ' * single output, list of `torch.optim.Optimizer`'\n ' * single output, a dictionary with `optimizer` key (`torch.optim.Optimizer`)'\n ' and an optional `lr_scheduler` key (`torch.optim.lr_scheduler`)'\n ' * two outputs, first being a list of `torch.optim.Optimizer` second being'\n ' a list of `torch.optim.lr_scheduler`'\n ' * multiple outputs, dictionaries as described with an optional `frequency` key (int)')\n\n def configure_schedulers(self, schedulers: list):\n # Convert each scheduler into dict structure with relevant information\n lr_schedulers = []\n default_config = {'interval': 'epoch', # default every epoch\n 'frequency': 1, # default every epoch/batch\n 'reduce_on_plateau': False, # most often not ReduceLROnPlateau scheduler\n 'monitor': 'val_loss'} # default value to monitor for ReduceLROnPlateau\n for scheduler in schedulers:\n if isinstance(scheduler, dict):\n if 'scheduler' not in scheduler:\n raise ValueError('Lr scheduler should have key `scheduler`',\n ' with item being a lr scheduler')\n scheduler['reduce_on_plateau'] = isinstance(\n scheduler['scheduler'], optim.lr_scheduler.ReduceLROnPlateau)\n\n lr_schedulers.append({**default_config, **scheduler})\n\n elif isinstance(scheduler, optim.lr_scheduler.ReduceLROnPlateau):\n lr_schedulers.append({**default_config, 'scheduler': scheduler,\n 'reduce_on_plateau': True})\n\n elif isinstance(scheduler, optim.lr_scheduler._LRScheduler):\n lr_schedulers.append({**default_config, 'scheduler': scheduler})\n else:\n raise ValueError(f'Input {scheduler} to lr schedulers '\n 'is a invalid input.')\n return lr_schedulers\n\n def reinit_scheduler_properties(self, optimizers: list, schedulers: list):\n # Reinitialize optimizer.step properties added by schedulers\n for scheduler in schedulers:\n scheduler = scheduler['scheduler']\n\n for optimizer in optimizers:\n # check that we dont mix users optimizers and schedulers\n if scheduler.optimizer == optimizer:\n # Find the mro belonging to the base lr scheduler class\n for i, mro in enumerate(scheduler.__class__.__mro__):\n if (\n mro == optim.lr_scheduler._LRScheduler\n or mro == optim.lr_scheduler.ReduceLROnPlateau\n ):\n idx = i\n state = scheduler.state_dict()\n else:\n state = None\n\n scheduler.__class__.__mro__[idx].__init__(scheduler, optimizer)\n if state is not None:\n scheduler.load_state_dict(state)\n\n\nclass _MockOptimizer(Optimizer):\n \"\"\"The `_MockOptimizer` will be used inplace of an optimizer in the event that `None`\n is returned from `configure_optimizers`.\n \"\"\"\n\n def __init__(self):\n super().__init__([torch.zeros(1)], {})\n\n def add_param_group(self, param_group):\n pass # Do Nothing\n\n def load_state_dict(self, state_dict):\n pass # Do Nothing\n\n def state_dict(self):\n return {} # Return Empty\n\n def step(self, closure=None):\n if closure is not None:\n closure()\n\n def zero_grad(self):\n pass # Do Nothing\n\n def __repr__(self):\n return 'No Optimizer'\n", "path": "pytorch_lightning/trainer/optimizers.py"}]}
| 3,043 | 317 |
gh_patches_debug_37113
|
rasdani/github-patches
|
git_diff
|
sublimelsp__LSP-472
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LS always starts in first folder of workspace
LSP always starts a language server in the first project of your workspace, regardless of which one you're working on. For example, with the following workspace:

When I open any Rust files in `bserver`, RLS is still started in `LSP`, since it appears first in the list. This causes RLS to throw a warning:

and effectively breaks all useful functionality of the LSP plugin--nothing works, because RLS is staring at the wrong directory.
I'm still digging as to why this is, but it looks like the issue is [an oversight with branching right here](https://github.com/tomv564/LSP/blob/master/plugin/core/workspace.py#L16). I'll submit a PR shortly.
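
For illustration, a hedged sketch of the behaviour I'd expect (plain Python with made-up paths, not the plugin's actual API): pick the workspace folder that actually contains the active file by walking up its parents, instead of always taking `folders()[0]`.

```python
import os

def containing_folder(folders, file_path):
    folders = {os.path.realpath(f) for f in folders}
    current = os.path.realpath(file_path)
    while current not in folders:
        parent = os.path.dirname(current)
        if parent == current:          # hit the filesystem root: file is outside the workspace
            return None
        current = parent
    return current

print(containing_folder(["/work/LSP", "/work/bserver"], "/work/bserver/src/main.rs"))
# -> /work/bserver (not /work/LSP, the first folder in the workspace)
```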
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/core/workspace.py`
Content:
```
1 import os
2 try:
3 from typing import List, Optional, Any
4 assert List and Optional and Any
5 except ImportError:
6 pass
7
8 from .logging import debug
9 # from .types import WindowLike
10
11
12 def get_project_path(window: 'Any') -> 'Optional[str]':
13 """
14 Returns the first project folder or the parent folder of the active view
15 """
16 if len(window.folders()):
17 folder_paths = window.folders()
18 return folder_paths[0]
19 else:
20 view = window.active_view()
21 if view:
22 filename = view.file_name()
23 if filename:
24 project_path = os.path.dirname(filename)
25 debug("Couldn't determine project directory since no folders are open!",
26 "Using", project_path, "as a fallback.")
27 return project_path
28 else:
29 debug("Couldn't determine project directory since no folders are open",
30 "and the current file isn't saved on the disk.")
31 return None
32 else:
33 debug("No view is active in current window")
34 return None # https://github.com/tomv564/LSP/issues/219
35
36
37 def get_common_parent(paths: 'List[str]') -> str:
38 """
39 Get the common parent directory of multiple paths.
40
41 Python 3.5+ includes os.path.commonpath which does this, however Sublime
42 currently embeds Python 3.3.
43 """
44 return os.path.commonprefix([path + '/' for path in paths]).rstrip('/')
45
46
47 def is_in_workspace(window: 'Any', file_path: str) -> bool:
48 workspace_path = get_project_path(window)
49 if workspace_path is None:
50 return False
51
52 common_dir = get_common_parent([workspace_path, file_path])
53 return workspace_path == common_dir
54
55
56 def enable_in_project(window, config_name: str) -> None:
57 project_data = window.project_data()
58 if isinstance(project_data, dict):
59 project_settings = project_data.setdefault('settings', dict())
60 project_lsp_settings = project_settings.setdefault('LSP', dict())
61 project_client_settings = project_lsp_settings.setdefault(config_name, dict())
62 project_client_settings['enabled'] = True
63 window.set_project_data(project_data)
64 else:
65 debug('non-dict returned in project_settings: ', project_data)
66
67
68 def disable_in_project(window, config_name: str) -> None:
69 project_data = window.project_data()
70 if isinstance(project_data, dict):
71 project_settings = project_data.setdefault('settings', dict())
72 project_lsp_settings = project_settings.setdefault('LSP', dict())
73 project_client_settings = project_lsp_settings.setdefault(config_name, dict())
74 project_client_settings['enabled'] = False
75 window.set_project_data(project_data)
76 else:
77 debug('non-dict returned in project_settings: ', project_data)
78
79
80 def get_project_config(window: 'Any') -> dict:
81 project_data = window.project_data() or dict()
82 if isinstance(project_data, dict):
83 project_settings = project_data.setdefault('settings', dict())
84 project_lsp_settings = project_settings.setdefault('LSP', dict())
85 return project_lsp_settings
86 else:
87 debug('non-dict returned in project_settings: ', project_data)
88 return dict()
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugin/core/workspace.py b/plugin/core/workspace.py
--- a/plugin/core/workspace.py
+++ b/plugin/core/workspace.py
@@ -1,37 +1,69 @@
import os
try:
- from typing import List, Optional, Any
- assert List and Optional and Any
+ from typing import List, Optional, Any, Iterable
+ assert List and Optional and Any and Iterable
except ImportError:
pass
from .logging import debug
-# from .types import WindowLike
+from .types import ViewLike
+
+
+def get_filename_from_view(view: ViewLike) -> 'Optional[str]':
+ if not view:
+ debug("No view is active in current window")
+ return None # https://github.com/tomv564/LSP/issues/219
+ filename = view.file_name()
+ if not filename:
+ debug("Couldn't determine project directory since no folders are open",
+ "and the current file isn't saved on the disk.")
+ return filename
+
+
+def get_directory_name(view: ViewLike) -> 'Optional[str]':
+ filename = get_filename_from_view(view)
+ if filename:
+ project_path = os.path.dirname(filename)
+ return project_path
+ return None
+
+
+def find_path_among_multi_folders(folders: 'Iterable[str]',
+ view: ViewLike) -> 'Optional[str]':
+ filename = get_filename_from_view(view)
+ if not filename:
+ return None
+ folders = [os.path.realpath(f) for f in folders]
+ file = view.file_name()
+ if not file:
+ return None
+ file = os.path.realpath(file)
+ while file not in folders:
+ file = os.path.dirname(file)
+ if os.path.dirname(file) == file:
+ # We're at the root of the filesystem.
+ file = None
+ break
+ debug('project path is', file)
+ return file
def get_project_path(window: 'Any') -> 'Optional[str]':
"""
- Returns the first project folder or the parent folder of the active view
+ Returns the project folder or the parent folder of the active view
"""
- if len(window.folders()):
+ if not window:
+ return None
+ num_folders = len(window.folders())
+ if num_folders == 0:
+ return get_directory_name(window.active_view())
+ elif num_folders == 1:
folder_paths = window.folders()
return folder_paths[0]
- else:
- view = window.active_view()
- if view:
- filename = view.file_name()
- if filename:
- project_path = os.path.dirname(filename)
- debug("Couldn't determine project directory since no folders are open!",
- "Using", project_path, "as a fallback.")
- return project_path
- else:
- debug("Couldn't determine project directory since no folders are open",
- "and the current file isn't saved on the disk.")
- return None
- else:
- debug("No view is active in current window")
- return None # https://github.com/tomv564/LSP/issues/219
+ else: # num_folders > 1
+ return find_path_among_multi_folders(
+ window.folders(),
+ window.active_view())
def get_common_parent(paths: 'List[str]') -> str:
|
{"golden_diff": "diff --git a/plugin/core/workspace.py b/plugin/core/workspace.py\n--- a/plugin/core/workspace.py\n+++ b/plugin/core/workspace.py\n@@ -1,37 +1,69 @@\n import os\n try:\n- from typing import List, Optional, Any\n- assert List and Optional and Any\n+ from typing import List, Optional, Any, Iterable\n+ assert List and Optional and Any and Iterable\n except ImportError:\n pass\n \n from .logging import debug\n-# from .types import WindowLike\n+from .types import ViewLike\n+\n+\n+def get_filename_from_view(view: ViewLike) -> 'Optional[str]':\n+ if not view:\n+ debug(\"No view is active in current window\")\n+ return None # https://github.com/tomv564/LSP/issues/219\n+ filename = view.file_name()\n+ if not filename:\n+ debug(\"Couldn't determine project directory since no folders are open\",\n+ \"and the current file isn't saved on the disk.\")\n+ return filename\n+\n+\n+def get_directory_name(view: ViewLike) -> 'Optional[str]':\n+ filename = get_filename_from_view(view)\n+ if filename:\n+ project_path = os.path.dirname(filename)\n+ return project_path\n+ return None\n+\n+\n+def find_path_among_multi_folders(folders: 'Iterable[str]',\n+ view: ViewLike) -> 'Optional[str]':\n+ filename = get_filename_from_view(view)\n+ if not filename:\n+ return None\n+ folders = [os.path.realpath(f) for f in folders]\n+ file = view.file_name()\n+ if not file:\n+ return None\n+ file = os.path.realpath(file)\n+ while file not in folders:\n+ file = os.path.dirname(file)\n+ if os.path.dirname(file) == file:\n+ # We're at the root of the filesystem.\n+ file = None\n+ break\n+ debug('project path is', file)\n+ return file\n \n \n def get_project_path(window: 'Any') -> 'Optional[str]':\n \"\"\"\n- Returns the first project folder or the parent folder of the active view\n+ Returns the project folder or the parent folder of the active view\n \"\"\"\n- if len(window.folders()):\n+ if not window:\n+ return None\n+ num_folders = len(window.folders())\n+ if num_folders == 0:\n+ return get_directory_name(window.active_view())\n+ elif num_folders == 1:\n folder_paths = window.folders()\n return folder_paths[0]\n- else:\n- view = window.active_view()\n- if view:\n- filename = view.file_name()\n- if filename:\n- project_path = os.path.dirname(filename)\n- debug(\"Couldn't determine project directory since no folders are open!\",\n- \"Using\", project_path, \"as a fallback.\")\n- return project_path\n- else:\n- debug(\"Couldn't determine project directory since no folders are open\",\n- \"and the current file isn't saved on the disk.\")\n- return None\n- else:\n- debug(\"No view is active in current window\")\n- return None # https://github.com/tomv564/LSP/issues/219\n+ else: # num_folders > 1\n+ return find_path_among_multi_folders(\n+ window.folders(),\n+ window.active_view())\n \n \n def get_common_parent(paths: 'List[str]') -> str:\n", "issue": "LS always starts in first folder of workspace\nLSP always starts a language server in the first project of your workspace, regardless of which one you're working on. For example, with the following workspace:\r\n\r\n\r\n\r\nWhen I open any Rust files in `bserver`, RLS is still started in `LSP`, since it appears first in the list. 
This causes RLS to throw a warning:\r\n\r\n\r\n\r\nand effectively breaks all useful functionality of the LSP plugin--nothing works, because RLS is staring at the wrong directory.\r\n\r\nI'm still digging as to why this is, but it looks like the issue is [an oversight with branching right here](https://github.com/tomv564/LSP/blob/master/plugin/core/workspace.py#L16). I'll submit a PR shortly.\n", "before_files": [{"content": "import os\ntry:\n from typing import List, Optional, Any\n assert List and Optional and Any\nexcept ImportError:\n pass\n\nfrom .logging import debug\n# from .types import WindowLike\n\n\ndef get_project_path(window: 'Any') -> 'Optional[str]':\n \"\"\"\n Returns the first project folder or the parent folder of the active view\n \"\"\"\n if len(window.folders()):\n folder_paths = window.folders()\n return folder_paths[0]\n else:\n view = window.active_view()\n if view:\n filename = view.file_name()\n if filename:\n project_path = os.path.dirname(filename)\n debug(\"Couldn't determine project directory since no folders are open!\",\n \"Using\", project_path, \"as a fallback.\")\n return project_path\n else:\n debug(\"Couldn't determine project directory since no folders are open\",\n \"and the current file isn't saved on the disk.\")\n return None\n else:\n debug(\"No view is active in current window\")\n return None # https://github.com/tomv564/LSP/issues/219\n\n\ndef get_common_parent(paths: 'List[str]') -> str:\n \"\"\"\n Get the common parent directory of multiple paths.\n\n Python 3.5+ includes os.path.commonpath which does this, however Sublime\n currently embeds Python 3.3.\n \"\"\"\n return os.path.commonprefix([path + '/' for path in paths]).rstrip('/')\n\n\ndef is_in_workspace(window: 'Any', file_path: str) -> bool:\n workspace_path = get_project_path(window)\n if workspace_path is None:\n return False\n\n common_dir = get_common_parent([workspace_path, file_path])\n return workspace_path == common_dir\n\n\ndef enable_in_project(window, config_name: str) -> None:\n project_data = window.project_data()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n project_client_settings = project_lsp_settings.setdefault(config_name, dict())\n project_client_settings['enabled'] = True\n window.set_project_data(project_data)\n else:\n debug('non-dict returned in project_settings: ', project_data)\n\n\ndef disable_in_project(window, config_name: str) -> None:\n project_data = window.project_data()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n project_client_settings = project_lsp_settings.setdefault(config_name, dict())\n project_client_settings['enabled'] = False\n window.set_project_data(project_data)\n else:\n debug('non-dict returned in project_settings: ', project_data)\n\n\ndef get_project_config(window: 'Any') -> dict:\n project_data = window.project_data() or dict()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n return project_lsp_settings\n else:\n debug('non-dict returned in project_settings: ', project_data)\n return dict()\n", "path": "plugin/core/workspace.py"}], "after_files": [{"content": "import os\ntry:\n from typing import List, Optional, Any, Iterable\n assert List and Optional and Any and 
Iterable\nexcept ImportError:\n pass\n\nfrom .logging import debug\nfrom .types import ViewLike\n\n\ndef get_filename_from_view(view: ViewLike) -> 'Optional[str]':\n if not view:\n debug(\"No view is active in current window\")\n return None # https://github.com/tomv564/LSP/issues/219\n filename = view.file_name()\n if not filename:\n debug(\"Couldn't determine project directory since no folders are open\",\n \"and the current file isn't saved on the disk.\")\n return filename\n\n\ndef get_directory_name(view: ViewLike) -> 'Optional[str]':\n filename = get_filename_from_view(view)\n if filename:\n project_path = os.path.dirname(filename)\n return project_path\n return None\n\n\ndef find_path_among_multi_folders(folders: 'Iterable[str]',\n view: ViewLike) -> 'Optional[str]':\n filename = get_filename_from_view(view)\n if not filename:\n return None\n folders = [os.path.realpath(f) for f in folders]\n file = view.file_name()\n if not file:\n return None\n file = os.path.realpath(file)\n while file not in folders:\n file = os.path.dirname(file)\n if os.path.dirname(file) == file:\n # We're at the root of the filesystem.\n file = None\n break\n debug('project path is', file)\n return file\n\n\ndef get_project_path(window: 'Any') -> 'Optional[str]':\n \"\"\"\n Returns the project folder or the parent folder of the active view\n \"\"\"\n if not window:\n return None\n num_folders = len(window.folders())\n if num_folders == 0:\n return get_directory_name(window.active_view())\n elif num_folders == 1:\n folder_paths = window.folders()\n return folder_paths[0]\n else: # num_folders > 1\n return find_path_among_multi_folders(\n window.folders(),\n window.active_view())\n\n\ndef get_common_parent(paths: 'List[str]') -> str:\n \"\"\"\n Get the common parent directory of multiple paths.\n\n Python 3.5+ includes os.path.commonpath which does this, however Sublime\n currently embeds Python 3.3.\n \"\"\"\n return os.path.commonprefix([path + '/' for path in paths]).rstrip('/')\n\n\ndef is_in_workspace(window: 'Any', file_path: str) -> bool:\n workspace_path = get_project_path(window)\n if workspace_path is None:\n return False\n\n common_dir = get_common_parent([workspace_path, file_path])\n return workspace_path == common_dir\n\n\ndef enable_in_project(window, config_name: str) -> None:\n project_data = window.project_data()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n project_client_settings = project_lsp_settings.setdefault(config_name, dict())\n project_client_settings['enabled'] = True\n window.set_project_data(project_data)\n else:\n debug('non-dict returned in project_settings: ', project_data)\n\n\ndef disable_in_project(window, config_name: str) -> None:\n project_data = window.project_data()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n project_client_settings = project_lsp_settings.setdefault(config_name, dict())\n project_client_settings['enabled'] = False\n window.set_project_data(project_data)\n else:\n debug('non-dict returned in project_settings: ', project_data)\n\n\ndef get_project_config(window: 'Any') -> dict:\n project_data = window.project_data() or dict()\n if isinstance(project_data, dict):\n project_settings = project_data.setdefault('settings', dict())\n project_lsp_settings = project_settings.setdefault('LSP', dict())\n return 
project_lsp_settings\n else:\n debug('non-dict returned in project_settings: ', project_data)\n return dict()\n", "path": "plugin/core/workspace.py"}]}
| 1,430 | 764 |
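The workspace record above replaces a first-folder assumption with a walk-up search through all open folders. As a rough standalone sketch of that search (hypothetical function name, plain `os.path` only, not the plugin's actual entry point), it boils down to:

```python
import os
from typing import Iterable, Optional


def containing_folder(folders: Iterable[str], file_path: str) -> Optional[str]:
    """Walk up from the file until we land on one of the workspace folders."""
    roots = {os.path.realpath(f) for f in folders}
    current = os.path.realpath(file_path)
    while current not in roots:
        parent = os.path.dirname(current)
        if parent == current:  # reached the filesystem root without a match
            return None
        current = parent
    return current
```

That matches the record's intent: with several folders open, the server is rooted in whichever folder actually contains the active file instead of always `folders()[0]`.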
gh_patches_debug_27524
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-1202
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
iminuit v1.5.0 breaks optimization tests
# Description
With the release of [`iminuit` `v1.5.0`](https://github.com/scikit-hep/iminuit/releases/tag/v1.5.0) on 2020-09-17 the nightly tests are failing in `test_optim.py`. Specifically
https://github.com/scikit-hep/pyhf/blob/8a6ee36da4f566d8a37df01e20201098aa1f8a54/tests/test_optim.py#L47
is failing with errors of
```pytb
try:
assert result.success
except AssertionError:
log.error(result)
> raise exceptions.FailedMinimization(result)
E pyhf.exceptions.FailedMinimization: Optimization failed. Estimated distance to minimum too large.
src/pyhf/optimize/mixins.py:52: FailedMinimization
------------------------------ Captured log call -------------------------------
ERROR pyhf.optimize.mixins:mixins.py:51 fun: 15.5887451171875
hess_inv: array([[1., 1.],
[1., 1.]])
message: 'Optimization failed. Estimated distance to minimum too large.'
minuit: <iminuit._libiminuit.Minuit object at 0x5619c82f90a0>
nfev: 110
njev: 0
success: False
unc: None
x: array([0.97325551, 0.91712703])
```
where the `pyhf.exceptions.FailedMinimization` being raised comes from the `raise exceptions.FailedMinimization(result)` in
https://github.com/scikit-hep/pyhf/blob/8a6ee36da4f566d8a37df01e20201098aa1f8a54/src/pyhf/optimize/mixins.py#L31-L53
which are of course coming from
https://github.com/scikit-hep/pyhf/blob/8a6ee36da4f566d8a37df01e20201098aa1f8a54/src/pyhf/optimize/opt_minuit.py#L122-L132
in
https://github.com/scikit-hep/pyhf/blob/8a6ee36da4f566d8a37df01e20201098aa1f8a54/src/pyhf/optimize/opt_minuit.py#L69
# Steps to Reproduce
Run the tests using current master.

To show that this is definitely an issue with `iminuit` `v1.5.0+`
```
$ python -m pip install --upgrade "iminuit<1.5.0"
$ pip list | grep iminuit
iminuit 1.4.9
$ python -m pytest -sx tests/test_optim.py
```
passes but
```
$ python -m pip install --upgrade iminuit
$ pip list | grep iminuit
iminuit 1.5.1
$ python -m pytest -sx tests/test_optim.py
```
fails.
# Checklist
- [x] Run `git fetch` to get the most up to date version of `master`
- [x] Searched through existing Issues to confirm this is not a duplicate issue
- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pyhf/optimize/opt_minuit.py`
Content:
```
1 """Minuit Optimizer Class."""
2 from .. import default_backend, exceptions
3 from .mixins import OptimizerMixin
4 import scipy
5 import iminuit
6
7
8 class minuit_optimizer(OptimizerMixin):
9 """
10 Optimizer that uses iminuit.Minuit.migrad.
11 """
12
13 __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance']
14
15 def __init__(self, *args, **kwargs):
16 """
17 Create MINUIT Optimizer.
18
19 .. note::
20
21 ``errordef`` should be 1.0 for a least-squares cost function and 0.5
22 for negative log-likelihood function. See page 37 of
23 http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes
24 called ``UP`` in the ``MINUIT`` docs.
25
26
27 Args:
28 errordef (:obj:`float`): See minuit docs. Default is 1.0.
29 steps (:obj:`int`): Number of steps for the bounds. Default is 1000.
30 strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.
31 tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. Default is 0.1.
32 """
33 self.name = 'minuit'
34 self.errordef = kwargs.pop('errordef', 1)
35 self.steps = kwargs.pop('steps', 1000)
36 self.strategy = kwargs.pop('strategy', None)
37 self.tolerance = kwargs.pop('tolerance', 0.1)
38 super().__init__(*args, **kwargs)
39
40 def _get_minimizer(
41 self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False
42 ):
43
44 step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]
45 fixed_vals = fixed_vals or []
46 # Minuit wants True/False for each parameter
47 fixed_bools = [False] * len(init_pars)
48 for index, val in fixed_vals:
49 fixed_bools[index] = True
50 init_pars[index] = val
51 step_sizes[index] = 0.0
52
53 # Minuit requires jac=callable
54 if do_grad:
55 wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731
56 jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731
57 else:
58 wrapped_objective = objective_and_grad
59 jac = None
60
61 kwargs = dict(
62 fcn=wrapped_objective,
63 grad=jac,
64 start=init_pars,
65 error=step_sizes,
66 limit=init_bounds,
67 fix=fixed_bools,
68 print_level=self.verbose,
69 errordef=self.errordef,
70 )
71 return iminuit.Minuit.from_array_func(**kwargs)
72
73 def _minimize(
74 self,
75 minimizer,
76 func,
77 x0,
78 do_grad=False,
79 bounds=None,
80 fixed_vals=None,
81 return_uncertainties=False,
82 options={},
83 ):
84
85 """
86 Same signature as :func:`scipy.optimize.minimize`.
87
88 Note: an additional `minuit` is injected into the fitresult to get the
89 underlying minimizer.
90
91 Minimizer Options:
92 maxiter (:obj:`int`): maximum number of iterations. Default is 100000.
93 return_uncertainties (:obj:`bool`): Return uncertainties on the fitted parameters. Default is off.
94 strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is to configure in response to `do_grad`.
95
96 Returns:
97 fitresult (scipy.optimize.OptimizeResult): the fit result
98 """
99 maxiter = options.pop('maxiter', self.maxiter)
100 return_uncertainties = options.pop('return_uncertainties', False)
101 # 0: Fast, user-provided gradient
102 # 1: Default, no user-provided gradient
103 strategy = options.pop(
104 'strategy', self.strategy if self.strategy else not do_grad
105 )
106 tolerance = options.pop('tolerance', self.tolerance)
107 if options:
108 raise exceptions.Unsupported(
109 f"Unsupported options were passed in: {list(options.keys())}."
110 )
111
112 minimizer.strategy = strategy
113 minimizer.tol = tolerance
114 minimizer.migrad(ncall=maxiter)
115 # Following lines below come from:
116 # https://github.com/scikit-hep/iminuit/blob/22f6ed7146c1d1f3274309656d8c04461dde5ba3/src/iminuit/_minimize.py#L106-L125
117 message = "Optimization terminated successfully."
118 if not minimizer.valid:
119 message = "Optimization failed."
120 fmin = minimizer.fmin
121 if fmin.has_reached_call_limit:
122 message += " Call limit was reached."
123 if fmin.is_above_max_edm:
124 message += " Estimated distance to minimum too large."
125
126 n = len(x0)
127 hess_inv = default_backend.ones((n, n))
128 if minimizer.valid:
129 # Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.
130 minimizer.hesse()
131 hess_inv = minimizer.np_covariance()
132
133 unc = None
134 if return_uncertainties:
135 unc = minimizer.np_errors()
136
137 return scipy.optimize.OptimizeResult(
138 x=minimizer.np_values(),
139 unc=unc,
140 success=minimizer.valid,
141 fun=minimizer.fval,
142 hess_inv=hess_inv,
143 message=message,
144 nfev=minimizer.ncalls,
145 njev=minimizer.ngrads,
146 minuit=minimizer,
147 )
148
```
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major
7 'tensorflow-probability~=0.10.0',
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],
11 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes
12 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted({'matplotlib', 'requests'})
23 extras_require['lint'] = sorted({'flake8', 'black'})
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + extras_require['shellcomplete']
31 + [
32 'pytest~=6.0',
33 'pytest-cov>=2.5.1',
34 'pytest-mock',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'coverage>=4.0', # coveralls
40 'papermill~=2.0',
41 'nteract-scrapbook~=0.2',
42 'jupyter',
43 'graphviz',
44 'jsonpatch',
45 ]
46 )
47 )
48 extras_require['docs'] = sorted(
49 {
50 'sphinx>=3.1.2',
51 'sphinxcontrib-bibtex',
52 'sphinx-click',
53 'sphinx_rtd_theme',
54 'nbsphinx',
55 'ipywidgets',
56 'sphinx-issues',
57 'sphinx-copybutton>0.2.9',
58 }
59 )
60 extras_require['develop'] = sorted(
61 set(
62 extras_require['docs']
63 + extras_require['lint']
64 + extras_require['test']
65 + [
66 'nbdime',
67 'bump2version',
68 'ipython',
69 'pre-commit',
70 'check-manifest',
71 'codemetapy>=0.3.4',
72 'twine',
73 ]
74 )
75 )
76 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
77
78
79 setup(
80 extras_require=extras_require,
81 use_scm_version=lambda: {'local_scheme': lambda version: ''},
82 )
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
'torch': ['torch~=1.2'],
'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],
'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes
- 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch
+ 'minuit': ['iminuit~=1.5.3'],
}
extras_require['backends'] = sorted(
set(
diff --git a/src/pyhf/optimize/opt_minuit.py b/src/pyhf/optimize/opt_minuit.py
--- a/src/pyhf/optimize/opt_minuit.py
+++ b/src/pyhf/optimize/opt_minuit.py
@@ -113,7 +113,7 @@
minimizer.tol = tolerance
minimizer.migrad(ncall=maxiter)
# Following lines below come from:
- # https://github.com/scikit-hep/iminuit/blob/22f6ed7146c1d1f3274309656d8c04461dde5ba3/src/iminuit/_minimize.py#L106-L125
+ # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121
message = "Optimization terminated successfully."
if not minimizer.valid:
message = "Optimization failed."
@@ -141,7 +141,7 @@
fun=minimizer.fval,
hess_inv=hess_inv,
message=message,
- nfev=minimizer.ncalls,
- njev=minimizer.ngrads,
+ nfev=minimizer.ncalls_total,
+ njev=minimizer.ngrads_total,
minuit=minimizer,
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes\n- 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch\n+ 'minuit': ['iminuit~=1.5.3'],\n }\n extras_require['backends'] = sorted(\n set(\ndiff --git a/src/pyhf/optimize/opt_minuit.py b/src/pyhf/optimize/opt_minuit.py\n--- a/src/pyhf/optimize/opt_minuit.py\n+++ b/src/pyhf/optimize/opt_minuit.py\n@@ -113,7 +113,7 @@\n minimizer.tol = tolerance\n minimizer.migrad(ncall=maxiter)\n # Following lines below come from:\n- # https://github.com/scikit-hep/iminuit/blob/22f6ed7146c1d1f3274309656d8c04461dde5ba3/src/iminuit/_minimize.py#L106-L125\n+ # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121\n message = \"Optimization terminated successfully.\"\n if not minimizer.valid:\n message = \"Optimization failed.\"\n@@ -141,7 +141,7 @@\n fun=minimizer.fval,\n hess_inv=hess_inv,\n message=message,\n- nfev=minimizer.ncalls,\n- njev=minimizer.ngrads,\n+ nfev=minimizer.ncalls_total,\n+ njev=minimizer.ngrads_total,\n minuit=minimizer,\n )\n", "issue": "iminuit v1.5.0 breaks optimization tests\n# Description\r\n\r\nWith the release of [`iminuit` `v1.5.0`](https://github.com/scikit-hep/iminuit/releases/tag/v1.5.0) on 2020-09-17 the nightly tests are failing in `test_optim.py`. Specifically\r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/8a6ee36da4f566d8a37df01e20201098aa1f8a54/tests/test_optim.py#L47\r\n\r\nis failing with errors of \r\n\r\n```pytb\r\n try:\r\n assert result.success\r\n except AssertionError:\r\n log.error(result)\r\n> raise exceptions.FailedMinimization(result)\r\nE pyhf.exceptions.FailedMinimization: Optimization failed. Estimated distance to minimum too large.\r\n\r\nsrc/pyhf/optimize/mixins.py:52: FailedMinimization\r\n------------------------------ Captured log call -------------------------------\r\nERROR pyhf.optimize.mixins:mixins.py:51 fun: 15.5887451171875\r\n hess_inv: array([[1., 1.],\r\n [1., 1.]])\r\n message: 'Optimization failed. 
Estimated distance to minimum too large.'\r\n minuit: <iminuit._libiminuit.Minuit object at 0x5619c82f90a0>\r\n nfev: 110\r\n njev: 0\r\n success: False\r\n unc: None\r\n x: array([0.97325551, 0.91712703])\r\n```\r\n\r\nwhere the `pyhf.exceptions.FailedMinimization` being raised comes from the `raise exceptions.FailedMinimization(result)` in\r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/8a6ee36da4f566d8a37df01e20201098aa1f8a54/src/pyhf/optimize/mixins.py#L31-L53\r\n\r\nwhich are of course coming from\r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/8a6ee36da4f566d8a37df01e20201098aa1f8a54/src/pyhf/optimize/opt_minuit.py#L122-L132\r\n\r\nin\r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/8a6ee36da4f566d8a37df01e20201098aa1f8a54/src/pyhf/optimize/opt_minuit.py#L69\r\n\r\n# Steps to Reproduce\r\n\r\nRun the tests using current master.\r\n\r\n\r\n\r\nTo show that this is definitley an issue with `iminuit` `v1.5.0+`\r\n\r\n```\r\n$ python -m pip install --upgrade \"iminuit<1.5.0\"\r\n$ pip list | grep iminuit\r\niminuit 1.4.9\r\n$ python -m pytest -sx tests/test_optim.py\r\n```\r\n\r\npasses but\r\n\r\n```\r\n$ python -m pip install --upgrade iminuit\r\n$ pip list | grep iminuit\r\niminuit 1.5.1\r\n$ python -m pytest -sx tests/test_optim.py\r\n```\r\n\r\nfails.\r\n\r\n# Checklist\r\n\r\n- [x] Run `git fetch` to get the most up to date version of `master`\r\n- [x] Searched through existing Issues to confirm this is not a duplicate issue\r\n- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue\r\n\n", "before_files": [{"content": "\"\"\"Minuit Optimizer Class.\"\"\"\nfrom .. import default_backend, exceptions\nfrom .mixins import OptimizerMixin\nimport scipy\nimport iminuit\n\n\nclass minuit_optimizer(OptimizerMixin):\n \"\"\"\n Optimizer that uses iminuit.Minuit.migrad.\n \"\"\"\n\n __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance']\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Create MINUIT Optimizer.\n\n .. note::\n\n ``errordef`` should be 1.0 for a least-squares cost function and 0.5\n for negative log-likelihood function. See page 37 of\n http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes\n called ``UP`` in the ``MINUIT`` docs.\n\n\n Args:\n errordef (:obj:`float`): See minuit docs. Default is 1.0.\n steps (:obj:`int`): Number of steps for the bounds. Default is 1000.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.\n tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. 
Default is 0.1.\n \"\"\"\n self.name = 'minuit'\n self.errordef = kwargs.pop('errordef', 1)\n self.steps = kwargs.pop('steps', 1000)\n self.strategy = kwargs.pop('strategy', None)\n self.tolerance = kwargs.pop('tolerance', 0.1)\n super().__init__(*args, **kwargs)\n\n def _get_minimizer(\n self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False\n ):\n\n step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]\n fixed_vals = fixed_vals or []\n # Minuit wants True/False for each parameter\n fixed_bools = [False] * len(init_pars)\n for index, val in fixed_vals:\n fixed_bools[index] = True\n init_pars[index] = val\n step_sizes[index] = 0.0\n\n # Minuit requires jac=callable\n if do_grad:\n wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731\n jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731\n else:\n wrapped_objective = objective_and_grad\n jac = None\n\n kwargs = dict(\n fcn=wrapped_objective,\n grad=jac,\n start=init_pars,\n error=step_sizes,\n limit=init_bounds,\n fix=fixed_bools,\n print_level=self.verbose,\n errordef=self.errordef,\n )\n return iminuit.Minuit.from_array_func(**kwargs)\n\n def _minimize(\n self,\n minimizer,\n func,\n x0,\n do_grad=False,\n bounds=None,\n fixed_vals=None,\n return_uncertainties=False,\n options={},\n ):\n\n \"\"\"\n Same signature as :func:`scipy.optimize.minimize`.\n\n Note: an additional `minuit` is injected into the fitresult to get the\n underlying minimizer.\n\n Minimizer Options:\n maxiter (:obj:`int`): maximum number of iterations. Default is 100000.\n return_uncertainties (:obj:`bool`): Return uncertainties on the fitted parameters. Default is off.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is to configure in response to `do_grad`.\n\n Returns:\n fitresult (scipy.optimize.OptimizeResult): the fit result\n \"\"\"\n maxiter = options.pop('maxiter', self.maxiter)\n return_uncertainties = options.pop('return_uncertainties', False)\n # 0: Fast, user-provided gradient\n # 1: Default, no user-provided gradient\n strategy = options.pop(\n 'strategy', self.strategy if self.strategy else not do_grad\n )\n tolerance = options.pop('tolerance', self.tolerance)\n if options:\n raise exceptions.Unsupported(\n f\"Unsupported options were passed in: {list(options.keys())}.\"\n )\n\n minimizer.strategy = strategy\n minimizer.tol = tolerance\n minimizer.migrad(ncall=maxiter)\n # Following lines below come from:\n # https://github.com/scikit-hep/iminuit/blob/22f6ed7146c1d1f3274309656d8c04461dde5ba3/src/iminuit/_minimize.py#L106-L125\n message = \"Optimization terminated successfully.\"\n if not minimizer.valid:\n message = \"Optimization failed.\"\n fmin = minimizer.fmin\n if fmin.has_reached_call_limit:\n message += \" Call limit was reached.\"\n if fmin.is_above_max_edm:\n message += \" Estimated distance to minimum too large.\"\n\n n = len(x0)\n hess_inv = default_backend.ones((n, n))\n if minimizer.valid:\n # Extra call to hesse() after migrad() is always needed for good error estimates. 
If you pass a user-provided gradient to MINUIT, convergence is faster.\n minimizer.hesse()\n hess_inv = minimizer.np_covariance()\n\n unc = None\n if return_uncertainties:\n unc = minimizer.np_errors()\n\n return scipy.optimize.OptimizeResult(\n x=minimizer.np_values(),\n unc=unc,\n success=minimizer.valid,\n fun=minimizer.fval,\n hess_inv=hess_inv,\n message=message,\n nfev=minimizer.ncalls,\n njev=minimizer.ngrads,\n minuit=minimizer,\n )\n", "path": "src/pyhf/optimize/opt_minuit.py"}, {"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n {\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n }\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"Minuit Optimizer Class.\"\"\"\nfrom .. import default_backend, exceptions\nfrom .mixins import OptimizerMixin\nimport scipy\nimport iminuit\n\n\nclass minuit_optimizer(OptimizerMixin):\n \"\"\"\n Optimizer that uses iminuit.Minuit.migrad.\n \"\"\"\n\n __slots__ = ['name', 'errordef', 'steps', 'strategy', 'tolerance']\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Create MINUIT Optimizer.\n\n .. note::\n\n ``errordef`` should be 1.0 for a least-squares cost function and 0.5\n for negative log-likelihood function. See page 37 of\n http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes\n called ``UP`` in the ``MINUIT`` docs.\n\n\n Args:\n errordef (:obj:`float`): See minuit docs. Default is 1.0.\n steps (:obj:`int`): Number of steps for the bounds. Default is 1000.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is None.\n tolerance (:obj:`float`): tolerance for termination. See specific optimizer for detailed meaning. 
Default is 0.1.\n \"\"\"\n self.name = 'minuit'\n self.errordef = kwargs.pop('errordef', 1)\n self.steps = kwargs.pop('steps', 1000)\n self.strategy = kwargs.pop('strategy', None)\n self.tolerance = kwargs.pop('tolerance', 0.1)\n super().__init__(*args, **kwargs)\n\n def _get_minimizer(\n self, objective_and_grad, init_pars, init_bounds, fixed_vals=None, do_grad=False\n ):\n\n step_sizes = [(b[1] - b[0]) / float(self.steps) for b in init_bounds]\n fixed_vals = fixed_vals or []\n # Minuit wants True/False for each parameter\n fixed_bools = [False] * len(init_pars)\n for index, val in fixed_vals:\n fixed_bools[index] = True\n init_pars[index] = val\n step_sizes[index] = 0.0\n\n # Minuit requires jac=callable\n if do_grad:\n wrapped_objective = lambda pars: objective_and_grad(pars)[0] # noqa: E731\n jac = lambda pars: objective_and_grad(pars)[1] # noqa: E731\n else:\n wrapped_objective = objective_and_grad\n jac = None\n\n kwargs = dict(\n fcn=wrapped_objective,\n grad=jac,\n start=init_pars,\n error=step_sizes,\n limit=init_bounds,\n fix=fixed_bools,\n print_level=self.verbose,\n errordef=self.errordef,\n )\n return iminuit.Minuit.from_array_func(**kwargs)\n\n def _minimize(\n self,\n minimizer,\n func,\n x0,\n do_grad=False,\n bounds=None,\n fixed_vals=None,\n return_uncertainties=False,\n options={},\n ):\n\n \"\"\"\n Same signature as :func:`scipy.optimize.minimize`.\n\n Note: an additional `minuit` is injected into the fitresult to get the\n underlying minimizer.\n\n Minimizer Options:\n maxiter (:obj:`int`): maximum number of iterations. Default is 100000.\n return_uncertainties (:obj:`bool`): Return uncertainties on the fitted parameters. Default is off.\n strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`. Default is to configure in response to `do_grad`.\n\n Returns:\n fitresult (scipy.optimize.OptimizeResult): the fit result\n \"\"\"\n maxiter = options.pop('maxiter', self.maxiter)\n return_uncertainties = options.pop('return_uncertainties', False)\n # 0: Fast, user-provided gradient\n # 1: Default, no user-provided gradient\n strategy = options.pop(\n 'strategy', self.strategy if self.strategy else not do_grad\n )\n tolerance = options.pop('tolerance', self.tolerance)\n if options:\n raise exceptions.Unsupported(\n f\"Unsupported options were passed in: {list(options.keys())}.\"\n )\n\n minimizer.strategy = strategy\n minimizer.tol = tolerance\n minimizer.migrad(ncall=maxiter)\n # Following lines below come from:\n # https://github.com/scikit-hep/iminuit/blob/64acac11cfa2fb91ccbd02d1b3c51f8a9e2cc484/src/iminuit/_minimize.py#L102-L121\n message = \"Optimization terminated successfully.\"\n if not minimizer.valid:\n message = \"Optimization failed.\"\n fmin = minimizer.fmin\n if fmin.has_reached_call_limit:\n message += \" Call limit was reached.\"\n if fmin.is_above_max_edm:\n message += \" Estimated distance to minimum too large.\"\n\n n = len(x0)\n hess_inv = default_backend.ones((n, n))\n if minimizer.valid:\n # Extra call to hesse() after migrad() is always needed for good error estimates. 
If you pass a user-provided gradient to MINUIT, convergence is faster.\n minimizer.hesse()\n hess_inv = minimizer.np_covariance()\n\n unc = None\n if return_uncertainties:\n unc = minimizer.np_errors()\n\n return scipy.optimize.OptimizeResult(\n x=minimizer.np_values(),\n unc=unc,\n success=minimizer.valid,\n fun=minimizer.fval,\n hess_inv=hess_inv,\n message=message,\n nfev=minimizer.ncalls_total,\n njev=minimizer.ngrads_total,\n minuit=minimizer,\n )\n", "path": "src/pyhf/optimize/opt_minuit.py"}, {"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': ['uproot3~=3.14'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.5.3'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n {\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n }\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
| 3,560 | 493 |
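The pyhf patch above pins `iminuit~=1.5.3` and switches the fit result to the renamed `ncalls_total`/`ngrads_total` counters. A version-tolerant way to read whichever attribute is present, shown here only as a sketch built from the two attribute pairs visible in the diff, would be:

```python
def call_counts(minimizer):
    # Prefer the post-1.5 *_total counters, fall back to the older names.
    nfev = getattr(minimizer, "ncalls_total", None)
    if nfev is None:
        nfev = getattr(minimizer, "ncalls", None)
    njev = getattr(minimizer, "ngrads_total", None)
    if njev is None:
        njev = getattr(minimizer, "ngrads", None)
    return nfev, njev
```

The shipped fix does not need the fallback because the requirement pin guarantees the newer attributes, which is the simpler design choice.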
gh_patches_debug_13955
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-3337
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The fetch vat rates button should not be a GET method
### What I'm trying to achieve
Not to allow GET methods to fetch vat rates.
### Steps to reproduce the problem
1. Go to configuration -> Taxes ;
2. The fetch tax rates button is a GET button.
### What I expected to happen
Get a POST instead of a GET, which is safer against attacks.
### Describe a proposed solution
Drop the button link on the dashboard for a submit button or a modal.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/dashboard/taxes/views.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.contrib import messages
5 from django.contrib.auth.decorators import permission_required
6 from django.core.exceptions import ImproperlyConfigured
7 from django.core.management import call_command
8 from django.shortcuts import get_object_or_404, redirect
9 from django.template.response import TemplateResponse
10 from django.utils.translation import pgettext_lazy
11 from django_countries.fields import Country
12 from django_prices_vatlayer.models import VAT
13
14 from ...core import TaxRateType
15 from ...core.utils import get_paginator_items
16 from ...core.utils.taxes import get_taxes_for_country
17 from ...dashboard.taxes.filters import TaxFilter
18 from ...dashboard.taxes.forms import TaxesConfigurationForm
19 from ...dashboard.views import staff_member_required
20
21 logger = logging.getLogger(__name__)
22
23
24 @staff_member_required
25 def tax_list(request):
26 taxes = VAT.objects.order_by('country_code')
27 tax_filter = TaxFilter(request.GET, queryset=taxes)
28 taxes = get_paginator_items(
29 tax_filter.qs, settings.DASHBOARD_PAGINATE_BY, request.GET.get('page'))
30 ctx = {
31 'taxes': taxes, 'filter_set': tax_filter,
32 'is_empty': not tax_filter.queryset.exists()}
33 return TemplateResponse(request, 'dashboard/taxes/list.html', ctx)
34
35
36 @staff_member_required
37 def tax_details(request, country_code):
38 tax = get_object_or_404(VAT, country_code=country_code)
39 tax_rates = get_taxes_for_country(Country(country_code))
40 translations = dict(TaxRateType.CHOICES)
41 tax_rates = [
42 (translations.get(rate_name, rate_name), tax['value'])
43 for rate_name, tax in tax_rates.items()]
44 ctx = {'tax': tax, 'tax_rates': sorted(tax_rates)}
45 return TemplateResponse(request, 'dashboard/taxes/details.html', ctx)
46
47
48 @staff_member_required
49 @permission_required('site.manage_settings')
50 def configure_taxes(request):
51 site_settings = request.site.settings
52 taxes_form = TaxesConfigurationForm(
53 request.POST or None, instance=site_settings)
54 if taxes_form.is_valid():
55 taxes_form.save()
56 msg = pgettext_lazy('Dashboard message', 'Updated taxes settings')
57 messages.success(request, msg)
58 return redirect('dashboard:taxes')
59 ctx = {'site': site_settings, 'taxes_form': taxes_form}
60 return TemplateResponse(request, 'dashboard/taxes/form.html', ctx)
61
62
63 @staff_member_required
64 @permission_required('site.manage_settings')
65 def fetch_tax_rates(request):
66 try:
67 call_command('get_vat_rates')
68 msg = pgettext_lazy(
69 'Dashboard message', 'Tax rates updated successfully')
70 messages.success(request, msg)
71 except ImproperlyConfigured as exc:
72 logger.exception(exc)
73 msg = pgettext_lazy(
74 'Dashboard message',
75 'Could not fetch tax rates. '
76 'Make sure you have supplied a valid API Access Key.<br/>'
77 'Check the server logs for more information about this error.')
78 messages.warning(request, msg)
79 return redirect('dashboard:taxes')
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/saleor/dashboard/taxes/views.py b/saleor/dashboard/taxes/views.py
--- a/saleor/dashboard/taxes/views.py
+++ b/saleor/dashboard/taxes/views.py
@@ -8,6 +8,7 @@
from django.shortcuts import get_object_or_404, redirect
from django.template.response import TemplateResponse
from django.utils.translation import pgettext_lazy
+from django.views.decorators.http import require_POST
from django_countries.fields import Country
from django_prices_vatlayer.models import VAT
@@ -61,6 +62,7 @@
@staff_member_required
+@require_POST
@permission_required('site.manage_settings')
def fetch_tax_rates(request):
try:
|
{"golden_diff": "diff --git a/saleor/dashboard/taxes/views.py b/saleor/dashboard/taxes/views.py\n--- a/saleor/dashboard/taxes/views.py\n+++ b/saleor/dashboard/taxes/views.py\n@@ -8,6 +8,7 @@\n from django.shortcuts import get_object_or_404, redirect\n from django.template.response import TemplateResponse\n from django.utils.translation import pgettext_lazy\n+from django.views.decorators.http import require_POST\n from django_countries.fields import Country\n from django_prices_vatlayer.models import VAT\n \n@@ -61,6 +62,7 @@\n \n \n @staff_member_required\n+@require_POST\n @permission_required('site.manage_settings')\n def fetch_tax_rates(request):\n try:\n", "issue": "The fetch vat rates button should not be a GET method\n### What I'm trying to achieve\r\nNot to allow GET methods to fetch vat rates.\r\n\r\n### Steps to reproduce the problem\r\n1. Go to configuration -> Taxes ;\r\n2. The fetch tax rates button, is a GET button.\r\n\r\n### What I expected to happen\r\nGet a POST instead of a GET, which is safer against attacks.\r\n\r\n### Describe a proposed solution\r\nDrop the button link on the dashboard for a submit button or a modal.\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import permission_required\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.core.management import call_command\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import pgettext_lazy\nfrom django_countries.fields import Country\nfrom django_prices_vatlayer.models import VAT\n\nfrom ...core import TaxRateType\nfrom ...core.utils import get_paginator_items\nfrom ...core.utils.taxes import get_taxes_for_country\nfrom ...dashboard.taxes.filters import TaxFilter\nfrom ...dashboard.taxes.forms import TaxesConfigurationForm\nfrom ...dashboard.views import staff_member_required\n\nlogger = logging.getLogger(__name__)\n\n\n@staff_member_required\ndef tax_list(request):\n taxes = VAT.objects.order_by('country_code')\n tax_filter = TaxFilter(request.GET, queryset=taxes)\n taxes = get_paginator_items(\n tax_filter.qs, settings.DASHBOARD_PAGINATE_BY, request.GET.get('page'))\n ctx = {\n 'taxes': taxes, 'filter_set': tax_filter,\n 'is_empty': not tax_filter.queryset.exists()}\n return TemplateResponse(request, 'dashboard/taxes/list.html', ctx)\n\n\n@staff_member_required\ndef tax_details(request, country_code):\n tax = get_object_or_404(VAT, country_code=country_code)\n tax_rates = get_taxes_for_country(Country(country_code))\n translations = dict(TaxRateType.CHOICES)\n tax_rates = [\n (translations.get(rate_name, rate_name), tax['value'])\n for rate_name, tax in tax_rates.items()]\n ctx = {'tax': tax, 'tax_rates': sorted(tax_rates)}\n return TemplateResponse(request, 'dashboard/taxes/details.html', ctx)\n\n\n@staff_member_required\n@permission_required('site.manage_settings')\ndef configure_taxes(request):\n site_settings = request.site.settings\n taxes_form = TaxesConfigurationForm(\n request.POST or None, instance=site_settings)\n if taxes_form.is_valid():\n taxes_form.save()\n msg = pgettext_lazy('Dashboard message', 'Updated taxes settings')\n messages.success(request, msg)\n return redirect('dashboard:taxes')\n ctx = {'site': site_settings, 'taxes_form': taxes_form}\n return TemplateResponse(request, 'dashboard/taxes/form.html', 
ctx)\n\n\n@staff_member_required\n@permission_required('site.manage_settings')\ndef fetch_tax_rates(request):\n try:\n call_command('get_vat_rates')\n msg = pgettext_lazy(\n 'Dashboard message', 'Tax rates updated successfully')\n messages.success(request, msg)\n except ImproperlyConfigured as exc:\n logger.exception(exc)\n msg = pgettext_lazy(\n 'Dashboard message',\n 'Could not fetch tax rates. '\n 'Make sure you have supplied a valid API Access Key.<br/>'\n 'Check the server logs for more information about this error.')\n messages.warning(request, msg)\n return redirect('dashboard:taxes')\n", "path": "saleor/dashboard/taxes/views.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import permission_required\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.core.management import call_command\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import pgettext_lazy\nfrom django.views.decorators.http import require_POST\nfrom django_countries.fields import Country\nfrom django_prices_vatlayer.models import VAT\n\nfrom ...core import TaxRateType\nfrom ...core.utils import get_paginator_items\nfrom ...core.utils.taxes import get_taxes_for_country\nfrom ...dashboard.taxes.filters import TaxFilter\nfrom ...dashboard.taxes.forms import TaxesConfigurationForm\nfrom ...dashboard.views import staff_member_required\n\nlogger = logging.getLogger(__name__)\n\n\n@staff_member_required\ndef tax_list(request):\n taxes = VAT.objects.order_by('country_code')\n tax_filter = TaxFilter(request.GET, queryset=taxes)\n taxes = get_paginator_items(\n tax_filter.qs, settings.DASHBOARD_PAGINATE_BY, request.GET.get('page'))\n ctx = {\n 'taxes': taxes, 'filter_set': tax_filter,\n 'is_empty': not tax_filter.queryset.exists()}\n return TemplateResponse(request, 'dashboard/taxes/list.html', ctx)\n\n\n@staff_member_required\ndef tax_details(request, country_code):\n tax = get_object_or_404(VAT, country_code=country_code)\n tax_rates = get_taxes_for_country(Country(country_code))\n translations = dict(TaxRateType.CHOICES)\n tax_rates = [\n (translations.get(rate_name, rate_name), tax['value'])\n for rate_name, tax in tax_rates.items()]\n ctx = {'tax': tax, 'tax_rates': sorted(tax_rates)}\n return TemplateResponse(request, 'dashboard/taxes/details.html', ctx)\n\n\n@staff_member_required\n@permission_required('site.manage_settings')\ndef configure_taxes(request):\n site_settings = request.site.settings\n taxes_form = TaxesConfigurationForm(\n request.POST or None, instance=site_settings)\n if taxes_form.is_valid():\n taxes_form.save()\n msg = pgettext_lazy('Dashboard message', 'Updated taxes settings')\n messages.success(request, msg)\n return redirect('dashboard:taxes')\n ctx = {'site': site_settings, 'taxes_form': taxes_form}\n return TemplateResponse(request, 'dashboard/taxes/form.html', ctx)\n\n\n@staff_member_required\n@require_POST\n@permission_required('site.manage_settings')\ndef fetch_tax_rates(request):\n try:\n call_command('get_vat_rates')\n msg = pgettext_lazy(\n 'Dashboard message', 'Tax rates updated successfully')\n messages.success(request, msg)\n except ImproperlyConfigured as exc:\n logger.exception(exc)\n msg = pgettext_lazy(\n 'Dashboard message',\n 'Could not fetch tax rates. 
'\n 'Make sure you have supplied a valid API Access Key.<br/>'\n 'Check the server logs for more information about this error.')\n messages.warning(request, msg)\n return redirect('dashboard:taxes')\n", "path": "saleor/dashboard/taxes/views.py"}]}
| 1,171 | 152 |
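The saleor fix above is Django's stock `require_POST` decorator on the fetch view. In isolation the decorator works like this (minimal sketch; `refresh_rates` is a made-up view name, while the management command and redirect target are the ones used in the record):

```python
from django.core.management import call_command
from django.shortcuts import redirect
from django.views.decorators.http import require_POST


@require_POST
def refresh_rates(request):
    # GET/HEAD never reach the body; Django answers them with
    # 405 Method Not Allowed before the view code runs.
    call_command("get_vat_rates")
    return redirect("dashboard:taxes")
```

The dashboard template then has to submit the button through a small POST form (with the CSRF token) instead of a plain link, which is exactly what the issue asks for.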
gh_patches_debug_8446
|
rasdani/github-patches
|
git_diff
|
ytdl-org__youtube-dl-18583
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Extractor for yourporn.sexy is broken
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.12.03*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.12.03**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
```
$ youtube-dl -v https://yourporn.sexy/post/5bf56573616c2.html
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'https://yourporn.sexy/post/5bf56573616c2.html']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.12.03
[debug] Python version 2.7.10 (CPython) - Darwin-17.7.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 4.0.2, ffprobe 4.0.2
[debug] Proxy map: {}
[YourPorn] 5bf56573616c2: Downloading webpage
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on u'https://yourporn.sexy/cdn/c11/ldjJi9usRy26gVwhgzEn9w/1544086469/hk5sajembx0dd41hcp09ah8m3s2/25qb3fr5d605l7m316y1969c42k.mp4'
ERROR: Did not get any data blocks
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/Users/v-delta/.local/bin/youtube-dl/__main__.py", line 19, in <module>
youtube_dl.main()
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py", line 472, in main
_real_main(argv)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py", line 462, in _real_main
retcode = ydl.download(all_urls)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2001, in download
url, force_generic_extractor=self.params.get('force_generic_extractor', False))
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 803, in extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 857, in process_ie_result
return self.process_video_result(ie_result, download=download)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1635, in process_video_result
self.process_info(new_info)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1908, in process_info
success = dl(filename, info_dict)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1847, in dl
return fd.download(name, info)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py", line 364, in download
return self.real_download(filename, info_dict)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py", line 342, in real_download
return download()
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py", line 312, in download
self.report_error('Did not get any data blocks')
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py", line 165, in report_error
self.ydl.report_error(*args, **kargs)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 620, in report_error
self.trouble(error_message, tb)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 582, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
```
### Description of your *issue*, suggested solution and other information
The videos play fine in any browser; that is because somewhere the URL the extractor delivers is changed from
```
https://yourporn.sexy/cdn/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4
```
to
```
https://yourporn.sexy/cdn2/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4
```
A `2` is inserted after `/cdn`. I will create a pull request fixing this bug soon.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/yourporn.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from .common import InfoExtractor
4 from ..utils import urljoin
5
6
7 class YourPornIE(InfoExtractor):
8 _VALID_URL = r'https?://(?:www\.)?yourporn\.sexy/post/(?P<id>[^/?#&.]+)'
9 _TEST = {
10 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',
11 'md5': '6f8682b6464033d87acaa7a8ff0c092e',
12 'info_dict': {
13 'id': '57ffcb2e1179b',
14 'ext': 'mp4',
15 'title': 'md5:c9f43630bd968267672651ba905a7d35',
16 'thumbnail': r're:^https?://.*\.jpg$',
17 },
18 }
19
20 def _real_extract(self, url):
21 video_id = self._match_id(url)
22
23 webpage = self._download_webpage(url, video_id)
24
25 video_url = urljoin(url, self._parse_json(
26 self._search_regex(
27 r'data-vnfo=(["\'])(?P<data>{.+?})\1', webpage, 'data info',
28 group='data'),
29 video_id)[video_id]).replace('/cdn/', '/cdn2/')
30
31 title = (self._search_regex(
32 r'<[^>]+\bclass=["\']PostEditTA[^>]+>([^<]+)', webpage, 'title',
33 default=None) or self._og_search_description(webpage)).strip()
34 thumbnail = self._og_search_thumbnail(webpage)
35
36 return {
37 'id': video_id,
38 'url': video_url,
39 'title': title,
40 'thumbnail': thumbnail,
41 }
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py
--- a/youtube_dl/extractor/yourporn.py
+++ b/youtube_dl/extractor/yourporn.py
@@ -26,7 +26,7 @@
self._search_regex(
r'data-vnfo=(["\'])(?P<data>{.+?})\1', webpage, 'data info',
group='data'),
- video_id)[video_id]).replace('/cdn/', '/cdn2/')
+ video_id)[video_id]).replace('/cdn/', '/cdn3/')
title = (self._search_regex(
r'<[^>]+\bclass=["\']PostEditTA[^>]+>([^<]+)', webpage, 'title',
|
{"golden_diff": "diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py\n--- a/youtube_dl/extractor/yourporn.py\n+++ b/youtube_dl/extractor/yourporn.py\n@@ -26,7 +26,7 @@\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n group='data'),\n- video_id)[video_id]).replace('/cdn/', '/cdn2/')\n+ video_id)[video_id]).replace('/cdn/', '/cdn3/')\n \n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n", "issue": "Extractor for yourporn.sexy is broken\n## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.12.03*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.12.03**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [x] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n```\r\n$ youtube-dl -v https://yourporn.sexy/post/5bf56573616c2.html\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v', u'https://yourporn.sexy/post/5bf56573616c2.html']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2018.12.03\r\n[debug] Python version 2.7.10 (CPython) - Darwin-17.7.0-x86_64-i386-64bit\r\n[debug] exe versions: ffmpeg 4.0.2, ffprobe 4.0.2\r\n[debug] Proxy map: {}\r\n[YourPorn] 5bf56573616c2: Downloading webpage\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on u'https://yourporn.sexy/cdn/c11/ldjJi9usRy26gVwhgzEn9w/1544086469/hk5sajembx0dd41hcp09ah8m3s2/25qb3fr5d605l7m316y1969c42k.mp4'\r\n\r\n\r\nERROR: Did not get any data blocks\r\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 162, in _run_module_as_main\r\n \"__main__\", fname, loader, pkg_name)\r\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/Users/v-delta/.local/bin/youtube-dl/__main__.py\", line 19, in <module>\r\n youtube_dl.main()\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py\", line 472, in main\r\n _real_main(argv)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py\", line 462, in _real_main\r\n retcode 
= ydl.download(all_urls)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 2001, in download\r\n url, force_generic_extractor=self.params.get('force_generic_extractor', False))\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 803, in extract_info\r\n return self.process_ie_result(ie_result, download, extra_info)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 857, in process_ie_result\r\n return self.process_video_result(ie_result, download=download)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1635, in process_video_result\r\n self.process_info(new_info)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1908, in process_info\r\n success = dl(filename, info_dict)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1847, in dl\r\n return fd.download(name, info)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py\", line 364, in download\r\n return self.real_download(filename, info_dict)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py\", line 342, in real_download\r\n return download()\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py\", line 312, in download\r\n self.report_error('Did not get any data blocks')\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py\", line 165, in report_error\r\n self.ydl.report_error(*args, **kargs)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 620, in report_error\r\n self.trouble(error_message, tb)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 582, in trouble\r\n tb_data = traceback.format_list(traceback.extract_stack())\r\n```\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nThe videos play fine in any browser, that is because somewehre the URL the extractor delivers is changed from\r\n\r\n```\r\nhttps://yourporn.sexy/cdn/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4\r\n```\r\n\r\nto\r\n\r\n```\r\nhttps://yourporn.sexy/cdn2/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4\r\n```\r\n\r\nA `2` is inserted after `/cdn`. 
I will create a pull request fixing this bug soon.\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import urljoin\n\n\nclass YourPornIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?yourporn\\.sexy/post/(?P<id>[^/?#&.]+)'\n _TEST = {\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n 'id': '57ffcb2e1179b',\n 'ext': 'mp4',\n 'title': 'md5:c9f43630bd968267672651ba905a7d35',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n },\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = urljoin(url, self._parse_json(\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n group='data'),\n video_id)[video_id]).replace('/cdn/', '/cdn2/')\n\n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n default=None) or self._og_search_description(webpage)).strip()\n thumbnail = self._og_search_thumbnail(webpage)\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'title': title,\n 'thumbnail': thumbnail,\n }\n", "path": "youtube_dl/extractor/yourporn.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import urljoin\n\n\nclass YourPornIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?yourporn\\.sexy/post/(?P<id>[^/?#&.]+)'\n _TEST = {\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n 'id': '57ffcb2e1179b',\n 'ext': 'mp4',\n 'title': 'md5:c9f43630bd968267672651ba905a7d35',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n },\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = urljoin(url, self._parse_json(\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n group='data'),\n video_id)[video_id]).replace('/cdn/', '/cdn3/')\n\n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n default=None) or self._og_search_description(webpage)).strip()\n thumbnail = self._og_search_thumbnail(webpage)\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'title': title,\n 'thumbnail': thumbnail,\n }\n", "path": "youtube_dl/extractor/yourporn.py"}]}
| 2,480 | 172 |
gh_patches_debug_42018
|
rasdani/github-patches
|
git_diff
|
facebookresearch__ParlAI-949
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add BLEU as a metric
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parlai/core/metrics.py`
Content:
```
1 # Copyright (c) 2017-present, Facebook, Inc.
2 # All rights reserved.
3 # This source code is licensed under the BSD-style license found in the
4 # LICENSE file in the root directory of this source tree. An additional grant
5 # of patent rights can be found in the PATENTS file in the same directory.
6 """Provides standard metric evaluations for dialog.
7 Uses locking and shared memory when ``numthreads`` is set to >1 to share metrics
8 between processes.
9 """
10
11 from parlai.core.thread_utils import SharedTable
12 from parlai.core.utils import round_sigfigs, no_lock
13 from collections import Counter
14
15 import re
16 import math
17
18 re_art = re.compile(r'\b(a|an|the)\b')
19 re_punc = re.compile(r'[!"#$%&()*+,-./:;<=>?@\[\]\\^`{|}~_\']')
20 def normalize_answer(s):
21 """Lower text and remove punctuation, articles and extra whitespace."""
22 def remove_articles(text):
23 return re_art.sub(' ', text)
24
25 def white_space_fix(text):
26 return ' '.join(text.split())
27
28 def remove_punc(text):
29 return re_punc.sub(' ', text) # convert punctuation to spaces
30
31 def lower(text):
32 return text.lower()
33
34 return white_space_fix(remove_articles(remove_punc(lower(s))))
35
36
37 def _exact_match(guess, answers):
38 """Check if guess is a (normalized) exact match with any answer."""
39 if guess is None or answers is None:
40 return False
41 guess = normalize_answer(guess)
42 for a in answers:
43 if guess == normalize_answer(a):
44 return True
45 return False
46
47
48 def _f1_score(guess, answers):
49 """Return the max F1 score between the guess and any answer."""
50 def _score(g_tokens, a_tokens):
51 common = Counter(g_tokens) & Counter(a_tokens)
52 num_same = sum(common.values())
53 if num_same == 0:
54 return 0
55 precision = 1.0 * num_same / len(g_tokens)
56 recall = 1.0 * num_same / len(a_tokens)
57 f1 = (2 * precision * recall) / (precision + recall)
58 return f1
59
60 if guess is None or answers is None:
61 return 0
62 g_tokens = normalize_answer(guess).split()
63 scores = [_score(g_tokens, normalize_answer(a).split()) for a in answers]
64 return max(scores)
65
66
67 def aggregate_metrics(reporters):
68 #reporters is a list of teachers or worlds
69 m = {}
70 m['tasks'] = {}
71 sums = {'accuracy': 0, 'f1': 0, 'loss': 0, 'ppl': 0}
72 num_tasks = 0
73 total = 0
74 for i in range(len(reporters)):
75 tid = reporters[i].getID()
76 mt = reporters[i].report()
77 while tid in m['tasks']:
78 # prevent name cloberring if using multiple tasks with same ID
79 tid += '_'
80 m['tasks'][tid] = mt
81 total += mt['exs']
82 found_any = False
83 for k in sums.keys():
84 if k in mt:
85 sums[k] += mt[k]
86 found_any = True
87 if found_any:
88 num_tasks += 1
89 m['exs'] = total
90 m['accuracy'] = 0
91 if num_tasks > 0:
92 for k in sums.keys():
93 m[k] = round_sigfigs(sums[k] / num_tasks, 4)
94 return m
95
96
97 def compute_time_metrics(world, max_time):
98 # Determine time_left and num_epochs
99 exs_per_epoch = world.num_examples() if world.num_examples() else 0
100 num_epochs = world.opt.get('num_epochs', 0)
101 max_exs = exs_per_epoch * num_epochs
102 total_exs = world.get_total_exs()
103
104 m = {}
105 if (max_exs > 0 and total_exs > 0) or max_time > 0:
106 m = {}
107 time_left = None
108 time = world.get_time()
109 total_epochs = world.get_total_epochs()
110
111 if (num_epochs > 0 and total_exs > 0 and max_exs > 0):
112 exs_per_sec = time / total_exs
113 time_left = (max_exs - total_exs) * exs_per_sec
114 if max_time > 0:
115 other_time_left = max_time - time
116 if time_left is not None:
117 time_left = min(time_left, other_time_left)
118 else:
119 time_left = other_time_left
120 if time_left is not None:
121 m['time_left'] = math.floor(time_left)
122 if num_epochs > 0:
123 if (total_exs > 0 and exs_per_epoch > 0):
124 display_epochs = int(total_exs / exs_per_epoch)
125 else:
126 display_epochs = total_epochs
127 m['num_epochs'] = display_epochs
128 return m
129
130
131 class Metrics(object):
132 """Class that maintains evaluation metrics over dialog."""
133
134 def __init__(self, opt):
135 self.metrics = {}
136 self.metrics['cnt'] = 0
137 self.metrics_list = ['mean_rank', 'loss', 'correct', 'f1', 'ppl']
138 for k in self.metrics_list:
139 self.metrics[k] = 0.0
140 self.metrics[k + '_cnt'] = 0
141 self.eval_pr = [1, 5, 10, 100]
142 for k in self.eval_pr:
143 self.metrics['hits@' + str(k)] = 0
144 self.metrics['hits@_cnt'] = 0
145 self.flags = {'has_text_cands': False, 'print_prediction_metrics': False}
146 if opt.get('numthreads', 1) > 1:
147 self.metrics = SharedTable(self.metrics)
148 self.flags = SharedTable(self.flags)
149
150 def __str__(self):
151 return str(self.metrics)
152
153 def __repr__(self):
154 representation = super().__repr__()
155 return representation.replace('>', ': {}>'.format(repr(self.metrics)))
156
157 def _lock(self):
158 if hasattr(self.metrics, 'get_lock'):
159 # use the shared_table's lock
160 return self.metrics.get_lock()
161 else:
162 # otherwise do nothing
163 return no_lock()
164
165 def update_ranking_metrics(self, observation, labels):
166 text_cands = observation.get('text_candidates', None)
167 if text_cands is None:
168 return
169 else:
170 text = observation.get('text', None)
171
172 # Now loop through text candidates, assuming they are sorted.
173 # If any of them is a label then score a point.
174 # maintain hits@1, 5, 10, 50, 100, etc.
175 label_set = set(normalize_answer(l) for l in labels)
176 cnts = {k: 0 for k in self.eval_pr}
177 cnt = 0
178 for c in text_cands:
179 cnt += 1
180 if normalize_answer(c) in label_set:
181 for k in self.eval_pr:
182 if cnt <= k:
183 cnts[k] += 1
184 # hits metric is 1 if cnts[k] > 0.
185 # (other metrics such as p@k and r@k take
186 # the value of cnt into account.)
187 with self._lock():
188 self.flags['has_text_cands'] = True
189 for k in self.eval_pr:
190 if cnts[k] > 0:
191 self.metrics['hits@' + str(k)] += 1
192 self.metrics['hits@_cnt'] += 1
193
194 def update(self, observation, labels):
195 with self._lock():
196 self.metrics['cnt'] += 1
197
198 # Exact match metric.
199 correct = 0
200 prediction = observation.get('text', None)
201 if prediction is not None:
202 if _exact_match(prediction, labels):
203 correct = 1
204 with self._lock():
205 self.flags['print_prediction_metrics'] = True
206 self.metrics['correct'] += correct
207 self.metrics['correct_cnt'] += 1
208
209 # F1 metric.
210 f1 = _f1_score(prediction, labels)
211 with self._lock():
212 self.metrics['f1'] += f1
213 self.metrics['f1_cnt'] += 1
214
215 # Ranking metrics.
216 self.update_ranking_metrics(observation, labels)
217
218 # User-reported metrics
219 if 'metrics' in observation:
220 for k, v in observation['metrics'].items():
221 if k not in ['correct', 'f1', 'hits@k']:
222 if k in self.metrics_list:
223 with self._lock():
224 self.metrics[k] += v
225 self.metrics[k + '_cnt'] += 1
226 else:
227 if type(self.metrics) is SharedTable:
228 # can't share custom metrics during hogwild
229 pass
230 else:
231 # no need to lock because not SharedTable
232 if k not in self.metrics:
233 self.metrics[k] = v
234 self.metrics_list.append(k)
235 self.metrics[k + '_cnt'] = 1.0
236 else:
237 self.metrics[k] += v
238
239 # Return a dict containing the metrics for this specific example.
240 # Metrics across all data is stored internally in the class, and
241 # can be accessed with the report method.
242 loss = {}
243 loss['correct'] = correct
244 return loss
245
246 def report(self):
247 # Report the metrics over all data seen so far.
248 m = {}
249 total = self.metrics['cnt']
250 m['exs'] = total
251 if total > 0:
252 if self.flags['print_prediction_metrics']:
253 m['accuracy'] = round_sigfigs(self.metrics['correct'] / max(1, self.metrics['correct_cnt']), 4)
254 m['f1'] = round_sigfigs(self.metrics['f1'] / max(1, self.metrics['f1_cnt']), 4)
255 if self.flags['has_text_cands']:
256 for k in self.eval_pr:
257 m['hits@' + str(k)] = round_sigfigs(
258 self.metrics['hits@' + str(k)] / max(1, self.metrics['hits@_cnt']), 3)
259 for k in self.metrics_list:
260 if self.metrics[k + '_cnt'] > 0 and k != 'correct' and k != 'f1':
261 m[k] = round_sigfigs(self.metrics[k] / max(1, self.metrics[k + '_cnt']), 4)
262 return m
263
264 def clear(self):
265 with self._lock():
266 self.metrics['cnt'] = 0
267 for k in self.metrics_list:
268 v = self.metrics[k]
269 v_typ = type(v)
270 if 'Tensor' in str(v_typ):
271 self.metrics[k].zero_()
272 else:
273 self.metrics[k] = 0.0
274 self.metrics[k + '_cnt'] = 0
275 for k in self.eval_pr:
276 self.metrics['hits@' + str(k)] = 0
277 self.metrics['hits@_cnt'] = 0
278
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parlai/core/metrics.py b/parlai/core/metrics.py
--- a/parlai/core/metrics.py
+++ b/parlai/core/metrics.py
@@ -15,8 +15,17 @@
import re
import math
+try:
+ from nltk.translate import bleu_score as nltkbleu
+except ImportError:
+ # User doesn't have nltk installed, so we can't use it for bleu
+ # We'll just turn off things, but we might want to warn the user
+ nltkbleu = None
+
re_art = re.compile(r'\b(a|an|the)\b')
re_punc = re.compile(r'[!"#$%&()*+,-./:;<=>?@\[\]\\^`{|}~_\']')
+
+
def normalize_answer(s):
"""Lower text and remove punctuation, articles and extra whitespace."""
def remove_articles(text):
@@ -64,11 +73,29 @@
return max(scores)
+def _bleu(guess, answers):
+ if nltkbleu is None:
+ # bleu library not installed, just return a default value
+ return None
+ # Warning: BLEU calculation *should* include proper tokenization and
+ # punctuation etc. We're using the normalize_answer for everything though,
+ # so we're over-estimating our BLEU scores. Also note that NLTK's bleu is
+ # going to be slower than fairseq's (which is written in C), but fairseq's
+ # requires that everything be in arrays of ints (i.e. as tensors). NLTK's
+ # works with strings, which is better suited for this module.
+ return nltkbleu.sentence_bleu(
+ [normalize_answer(a).split(" ") for a in answers],
+ normalize_answer(guess).split(" ")
+ )
+
+
def aggregate_metrics(reporters):
#reporters is a list of teachers or worlds
m = {}
m['tasks'] = {}
sums = {'accuracy': 0, 'f1': 0, 'loss': 0, 'ppl': 0}
+ if nltkbleu is not None:
+ sums['bleu'] = 0
num_tasks = 0
total = 0
for i in range(len(reporters)):
@@ -135,6 +162,9 @@
self.metrics = {}
self.metrics['cnt'] = 0
self.metrics_list = ['mean_rank', 'loss', 'correct', 'f1', 'ppl']
+ if nltkbleu is not None:
+ # only compute bleu if we can
+ self.metrics_list.append('bleu')
for k in self.metrics_list:
self.metrics[k] = 0.0
self.metrics[k + '_cnt'] = 0
@@ -206,11 +236,15 @@
self.metrics['correct'] += correct
self.metrics['correct_cnt'] += 1
- # F1 metric.
+ # F1 and BLEU metrics.
f1 = _f1_score(prediction, labels)
+ bleu = _bleu(prediction, labels)
with self._lock():
self.metrics['f1'] += f1
self.metrics['f1_cnt'] += 1
+ if bleu is not None:
+ self.metrics['bleu'] += bleu
+ self.metrics['bleu_cnt'] += 1
# Ranking metrics.
self.update_ranking_metrics(observation, labels)
@@ -218,7 +252,7 @@
# User-reported metrics
if 'metrics' in observation:
for k, v in observation['metrics'].items():
- if k not in ['correct', 'f1', 'hits@k']:
+ if k not in ['correct', 'f1', 'hits@k', 'bleu']:
if k in self.metrics_list:
with self._lock():
self.metrics[k] += v
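
The patch above computes BLEU with NLTK's `sentence_bleu`, feeding it the normalized, whitespace-tokenized guess and reference answers. A minimal standalone sketch of that call pattern (the example strings are hypothetical, tokenization is simplified to lowercase whitespace splitting rather than the full `normalize_answer`, and `nltk` must be installed):

```python
# Minimal sketch of the BLEU call wired in by the patch above (assumes nltk is installed).
from nltk.translate import bleu_score as nltkbleu

def bleu(guess, answers):
    # Simplified tokenization: lowercase + whitespace split
    # (the patch additionally strips punctuation and articles).
    references = [a.lower().split() for a in answers]
    hypothesis = guess.lower().split()
    return nltkbleu.sentence_bleu(references, hypothesis)

# Hypothetical strings; an exact match against one reference scores 1.0.
print(round(bleu("the cat sat on the mat",
                 ["the cat sat on the mat", "a cat was on the mat"]), 3))
```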
|
{"golden_diff": "diff --git a/parlai/core/metrics.py b/parlai/core/metrics.py\n--- a/parlai/core/metrics.py\n+++ b/parlai/core/metrics.py\n@@ -15,8 +15,17 @@\n import re\n import math\n \n+try:\n+ from nltk.translate import bleu_score as nltkbleu\n+except ImportError:\n+ # User doesn't have nltk installed, so we can't use it for bleu\n+ # We'll just turn off things, but we might want to warn the user\n+ nltkbleu = None\n+\n re_art = re.compile(r'\\b(a|an|the)\\b')\n re_punc = re.compile(r'[!\"#$%&()*+,-./:;<=>?@\\[\\]\\\\^`{|}~_\\']')\n+\n+\n def normalize_answer(s):\n \"\"\"Lower text and remove punctuation, articles and extra whitespace.\"\"\"\n def remove_articles(text):\n@@ -64,11 +73,29 @@\n return max(scores)\n \n \n+def _bleu(guess, answers):\n+ if nltkbleu is None:\n+ # bleu library not installed, just return a default value\n+ return None\n+ # Warning: BLEU calculation *should* include proper tokenization and\n+ # punctuation etc. We're using the normalize_answer for everything though,\n+ # so we're over-estimating our BLEU scores. Also note that NLTK's bleu is\n+ # going to be slower than fairseq's (which is written in C), but fairseq's\n+ # requires that everything be in arrays of ints (i.e. as tensors). NLTK's\n+ # works with strings, which is better suited for this module.\n+ return nltkbleu.sentence_bleu(\n+ [normalize_answer(a).split(\" \") for a in answers],\n+ normalize_answer(guess).split(\" \")\n+ )\n+\n+\n def aggregate_metrics(reporters):\n #reporters is a list of teachers or worlds\n m = {}\n m['tasks'] = {}\n sums = {'accuracy': 0, 'f1': 0, 'loss': 0, 'ppl': 0}\n+ if nltkbleu is not None:\n+ sums['bleu'] = 0\n num_tasks = 0\n total = 0\n for i in range(len(reporters)):\n@@ -135,6 +162,9 @@\n self.metrics = {}\n self.metrics['cnt'] = 0\n self.metrics_list = ['mean_rank', 'loss', 'correct', 'f1', 'ppl']\n+ if nltkbleu is not None:\n+ # only compute bleu if we can\n+ self.metrics_list.append('bleu')\n for k in self.metrics_list:\n self.metrics[k] = 0.0\n self.metrics[k + '_cnt'] = 0\n@@ -206,11 +236,15 @@\n self.metrics['correct'] += correct\n self.metrics['correct_cnt'] += 1\n \n- # F1 metric.\n+ # F1 and BLEU metrics.\n f1 = _f1_score(prediction, labels)\n+ bleu = _bleu(prediction, labels)\n with self._lock():\n self.metrics['f1'] += f1\n self.metrics['f1_cnt'] += 1\n+ if bleu is not None:\n+ self.metrics['bleu'] += bleu\n+ self.metrics['bleu_cnt'] += 1\n \n # Ranking metrics.\n self.update_ranking_metrics(observation, labels)\n@@ -218,7 +252,7 @@\n # User-reported metrics\n if 'metrics' in observation:\n for k, v in observation['metrics'].items():\n- if k not in ['correct', 'f1', 'hits@k']:\n+ if k not in ['correct', 'f1', 'hits@k', 'bleu']:\n if k in self.metrics_list:\n with self._lock():\n self.metrics[k] += v\n", "issue": "Add BLEU as a metric\n\n", "before_files": [{"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. 
An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\n\"\"\"Provides standard metric evaluations for dialog.\nUses locking and shared memory when ``numthreads`` is set to >1 to share metrics\nbetween processes.\n\"\"\"\n\nfrom parlai.core.thread_utils import SharedTable\nfrom parlai.core.utils import round_sigfigs, no_lock\nfrom collections import Counter\n\nimport re\nimport math\n\nre_art = re.compile(r'\\b(a|an|the)\\b')\nre_punc = re.compile(r'[!\"#$%&()*+,-./:;<=>?@\\[\\]\\\\^`{|}~_\\']')\ndef normalize_answer(s):\n \"\"\"Lower text and remove punctuation, articles and extra whitespace.\"\"\"\n def remove_articles(text):\n return re_art.sub(' ', text)\n\n def white_space_fix(text):\n return ' '.join(text.split())\n\n def remove_punc(text):\n return re_punc.sub(' ', text) # convert punctuation to spaces\n\n def lower(text):\n return text.lower()\n\n return white_space_fix(remove_articles(remove_punc(lower(s))))\n\n\ndef _exact_match(guess, answers):\n \"\"\"Check if guess is a (normalized) exact match with any answer.\"\"\"\n if guess is None or answers is None:\n return False\n guess = normalize_answer(guess)\n for a in answers:\n if guess == normalize_answer(a):\n return True\n return False\n\n\ndef _f1_score(guess, answers):\n \"\"\"Return the max F1 score between the guess and any answer.\"\"\"\n def _score(g_tokens, a_tokens):\n common = Counter(g_tokens) & Counter(a_tokens)\n num_same = sum(common.values())\n if num_same == 0:\n return 0\n precision = 1.0 * num_same / len(g_tokens)\n recall = 1.0 * num_same / len(a_tokens)\n f1 = (2 * precision * recall) / (precision + recall)\n return f1\n\n if guess is None or answers is None:\n return 0\n g_tokens = normalize_answer(guess).split()\n scores = [_score(g_tokens, normalize_answer(a).split()) for a in answers]\n return max(scores)\n\n\ndef aggregate_metrics(reporters):\n #reporters is a list of teachers or worlds\n m = {}\n m['tasks'] = {}\n sums = {'accuracy': 0, 'f1': 0, 'loss': 0, 'ppl': 0}\n num_tasks = 0\n total = 0\n for i in range(len(reporters)):\n tid = reporters[i].getID()\n mt = reporters[i].report()\n while tid in m['tasks']:\n # prevent name cloberring if using multiple tasks with same ID\n tid += '_'\n m['tasks'][tid] = mt\n total += mt['exs']\n found_any = False\n for k in sums.keys():\n if k in mt:\n sums[k] += mt[k]\n found_any = True\n if found_any:\n num_tasks += 1\n m['exs'] = total\n m['accuracy'] = 0\n if num_tasks > 0:\n for k in sums.keys():\n m[k] = round_sigfigs(sums[k] / num_tasks, 4)\n return m\n\n\ndef compute_time_metrics(world, max_time):\n # Determine time_left and num_epochs\n exs_per_epoch = world.num_examples() if world.num_examples() else 0\n num_epochs = world.opt.get('num_epochs', 0)\n max_exs = exs_per_epoch * num_epochs\n total_exs = world.get_total_exs()\n\n m = {}\n if (max_exs > 0 and total_exs > 0) or max_time > 0:\n m = {}\n time_left = None\n time = world.get_time()\n total_epochs = world.get_total_epochs()\n\n if (num_epochs > 0 and total_exs > 0 and max_exs > 0):\n exs_per_sec = time / total_exs\n time_left = (max_exs - total_exs) * exs_per_sec\n if max_time > 0:\n other_time_left = max_time - time\n if time_left is not None:\n time_left = min(time_left, other_time_left)\n else:\n time_left = other_time_left\n if time_left is not None:\n m['time_left'] = math.floor(time_left)\n if num_epochs > 0:\n if (total_exs > 0 and exs_per_epoch > 0):\n display_epochs = int(total_exs / exs_per_epoch)\n else:\n display_epochs = total_epochs\n 
m['num_epochs'] = display_epochs\n return m\n\n\nclass Metrics(object):\n \"\"\"Class that maintains evaluation metrics over dialog.\"\"\"\n\n def __init__(self, opt):\n self.metrics = {}\n self.metrics['cnt'] = 0\n self.metrics_list = ['mean_rank', 'loss', 'correct', 'f1', 'ppl']\n for k in self.metrics_list:\n self.metrics[k] = 0.0\n self.metrics[k + '_cnt'] = 0\n self.eval_pr = [1, 5, 10, 100]\n for k in self.eval_pr:\n self.metrics['hits@' + str(k)] = 0\n self.metrics['hits@_cnt'] = 0\n self.flags = {'has_text_cands': False, 'print_prediction_metrics': False}\n if opt.get('numthreads', 1) > 1:\n self.metrics = SharedTable(self.metrics)\n self.flags = SharedTable(self.flags)\n\n def __str__(self):\n return str(self.metrics)\n\n def __repr__(self):\n representation = super().__repr__()\n return representation.replace('>', ': {}>'.format(repr(self.metrics)))\n\n def _lock(self):\n if hasattr(self.metrics, 'get_lock'):\n # use the shared_table's lock\n return self.metrics.get_lock()\n else:\n # otherwise do nothing\n return no_lock()\n\n def update_ranking_metrics(self, observation, labels):\n text_cands = observation.get('text_candidates', None)\n if text_cands is None:\n return\n else:\n text = observation.get('text', None)\n\n # Now loop through text candidates, assuming they are sorted.\n # If any of them is a label then score a point.\n # maintain hits@1, 5, 10, 50, 100, etc.\n label_set = set(normalize_answer(l) for l in labels)\n cnts = {k: 0 for k in self.eval_pr}\n cnt = 0\n for c in text_cands:\n cnt += 1\n if normalize_answer(c) in label_set:\n for k in self.eval_pr:\n if cnt <= k:\n cnts[k] += 1\n # hits metric is 1 if cnts[k] > 0.\n # (other metrics such as p@k and r@k take\n # the value of cnt into account.)\n with self._lock():\n self.flags['has_text_cands'] = True\n for k in self.eval_pr:\n if cnts[k] > 0:\n self.metrics['hits@' + str(k)] += 1\n self.metrics['hits@_cnt'] += 1\n\n def update(self, observation, labels):\n with self._lock():\n self.metrics['cnt'] += 1\n\n # Exact match metric.\n correct = 0\n prediction = observation.get('text', None)\n if prediction is not None:\n if _exact_match(prediction, labels):\n correct = 1\n with self._lock():\n self.flags['print_prediction_metrics'] = True\n self.metrics['correct'] += correct\n self.metrics['correct_cnt'] += 1\n\n # F1 metric.\n f1 = _f1_score(prediction, labels)\n with self._lock():\n self.metrics['f1'] += f1\n self.metrics['f1_cnt'] += 1\n\n # Ranking metrics.\n self.update_ranking_metrics(observation, labels)\n\n # User-reported metrics\n if 'metrics' in observation:\n for k, v in observation['metrics'].items():\n if k not in ['correct', 'f1', 'hits@k']:\n if k in self.metrics_list:\n with self._lock():\n self.metrics[k] += v\n self.metrics[k + '_cnt'] += 1\n else:\n if type(self.metrics) is SharedTable:\n # can't share custom metrics during hogwild\n pass\n else:\n # no need to lock because not SharedTable\n if k not in self.metrics:\n self.metrics[k] = v\n self.metrics_list.append(k)\n self.metrics[k + '_cnt'] = 1.0\n else:\n self.metrics[k] += v\n\n # Return a dict containing the metrics for this specific example.\n # Metrics across all data is stored internally in the class, and\n # can be accessed with the report method.\n loss = {}\n loss['correct'] = correct\n return loss\n\n def report(self):\n # Report the metrics over all data seen so far.\n m = {}\n total = self.metrics['cnt']\n m['exs'] = total\n if total > 0:\n if self.flags['print_prediction_metrics']:\n m['accuracy'] = 
round_sigfigs(self.metrics['correct'] / max(1, self.metrics['correct_cnt']), 4)\n m['f1'] = round_sigfigs(self.metrics['f1'] / max(1, self.metrics['f1_cnt']), 4)\n if self.flags['has_text_cands']:\n for k in self.eval_pr:\n m['hits@' + str(k)] = round_sigfigs(\n self.metrics['hits@' + str(k)] / max(1, self.metrics['hits@_cnt']), 3)\n for k in self.metrics_list:\n if self.metrics[k + '_cnt'] > 0 and k != 'correct' and k != 'f1':\n m[k] = round_sigfigs(self.metrics[k] / max(1, self.metrics[k + '_cnt']), 4)\n return m\n\n def clear(self):\n with self._lock():\n self.metrics['cnt'] = 0\n for k in self.metrics_list:\n v = self.metrics[k]\n v_typ = type(v)\n if 'Tensor' in str(v_typ):\n self.metrics[k].zero_()\n else:\n self.metrics[k] = 0.0\n self.metrics[k + '_cnt'] = 0\n for k in self.eval_pr:\n self.metrics['hits@' + str(k)] = 0\n self.metrics['hits@_cnt'] = 0\n", "path": "parlai/core/metrics.py"}], "after_files": [{"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\n\"\"\"Provides standard metric evaluations for dialog.\nUses locking and shared memory when ``numthreads`` is set to >1 to share metrics\nbetween processes.\n\"\"\"\n\nfrom parlai.core.thread_utils import SharedTable\nfrom parlai.core.utils import round_sigfigs, no_lock\nfrom collections import Counter\n\nimport re\nimport math\n\ntry:\n from nltk.translate import bleu_score as nltkbleu\nexcept ImportError:\n # User doesn't have nltk installed, so we can't use it for bleu\n # We'll just turn off things, but we might want to warn the user\n nltkbleu = None\n\nre_art = re.compile(r'\\b(a|an|the)\\b')\nre_punc = re.compile(r'[!\"#$%&()*+,-./:;<=>?@\\[\\]\\\\^`{|}~_\\']')\n\n\ndef normalize_answer(s):\n \"\"\"Lower text and remove punctuation, articles and extra whitespace.\"\"\"\n def remove_articles(text):\n return re_art.sub(' ', text)\n\n def white_space_fix(text):\n return ' '.join(text.split())\n\n def remove_punc(text):\n return re_punc.sub(' ', text) # convert punctuation to spaces\n\n def lower(text):\n return text.lower()\n\n return white_space_fix(remove_articles(remove_punc(lower(s))))\n\n\ndef _exact_match(guess, answers):\n \"\"\"Check if guess is a (normalized) exact match with any answer.\"\"\"\n if guess is None or answers is None:\n return False\n guess = normalize_answer(guess)\n for a in answers:\n if guess == normalize_answer(a):\n return True\n return False\n\n\ndef _f1_score(guess, answers):\n \"\"\"Return the max F1 score between the guess and any answer.\"\"\"\n def _score(g_tokens, a_tokens):\n common = Counter(g_tokens) & Counter(a_tokens)\n num_same = sum(common.values())\n if num_same == 0:\n return 0\n precision = 1.0 * num_same / len(g_tokens)\n recall = 1.0 * num_same / len(a_tokens)\n f1 = (2 * precision * recall) / (precision + recall)\n return f1\n\n if guess is None or answers is None:\n return 0\n g_tokens = normalize_answer(guess).split()\n scores = [_score(g_tokens, normalize_answer(a).split()) for a in answers]\n return max(scores)\n\n\ndef _bleu(guess, answers):\n if nltkbleu is None:\n # bleu library not installed, just return a default value\n return None\n # Warning: BLEU calculation *should* include proper tokenization and\n # punctuation etc. 
We're using the normalize_answer for everything though,\n # so we're over-estimating our BLEU scores. Also note that NLTK's bleu is\n # going to be slower than fairseq's (which is written in C), but fairseq's\n # requires that everything be in arrays of ints (i.e. as tensors). NLTK's\n # works with strings, which is better suited for this module.\n return nltkbleu.sentence_bleu(\n [normalize_answer(a).split(\" \") for a in answers],\n normalize_answer(guess).split(\" \")\n )\n\n\ndef aggregate_metrics(reporters):\n #reporters is a list of teachers or worlds\n m = {}\n m['tasks'] = {}\n sums = {'accuracy': 0, 'f1': 0, 'loss': 0, 'ppl': 0}\n if nltkbleu is not None:\n sums['bleu'] = 0\n num_tasks = 0\n total = 0\n for i in range(len(reporters)):\n tid = reporters[i].getID()\n mt = reporters[i].report()\n while tid in m['tasks']:\n # prevent name cloberring if using multiple tasks with same ID\n tid += '_'\n m['tasks'][tid] = mt\n total += mt['exs']\n found_any = False\n for k in sums.keys():\n if k in mt:\n sums[k] += mt[k]\n found_any = True\n if found_any:\n num_tasks += 1\n m['exs'] = total\n m['accuracy'] = 0\n if num_tasks > 0:\n for k in sums.keys():\n m[k] = round_sigfigs(sums[k] / num_tasks, 4)\n return m\n\n\ndef compute_time_metrics(world, max_time):\n # Determine time_left and num_epochs\n exs_per_epoch = world.num_examples() if world.num_examples() else 0\n num_epochs = world.opt.get('num_epochs', 0)\n max_exs = exs_per_epoch * num_epochs\n total_exs = world.get_total_exs()\n\n m = {}\n if (max_exs > 0 and total_exs > 0) or max_time > 0:\n m = {}\n time_left = None\n time = world.get_time()\n total_epochs = world.get_total_epochs()\n\n if (num_epochs > 0 and total_exs > 0 and max_exs > 0):\n exs_per_sec = time / total_exs\n time_left = (max_exs - total_exs) * exs_per_sec\n if max_time > 0:\n other_time_left = max_time - time\n if time_left is not None:\n time_left = min(time_left, other_time_left)\n else:\n time_left = other_time_left\n if time_left is not None:\n m['time_left'] = math.floor(time_left)\n if num_epochs > 0:\n if (total_exs > 0 and exs_per_epoch > 0):\n display_epochs = int(total_exs / exs_per_epoch)\n else:\n display_epochs = total_epochs\n m['num_epochs'] = display_epochs\n return m\n\n\nclass Metrics(object):\n \"\"\"Class that maintains evaluation metrics over dialog.\"\"\"\n\n def __init__(self, opt):\n self.metrics = {}\n self.metrics['cnt'] = 0\n self.metrics_list = ['mean_rank', 'loss', 'correct', 'f1', 'ppl']\n if nltkbleu is not None:\n # only compute bleu if we can\n self.metrics_list.append('bleu')\n for k in self.metrics_list:\n self.metrics[k] = 0.0\n self.metrics[k + '_cnt'] = 0\n self.eval_pr = [1, 5, 10, 100]\n for k in self.eval_pr:\n self.metrics['hits@' + str(k)] = 0\n self.metrics['hits@_cnt'] = 0\n self.flags = {'has_text_cands': False, 'print_prediction_metrics': False}\n if opt.get('numthreads', 1) > 1:\n self.metrics = SharedTable(self.metrics)\n self.flags = SharedTable(self.flags)\n\n def __str__(self):\n return str(self.metrics)\n\n def __repr__(self):\n representation = super().__repr__()\n return representation.replace('>', ': {}>'.format(repr(self.metrics)))\n\n def _lock(self):\n if hasattr(self.metrics, 'get_lock'):\n # use the shared_table's lock\n return self.metrics.get_lock()\n else:\n # otherwise do nothing\n return no_lock()\n\n def update_ranking_metrics(self, observation, labels):\n text_cands = observation.get('text_candidates', None)\n if text_cands is None:\n return\n else:\n text = observation.get('text', None)\n\n # Now 
loop through text candidates, assuming they are sorted.\n # If any of them is a label then score a point.\n # maintain hits@1, 5, 10, 50, 100, etc.\n label_set = set(normalize_answer(l) for l in labels)\n cnts = {k: 0 for k in self.eval_pr}\n cnt = 0\n for c in text_cands:\n cnt += 1\n if normalize_answer(c) in label_set:\n for k in self.eval_pr:\n if cnt <= k:\n cnts[k] += 1\n # hits metric is 1 if cnts[k] > 0.\n # (other metrics such as p@k and r@k take\n # the value of cnt into account.)\n with self._lock():\n self.flags['has_text_cands'] = True\n for k in self.eval_pr:\n if cnts[k] > 0:\n self.metrics['hits@' + str(k)] += 1\n self.metrics['hits@_cnt'] += 1\n\n def update(self, observation, labels):\n with self._lock():\n self.metrics['cnt'] += 1\n\n # Exact match metric.\n correct = 0\n prediction = observation.get('text', None)\n if prediction is not None:\n if _exact_match(prediction, labels):\n correct = 1\n with self._lock():\n self.flags['print_prediction_metrics'] = True\n self.metrics['correct'] += correct\n self.metrics['correct_cnt'] += 1\n\n # F1 and BLEU metrics.\n f1 = _f1_score(prediction, labels)\n bleu = _bleu(prediction, labels)\n with self._lock():\n self.metrics['f1'] += f1\n self.metrics['f1_cnt'] += 1\n if bleu is not None:\n self.metrics['bleu'] += bleu\n self.metrics['bleu_cnt'] += 1\n\n # Ranking metrics.\n self.update_ranking_metrics(observation, labels)\n\n # User-reported metrics\n if 'metrics' in observation:\n for k, v in observation['metrics'].items():\n if k not in ['correct', 'f1', 'hits@k', 'bleu']:\n if k in self.metrics_list:\n with self._lock():\n self.metrics[k] += v\n self.metrics[k + '_cnt'] += 1\n else:\n if type(self.metrics) is SharedTable:\n # can't share custom metrics during hogwild\n pass\n else:\n # no need to lock because not SharedTable\n if k not in self.metrics:\n self.metrics[k] = v\n self.metrics_list.append(k)\n self.metrics[k + '_cnt'] = 1.0\n else:\n self.metrics[k] += v\n\n # Return a dict containing the metrics for this specific example.\n # Metrics across all data is stored internally in the class, and\n # can be accessed with the report method.\n loss = {}\n loss['correct'] = correct\n return loss\n\n def report(self):\n # Report the metrics over all data seen so far.\n m = {}\n total = self.metrics['cnt']\n m['exs'] = total\n if total > 0:\n if self.flags['print_prediction_metrics']:\n m['accuracy'] = round_sigfigs(self.metrics['correct'] / max(1, self.metrics['correct_cnt']), 4)\n m['f1'] = round_sigfigs(self.metrics['f1'] / max(1, self.metrics['f1_cnt']), 4)\n if self.flags['has_text_cands']:\n for k in self.eval_pr:\n m['hits@' + str(k)] = round_sigfigs(\n self.metrics['hits@' + str(k)] / max(1, self.metrics['hits@_cnt']), 3)\n for k in self.metrics_list:\n if self.metrics[k + '_cnt'] > 0 and k != 'correct' and k != 'f1':\n m[k] = round_sigfigs(self.metrics[k] / max(1, self.metrics[k + '_cnt']), 4)\n return m\n\n def clear(self):\n with self._lock():\n self.metrics['cnt'] = 0\n for k in self.metrics_list:\n v = self.metrics[k]\n v_typ = type(v)\n if 'Tensor' in str(v_typ):\n self.metrics[k].zero_()\n else:\n self.metrics[k] = 0.0\n self.metrics[k + '_cnt'] = 0\n for k in self.eval_pr:\n self.metrics['hits@' + str(k)] = 0\n self.metrics['hits@_cnt'] = 0\n", "path": "parlai/core/metrics.py"}]}
| 3,431 | 895 |
gh_patches_debug_33802
|
rasdani/github-patches
|
git_diff
|
ESMCI__cime-2501
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
cesm2_alpha10d doesn't attempt to download missing input files
I recently ported both alpha10c and alpha10d on my macbook. While alpha10c successfully downloads all missing input files, alpha10d doesn't appear to attempt to download missing input files. I've encountered this behavior both with A and C compsets.
Here is a snippet of ./case.submit standard output for alpha10c:
```
Loading input file list: 'Buildconf/cpl.input_data_list'
Model cpl missing file ocn2atm_fmapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'
Trying to download file: 'cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc' to path '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'
SUCCESS
```
And here is the corresponding ./case.submit standard output for alpha10d:
```
Loading input file list: 'Buildconf/cpl.input_data_list'
Model cpl missing file ocn2atm_fmapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'
Model cpl missing file ocn2atm_smapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'
Model cpl missing file ice2atm_fmapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'
Model cpl missing file ice2atm_smapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'
```
While alpha10c runs successfully on my MacBook (High Sierra), alpha10d eventually fails due to missing input files.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/lib/CIME/case/check_input_data.py`
Content:
```
1 """
2 API for checking input for testcase
3 """
4 from CIME.XML.standard_module_setup import *
5 from CIME.utils import SharedArea, find_files, safe_copy, expect
6 from CIME.XML.inputdata import Inputdata
7 import CIME.Servers
8
9 import glob
10
11 logger = logging.getLogger(__name__)
12
13 def _download_if_in_repo(server, input_data_root, rel_path, isdirectory=False):
14 """
15 Return True if successfully downloaded
16 """
17 if not server.fileexists(rel_path):
18 return False
19
20 full_path = os.path.join(input_data_root, rel_path)
21 logging.info("Trying to download file: '{}' to path '{}'".format(rel_path, full_path))
22 # Make sure local path exists, create if it does not
23 if isdirectory or full_path.endswith(os.sep):
24 if not os.path.exists(full_path):
25 logger.info("Creating directory {}".format(full_path))
26 os.makedirs(full_path)
27 isdirectory = True
28 elif not os.path.exists(os.path.dirname(full_path)):
29 os.makedirs(os.path.dirname(full_path))
30
31 # Use umask to make sure files are group read/writable. As long as parent directories
32 # have +s, then everything should work.
33 with SharedArea():
34 if isdirectory:
35 return server.getdirectory(rel_path, full_path)
36 else:
37 return server.getfile(rel_path, full_path)
38
39 ###############################################################################
40 def check_all_input_data(self, protocal=None, address=None, input_data_root=None, data_list_dir="Buildconf", download=True):
41 ###############################################################################
42 success = False
43 if protocal is not None and address is not None:
44 success = self.check_input_data(protocal=protocal, address=address, download=download,
45 input_data_root=input_data_root, data_list_dir=data_list_dir)
46 else:
47 success = self.check_input_data(protocal=protocal, address=address, download=False,
48 input_data_root=input_data_root, data_list_dir=data_list_dir)
49 if download and not success:
50 success = _downloadfromserver(self, input_data_root, data_list_dir)
51
52 self.stage_refcase(input_data_root=input_data_root, data_list_dir=data_list_dir)
53 return success
54
55 def _downloadfromserver(case, input_data_root, data_list_dir):
56 # needs to be downloaded
57 success = False
58 protocal = 'svn'
59 inputdata = Inputdata()
60 while not success and protocal is not None:
61 protocal, address = inputdata.get_next_server()
62 logger.info("Checking server {} with protocal {}".format(address, protocal))
63 success = case.check_input_data(protocal=protocal, address=address, download=True,
64 input_data_root=input_data_root, data_list_dir=data_list_dir)
65 return success
66
67 def stage_refcase(self, input_data_root=None, data_list_dir=None):
68 get_refcase = self.get_value("GET_REFCASE")
69 run_type = self.get_value("RUN_TYPE")
70 continue_run = self.get_value("CONTINUE_RUN")
71
72 # We do not fully populate the inputdata directory on every
73 # machine and do not expect every user to download the 3TB+ of
74 # data in our inputdata repository. This code checks for the
75 # existence of inputdata in the local inputdata directory and
76 # attempts to download data from the server if it's needed and
77 # missing.
78 if get_refcase and run_type != "startup" and not continue_run:
79 din_loc_root = self.get_value("DIN_LOC_ROOT")
80 run_refdate = self.get_value("RUN_REFDATE")
81 run_refcase = self.get_value("RUN_REFCASE")
82 run_refdir = self.get_value("RUN_REFDIR")
83 rundir = self.get_value("RUNDIR")
84
85 refdir = os.path.join(din_loc_root, run_refdir, run_refcase, run_refdate)
86 if not os.path.isdir(refdir):
87 logger.warning("Refcase not found in {}, will attempt to download from inputdata".format(refdir))
88 with open(os.path.join("Buildconf","refcase.input_data_list"),"w") as fd:
89 fd.write("refdir = {}{}".format(refdir, os.sep))
90 if input_data_root is None:
91 input_data_root = din_loc_root
92 if data_list_dir is None:
93 data_list_dir = "Buildconf"
94 success = _downloadfromserver(self, input_data_root=input_data_root, data_list_dir=data_list_dir)
95 expect(success, "Could not download refcase from any server")
96
97 logger.info(" - Prestaging REFCASE ({}) to {}".format(refdir, rundir))
98
99 # prestage the reference case's files.
100
101 if (not os.path.exists(rundir)):
102 logger.debug("Creating run directory: {}".format(rundir))
103 os.makedirs(rundir)
104
105 # copy the refcases' rpointer files to the run directory
106 for rpointerfile in glob.iglob(os.path.join("{}","*rpointer*").format(refdir)):
107 logger.info("Copy rpointer {}".format(rpointerfile))
108 safe_copy(rpointerfile, rundir)
109
110 # link everything else
111
112 for rcfile in glob.iglob(os.path.join(refdir,"*")):
113 rcbaseline = os.path.basename(rcfile)
114 if not os.path.exists("{}/{}".format(rundir, rcbaseline)):
115 logger.info("Staging file {}".format(rcfile))
116 os.symlink(rcfile, "{}/{}".format(rundir, rcbaseline))
117 # Backward compatibility, some old refcases have cam2 in the name
118 # link to local cam file.
119 for cam2file in glob.iglob(os.path.join("{}","*.cam2.*").format(rundir)):
120 camfile = cam2file.replace("cam2", "cam")
121 os.symlink(cam2file, camfile)
122
123 return True
124
125 def check_input_data(case, protocal="svn", address=None, input_data_root=None, data_list_dir="Buildconf", download=False):
126 """
127 Return True if no files missing
128 """
129 case.load_env(reset=True)
130 # Fill in defaults as needed
131 input_data_root = case.get_value("DIN_LOC_ROOT") if input_data_root is None else input_data_root
132
133 expect(os.path.isdir(input_data_root), "Invalid input_data_root directory: '{}'".format(input_data_root))
134 expect(os.path.isdir(data_list_dir), "Invalid data_list_dir directory: '{}'".format(data_list_dir))
135
136 data_list_files = find_files(data_list_dir, "*.input_data_list")
137 expect(data_list_files, "No .input_data_list files found in dir '{}'".format(data_list_dir))
138
139 no_files_missing = True
140
141 if download:
142 if protocal not in vars(CIME.Servers):
143 logger.warning("Client protocal {} not enabled".format(protocal))
144 return False
145
146 if protocal == "svn":
147 server = CIME.Servers.SVN(address)
148 elif protocal == "gftp":
149 server = CIME.Servers.GridFTP(address)
150 elif protocal == "ftp":
151 server = CIME.Servers.FTP(address)
152 elif protocal == "wget":
153 server = CIME.Servers.WGET(address)
154 else:
155 expect(False, "Unsupported inputdata protocal: {}".format(protocal))
156
157
158
159 for data_list_file in data_list_files:
160 logging.info("Loading input file list: '{}'".format(data_list_file))
161 with open(data_list_file, "r") as fd:
162 lines = fd.readlines()
163
164 for line in lines:
165 line = line.strip()
166 if (line and not line.startswith("#")):
167 tokens = line.split('=')
168 description, full_path = tokens[0].strip(), tokens[1].strip()
169 if(full_path):
170 # expand xml variables
171 full_path = case.get_resolved_value(full_path)
172 rel_path = full_path.replace(input_data_root, "")
173 model = os.path.basename(data_list_file).split('.')[0]
174
175 if ("/" in rel_path and rel_path == full_path):
176 # User pointing to a file outside of input_data_root, we cannot determine
177 # rel_path, and so cannot download the file. If it already exists, we can
178 # proceed
179 if not os.path.exists(full_path):
180 logging.warning(" Model {} missing file {} = '{}'".format(model, description, full_path))
181 if download:
182 logging.warning(" Cannot download file since it lives outside of the input_data_root '{}'".format(input_data_root))
183 no_files_missing = False
184 else:
185 logging.debug(" Found input file: '{}'".format(full_path))
186
187 else:
188 # There are some special values of rel_path that
189 # we need to ignore - some of the component models
190 # set things like 'NULL' or 'same_as_TS' -
191 # basically if rel_path does not contain '/' (a
192 # directory tree) you can assume it's a special
193 # value and ignore it (perhaps with a warning)
194 if ("/" in rel_path and not os.path.exists(full_path)):
195 logging.warning(" Model {} missing file {} = '{}'".format(model, description, full_path))
196
197 if (download):
198 success = _download_if_in_repo(server, input_data_root, rel_path.strip(os.sep),
199 isdirectory=rel_path.endswith(os.sep))
200 if not success:
201 no_files_missing = False
202 else:
203 logging.debug(" Already had input file: '{}'".format(full_path))
204
205 else:
206 model = os.path.basename(data_list_file).split('.')[0]
207 logging.warning("Model {} no file specified for {}".format(model, description))
208
209 return no_files_missing
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/lib/CIME/case/check_input_data.py b/scripts/lib/CIME/case/check_input_data.py
--- a/scripts/lib/CIME/case/check_input_data.py
+++ b/scripts/lib/CIME/case/check_input_data.py
@@ -177,7 +177,7 @@
# rel_path, and so cannot download the file. If it already exists, we can
# proceed
if not os.path.exists(full_path):
- logging.warning(" Model {} missing file {} = '{}'".format(model, description, full_path))
+ logging.warning("Model {} missing file {} = '{}'".format(model, description, full_path))
if download:
logging.warning(" Cannot download file since it lives outside of the input_data_root '{}'".format(input_data_root))
no_files_missing = False
@@ -193,12 +193,11 @@
# value and ignore it (perhaps with a warning)
if ("/" in rel_path and not os.path.exists(full_path)):
logging.warning(" Model {} missing file {} = '{}'".format(model, description, full_path))
+ no_files_missing = False
if (download):
- success = _download_if_in_repo(server, input_data_root, rel_path.strip(os.sep),
+ no_files_missing = _download_if_in_repo(server, input_data_root, rel_path.strip(os.sep),
isdirectory=rel_path.endswith(os.sep))
- if not success:
- no_files_missing = False
else:
logging.debug(" Already had input file: '{}'".format(full_path))
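
The behavioral change above can be seen in isolation: before the patch, a missing file only cleared the flag when a download was attempted and failed, so the initial `download=False` pass reported success and the server fallback in `check_all_input_data` never ran. A small sketch of the corrected flag logic (simplified stand-ins, not the real CIME API; the path below is hypothetical):

```python
import os

def scan_for_missing(paths, download=False, fetch=None):
    """Simplified stand-in for the fixed loop in check_input_data()."""
    no_files_missing = True
    for full_path in paths:
        if not os.path.exists(full_path):
            # Fixed behavior: flag the miss immediately, even when download=False,
            # so the caller knows it still has to fall back to the inputdata servers.
            no_files_missing = False
            if download and fetch is not None:
                no_files_missing = fetch(full_path)
    return no_files_missing

# With the old logic this first (download=False) pass returned True despite the
# missing file, so the server fallback was skipped entirely.
print(scan_for_missing(["/tmp/definitely/missing/file.nc"]))  # -> False
```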
|
{"golden_diff": "diff --git a/scripts/lib/CIME/case/check_input_data.py b/scripts/lib/CIME/case/check_input_data.py\n--- a/scripts/lib/CIME/case/check_input_data.py\n+++ b/scripts/lib/CIME/case/check_input_data.py\n@@ -177,7 +177,7 @@\n # rel_path, and so cannot download the file. If it already exists, we can\n # proceed\n if not os.path.exists(full_path):\n- logging.warning(\" Model {} missing file {} = '{}'\".format(model, description, full_path))\n+ logging.warning(\"Model {} missing file {} = '{}'\".format(model, description, full_path))\n if download:\n logging.warning(\" Cannot download file since it lives outside of the input_data_root '{}'\".format(input_data_root))\n no_files_missing = False\n@@ -193,12 +193,11 @@\n # value and ignore it (perhaps with a warning)\n if (\"/\" in rel_path and not os.path.exists(full_path)):\n logging.warning(\" Model {} missing file {} = '{}'\".format(model, description, full_path))\n+ no_files_missing = False\n \n if (download):\n- success = _download_if_in_repo(server, input_data_root, rel_path.strip(os.sep),\n+ no_files_missing = _download_if_in_repo(server, input_data_root, rel_path.strip(os.sep),\n isdirectory=rel_path.endswith(os.sep))\n- if not success:\n- no_files_missing = False\n else:\n logging.debug(\" Already had input file: '{}'\".format(full_path))\n", "issue": "cesm2_alpha10d doesn't attempt to download missing input files\nI recently ported both alpha10c and alpha10d on my macbook. While alpha10c successfully downloads all missing input files, alpha10d doesn't appear to attempt to download missing input files. I've encountered this behavior both with A and C compsets.\r\n\r\nHere is a snippet of ./case.submit standart output for alpha10c:\r\n```\r\nLoading input file list: 'Buildconf/cpl.input_data_list'\r\n Model cpl missing file ocn2atm_fmapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'\r\nTrying to download file: 'cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc' to path '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'\r\nSUCCESS\r\n```\r\n\r\nAnd here is the corresponding ./case.submit standart output for alpha10d:\r\n```\r\nLoading input file list: 'Buildconf/cpl.input_data_list'\r\n Model cpl missing file ocn2atm_fmapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'\r\n Model cpl missing file ocn2atm_smapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'\r\n Model cpl missing file ice2atm_fmapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'\r\n Model cpl missing file ice2atm_smapname = '/Users/altuntas/projects/cesm-inputdata/cpl/gridmaps/gx1v6/map_gx1v6_TO_fv0.9x1.25_aave.130322.nc'\r\n```\r\n\r\nWhile alpha10c runs sucessfully on my macbook (high sierra), alpha10d eventually fails due to missing input files.\r\n\n", "before_files": [{"content": "\"\"\"\nAPI for checking input for testcase\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import SharedArea, find_files, safe_copy, expect\nfrom CIME.XML.inputdata import Inputdata\nimport CIME.Servers\n\nimport glob\n\nlogger = logging.getLogger(__name__)\n\ndef _download_if_in_repo(server, input_data_root, rel_path, isdirectory=False):\n \"\"\"\n Return True if successfully downloaded\n \"\"\"\n if not server.fileexists(rel_path):\n return False\n\n full_path = 
os.path.join(input_data_root, rel_path)\n logging.info(\"Trying to download file: '{}' to path '{}'\".format(rel_path, full_path))\n # Make sure local path exists, create if it does not\n if isdirectory or full_path.endswith(os.sep):\n if not os.path.exists(full_path):\n logger.info(\"Creating directory {}\".format(full_path))\n os.makedirs(full_path)\n isdirectory = True\n elif not os.path.exists(os.path.dirname(full_path)):\n os.makedirs(os.path.dirname(full_path))\n\n # Use umask to make sure files are group read/writable. As long as parent directories\n # have +s, then everything should work.\n with SharedArea():\n if isdirectory:\n return server.getdirectory(rel_path, full_path)\n else:\n return server.getfile(rel_path, full_path)\n\n###############################################################################\ndef check_all_input_data(self, protocal=None, address=None, input_data_root=None, data_list_dir=\"Buildconf\", download=True):\n###############################################################################\n success = False\n if protocal is not None and address is not None:\n success = self.check_input_data(protocal=protocal, address=address, download=download,\n input_data_root=input_data_root, data_list_dir=data_list_dir)\n else:\n success = self.check_input_data(protocal=protocal, address=address, download=False,\n input_data_root=input_data_root, data_list_dir=data_list_dir)\n if download and not success:\n success = _downloadfromserver(self, input_data_root, data_list_dir)\n\n self.stage_refcase(input_data_root=input_data_root, data_list_dir=data_list_dir)\n return success\n\ndef _downloadfromserver(case, input_data_root, data_list_dir):\n # needs to be downloaded\n success = False\n protocal = 'svn'\n inputdata = Inputdata()\n while not success and protocal is not None:\n protocal, address = inputdata.get_next_server()\n logger.info(\"Checking server {} with protocal {}\".format(address, protocal))\n success = case.check_input_data(protocal=protocal, address=address, download=True,\n input_data_root=input_data_root, data_list_dir=data_list_dir)\n return success\n\ndef stage_refcase(self, input_data_root=None, data_list_dir=None):\n get_refcase = self.get_value(\"GET_REFCASE\")\n run_type = self.get_value(\"RUN_TYPE\")\n continue_run = self.get_value(\"CONTINUE_RUN\")\n\n # We do not fully populate the inputdata directory on every\n # machine and do not expect every user to download the 3TB+ of\n # data in our inputdata repository. 
This code checks for the\n # existence of inputdata in the local inputdata directory and\n # attempts to download data from the server if it's needed and\n # missing.\n if get_refcase and run_type != \"startup\" and not continue_run:\n din_loc_root = self.get_value(\"DIN_LOC_ROOT\")\n run_refdate = self.get_value(\"RUN_REFDATE\")\n run_refcase = self.get_value(\"RUN_REFCASE\")\n run_refdir = self.get_value(\"RUN_REFDIR\")\n rundir = self.get_value(\"RUNDIR\")\n\n refdir = os.path.join(din_loc_root, run_refdir, run_refcase, run_refdate)\n if not os.path.isdir(refdir):\n logger.warning(\"Refcase not found in {}, will attempt to download from inputdata\".format(refdir))\n with open(os.path.join(\"Buildconf\",\"refcase.input_data_list\"),\"w\") as fd:\n fd.write(\"refdir = {}{}\".format(refdir, os.sep))\n if input_data_root is None:\n input_data_root = din_loc_root\n if data_list_dir is None:\n data_list_dir = \"Buildconf\"\n success = _downloadfromserver(self, input_data_root=input_data_root, data_list_dir=data_list_dir)\n expect(success, \"Could not download refcase from any server\")\n\n logger.info(\" - Prestaging REFCASE ({}) to {}\".format(refdir, rundir))\n\n # prestage the reference case's files.\n\n if (not os.path.exists(rundir)):\n logger.debug(\"Creating run directory: {}\".format(rundir))\n os.makedirs(rundir)\n\n # copy the refcases' rpointer files to the run directory\n for rpointerfile in glob.iglob(os.path.join(\"{}\",\"*rpointer*\").format(refdir)):\n logger.info(\"Copy rpointer {}\".format(rpointerfile))\n safe_copy(rpointerfile, rundir)\n\n # link everything else\n\n for rcfile in glob.iglob(os.path.join(refdir,\"*\")):\n rcbaseline = os.path.basename(rcfile)\n if not os.path.exists(\"{}/{}\".format(rundir, rcbaseline)):\n logger.info(\"Staging file {}\".format(rcfile))\n os.symlink(rcfile, \"{}/{}\".format(rundir, rcbaseline))\n # Backward compatibility, some old refcases have cam2 in the name\n # link to local cam file.\n for cam2file in glob.iglob(os.path.join(\"{}\",\"*.cam2.*\").format(rundir)):\n camfile = cam2file.replace(\"cam2\", \"cam\")\n os.symlink(cam2file, camfile)\n\n return True\n\ndef check_input_data(case, protocal=\"svn\", address=None, input_data_root=None, data_list_dir=\"Buildconf\", download=False):\n \"\"\"\n Return True if no files missing\n \"\"\"\n case.load_env(reset=True)\n # Fill in defaults as needed\n input_data_root = case.get_value(\"DIN_LOC_ROOT\") if input_data_root is None else input_data_root\n\n expect(os.path.isdir(input_data_root), \"Invalid input_data_root directory: '{}'\".format(input_data_root))\n expect(os.path.isdir(data_list_dir), \"Invalid data_list_dir directory: '{}'\".format(data_list_dir))\n\n data_list_files = find_files(data_list_dir, \"*.input_data_list\")\n expect(data_list_files, \"No .input_data_list files found in dir '{}'\".format(data_list_dir))\n\n no_files_missing = True\n\n if download:\n if protocal not in vars(CIME.Servers):\n logger.warning(\"Client protocal {} not enabled\".format(protocal))\n return False\n\n if protocal == \"svn\":\n server = CIME.Servers.SVN(address)\n elif protocal == \"gftp\":\n server = CIME.Servers.GridFTP(address)\n elif protocal == \"ftp\":\n server = CIME.Servers.FTP(address)\n elif protocal == \"wget\":\n server = CIME.Servers.WGET(address)\n else:\n expect(False, \"Unsupported inputdata protocal: {}\".format(protocal))\n\n\n\n for data_list_file in data_list_files:\n logging.info(\"Loading input file list: '{}'\".format(data_list_file))\n with open(data_list_file, \"r\") as 
fd:\n lines = fd.readlines()\n\n for line in lines:\n line = line.strip()\n if (line and not line.startswith(\"#\")):\n tokens = line.split('=')\n description, full_path = tokens[0].strip(), tokens[1].strip()\n if(full_path):\n # expand xml variables\n full_path = case.get_resolved_value(full_path)\n rel_path = full_path.replace(input_data_root, \"\")\n model = os.path.basename(data_list_file).split('.')[0]\n\n if (\"/\" in rel_path and rel_path == full_path):\n # User pointing to a file outside of input_data_root, we cannot determine\n # rel_path, and so cannot download the file. If it already exists, we can\n # proceed\n if not os.path.exists(full_path):\n logging.warning(\" Model {} missing file {} = '{}'\".format(model, description, full_path))\n if download:\n logging.warning(\" Cannot download file since it lives outside of the input_data_root '{}'\".format(input_data_root))\n no_files_missing = False\n else:\n logging.debug(\" Found input file: '{}'\".format(full_path))\n\n else:\n # There are some special values of rel_path that\n # we need to ignore - some of the component models\n # set things like 'NULL' or 'same_as_TS' -\n # basically if rel_path does not contain '/' (a\n # directory tree) you can assume it's a special\n # value and ignore it (perhaps with a warning)\n if (\"/\" in rel_path and not os.path.exists(full_path)):\n logging.warning(\" Model {} missing file {} = '{}'\".format(model, description, full_path))\n\n if (download):\n success = _download_if_in_repo(server, input_data_root, rel_path.strip(os.sep),\n isdirectory=rel_path.endswith(os.sep))\n if not success:\n no_files_missing = False\n else:\n logging.debug(\" Already had input file: '{}'\".format(full_path))\n\n else:\n model = os.path.basename(data_list_file).split('.')[0]\n logging.warning(\"Model {} no file specified for {}\".format(model, description))\n\n return no_files_missing\n", "path": "scripts/lib/CIME/case/check_input_data.py"}], "after_files": [{"content": "\"\"\"\nAPI for checking input for testcase\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import SharedArea, find_files, safe_copy, expect\nfrom CIME.XML.inputdata import Inputdata\nimport CIME.Servers\n\nimport glob\n\nlogger = logging.getLogger(__name__)\n\ndef _download_if_in_repo(server, input_data_root, rel_path, isdirectory=False):\n \"\"\"\n Return True if successfully downloaded\n \"\"\"\n if not server.fileexists(rel_path):\n return False\n\n full_path = os.path.join(input_data_root, rel_path)\n logging.info(\"Trying to download file: '{}' to path '{}'\".format(rel_path, full_path))\n # Make sure local path exists, create if it does not\n if isdirectory or full_path.endswith(os.sep):\n if not os.path.exists(full_path):\n logger.info(\"Creating directory {}\".format(full_path))\n os.makedirs(full_path)\n isdirectory = True\n elif not os.path.exists(os.path.dirname(full_path)):\n os.makedirs(os.path.dirname(full_path))\n\n # Use umask to make sure files are group read/writable. 
As long as parent directories\n # have +s, then everything should work.\n with SharedArea():\n if isdirectory:\n return server.getdirectory(rel_path, full_path)\n else:\n return server.getfile(rel_path, full_path)\n\n###############################################################################\ndef check_all_input_data(self, protocal=None, address=None, input_data_root=None, data_list_dir=\"Buildconf\", download=True):\n###############################################################################\n success = False\n if protocal is not None and address is not None:\n success = self.check_input_data(protocal=protocal, address=address, download=download,\n input_data_root=input_data_root, data_list_dir=data_list_dir)\n else:\n success = self.check_input_data(protocal=protocal, address=address, download=False,\n input_data_root=input_data_root, data_list_dir=data_list_dir)\n if download and not success:\n success = _downloadfromserver(self, input_data_root, data_list_dir)\n\n self.stage_refcase(input_data_root=input_data_root, data_list_dir=data_list_dir)\n return success\n\ndef _downloadfromserver(case, input_data_root, data_list_dir):\n # needs to be downloaded\n success = False\n protocal = 'svn'\n inputdata = Inputdata()\n while not success and protocal is not None:\n protocal, address = inputdata.get_next_server()\n logger.info(\"Checking server {} with protocal {}\".format(address, protocal))\n success = case.check_input_data(protocal=protocal, address=address, download=True,\n input_data_root=input_data_root, data_list_dir=data_list_dir)\n return success\n\ndef stage_refcase(self, input_data_root=None, data_list_dir=None):\n get_refcase = self.get_value(\"GET_REFCASE\")\n run_type = self.get_value(\"RUN_TYPE\")\n continue_run = self.get_value(\"CONTINUE_RUN\")\n\n # We do not fully populate the inputdata directory on every\n # machine and do not expect every user to download the 3TB+ of\n # data in our inputdata repository. 
This code checks for the\n # existence of inputdata in the local inputdata directory and\n # attempts to download data from the server if it's needed and\n # missing.\n if get_refcase and run_type != \"startup\" and not continue_run:\n din_loc_root = self.get_value(\"DIN_LOC_ROOT\")\n run_refdate = self.get_value(\"RUN_REFDATE\")\n run_refcase = self.get_value(\"RUN_REFCASE\")\n run_refdir = self.get_value(\"RUN_REFDIR\")\n rundir = self.get_value(\"RUNDIR\")\n\n refdir = os.path.join(din_loc_root, run_refdir, run_refcase, run_refdate)\n if not os.path.isdir(refdir):\n logger.warning(\"Refcase not found in {}, will attempt to download from inputdata\".format(refdir))\n with open(os.path.join(\"Buildconf\",\"refcase.input_data_list\"),\"w\") as fd:\n fd.write(\"refdir = {}{}\".format(refdir, os.sep))\n if input_data_root is None:\n input_data_root = din_loc_root\n if data_list_dir is None:\n data_list_dir = \"Buildconf\"\n success = _downloadfromserver(self, input_data_root=input_data_root, data_list_dir=data_list_dir)\n expect(success, \"Could not download refcase from any server\")\n\n logger.info(\" - Prestaging REFCASE ({}) to {}\".format(refdir, rundir))\n\n # prestage the reference case's files.\n\n if (not os.path.exists(rundir)):\n logger.debug(\"Creating run directory: {}\".format(rundir))\n os.makedirs(rundir)\n\n # copy the refcases' rpointer files to the run directory\n for rpointerfile in glob.iglob(os.path.join(\"{}\",\"*rpointer*\").format(refdir)):\n logger.info(\"Copy rpointer {}\".format(rpointerfile))\n safe_copy(rpointerfile, rundir)\n\n # link everything else\n\n for rcfile in glob.iglob(os.path.join(refdir,\"*\")):\n rcbaseline = os.path.basename(rcfile)\n if not os.path.exists(\"{}/{}\".format(rundir, rcbaseline)):\n logger.info(\"Staging file {}\".format(rcfile))\n os.symlink(rcfile, \"{}/{}\".format(rundir, rcbaseline))\n # Backward compatibility, some old refcases have cam2 in the name\n # link to local cam file.\n for cam2file in glob.iglob(os.path.join(\"{}\",\"*.cam2.*\").format(rundir)):\n camfile = cam2file.replace(\"cam2\", \"cam\")\n os.symlink(cam2file, camfile)\n\n return True\n\ndef check_input_data(case, protocal=\"svn\", address=None, input_data_root=None, data_list_dir=\"Buildconf\", download=False):\n \"\"\"\n Return True if no files missing\n \"\"\"\n case.load_env(reset=True)\n # Fill in defaults as needed\n input_data_root = case.get_value(\"DIN_LOC_ROOT\") if input_data_root is None else input_data_root\n\n expect(os.path.isdir(input_data_root), \"Invalid input_data_root directory: '{}'\".format(input_data_root))\n expect(os.path.isdir(data_list_dir), \"Invalid data_list_dir directory: '{}'\".format(data_list_dir))\n\n data_list_files = find_files(data_list_dir, \"*.input_data_list\")\n expect(data_list_files, \"No .input_data_list files found in dir '{}'\".format(data_list_dir))\n\n no_files_missing = True\n\n if download:\n if protocal not in vars(CIME.Servers):\n logger.warning(\"Client protocal {} not enabled\".format(protocal))\n return False\n\n if protocal == \"svn\":\n server = CIME.Servers.SVN(address)\n elif protocal == \"gftp\":\n server = CIME.Servers.GridFTP(address)\n elif protocal == \"ftp\":\n server = CIME.Servers.FTP(address)\n elif protocal == \"wget\":\n server = CIME.Servers.WGET(address)\n else:\n expect(False, \"Unsupported inputdata protocal: {}\".format(protocal))\n\n\n\n for data_list_file in data_list_files:\n logging.info(\"Loading input file list: '{}'\".format(data_list_file))\n with open(data_list_file, \"r\") as 
fd:\n lines = fd.readlines()\n\n for line in lines:\n line = line.strip()\n if (line and not line.startswith(\"#\")):\n tokens = line.split('=')\n description, full_path = tokens[0].strip(), tokens[1].strip()\n if(full_path):\n # expand xml variables\n full_path = case.get_resolved_value(full_path)\n rel_path = full_path.replace(input_data_root, \"\")\n model = os.path.basename(data_list_file).split('.')[0]\n\n if (\"/\" in rel_path and rel_path == full_path):\n # User pointing to a file outside of input_data_root, we cannot determine\n # rel_path, and so cannot download the file. If it already exists, we can\n # proceed\n if not os.path.exists(full_path):\n logging.warning(\"Model {} missing file {} = '{}'\".format(model, description, full_path))\n if download:\n logging.warning(\" Cannot download file since it lives outside of the input_data_root '{}'\".format(input_data_root))\n no_files_missing = False\n else:\n logging.debug(\" Found input file: '{}'\".format(full_path))\n\n else:\n # There are some special values of rel_path that\n # we need to ignore - some of the component models\n # set things like 'NULL' or 'same_as_TS' -\n # basically if rel_path does not contain '/' (a\n # directory tree) you can assume it's a special\n # value and ignore it (perhaps with a warning)\n if (\"/\" in rel_path and not os.path.exists(full_path)):\n logging.warning(\" Model {} missing file {} = '{}'\".format(model, description, full_path))\n no_files_missing = False\n\n if (download):\n no_files_missing = _download_if_in_repo(server, input_data_root, rel_path.strip(os.sep),\n isdirectory=rel_path.endswith(os.sep))\n else:\n logging.debug(\" Already had input file: '{}'\".format(full_path))\n\n else:\n model = os.path.basename(data_list_file).split('.')[0]\n logging.warning(\"Model {} no file specified for {}\".format(model, description))\n\n return no_files_missing\n", "path": "scripts/lib/CIME/case/check_input_data.py"}]}
| 3,465 | 341 |
gh_patches_debug_4274
|
rasdani/github-patches
|
git_diff
|
OpenEnergyPlatform__oeplatform-980
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Permissions: Renaming a permission group is not possible
Existing groups cannot be renamed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `login/views.py`
Content:
```
1 from django import forms
2 from django.contrib.auth import update_session_auth_hash
3 from django.contrib.auth.mixins import LoginRequiredMixin
4 from django.contrib.auth.models import Group
5 from django.contrib.auth.views import PasswordChangeView, PasswordResetView
6 from django.core.exceptions import ObjectDoesNotExist, PermissionDenied
7 from django.http import Http404
8 from django.shortcuts import get_object_or_404, redirect, render
9 from django.views.generic import FormView, View
10 from django.views.generic.edit import UpdateView
11
12 import login.models as models
13
14 from .forms import ChangeEmailForm, CreateUserForm, DetachForm, EditUserForm, GroupForm
15 from .models import ADMIN_PERM, GroupMembership, UserGroup
16 from .models import myuser as OepUser
17
18 from oeplatform.settings import URL
19
20 class ProfileView(View):
21 def get(self, request, user_id):
22 """
23 Load the user identified by user_id and is OAuth-token. If latter does not exist yet, create one.
24 :param request: A HTTP-request object sent by the Django framework.
25 :param user_id: An user id
26 :return: Profile renderer
27 """
28 from rest_framework.authtoken.models import Token
29
30 for user in OepUser.objects.all():
31 Token.objects.get_or_create(user=user)
32 user = get_object_or_404(OepUser, pk=user_id)
33 token = None
34 if request.user.is_authenticated:
35 token = Token.objects.get(user=request.user)
36 return render(
37 request, "login/profile.html", {"profile_user": user, "token": token}
38 )
39
40
41 class GroupManagement(View, LoginRequiredMixin):
42 def get(self, request):
43 """
44 Load and list the available groups by groupadmin.
45 :param request: A HTTP-request object sent by the Django framework.
46 :param user_id: An user id
47 :return: Profile renderer
48 """
49
50 membership = request.user.memberships
51 return render(
52 request, "login/list_memberships.html", {"membership": membership}
53 )
54
55
56 class GroupCreate(View, LoginRequiredMixin):
57 def get(self, request, group_id=None):
58 """
59 Load the chosen action(create or edit) for a group.
60 :param request: A HTTP-request object sent by the Django framework.
61 :param user_id: An user id
62 :param user_id: An group id
63 :return: Profile renderer
64 """
65
66 if group_id:
67 group = UserGroup.objects.get(id=group_id)
68 form = GroupForm(instance=group)
69 membership = get_object_or_404(
70 GroupMembership, group=group, user=request.user
71 )
72 if membership.level < ADMIN_PERM:
73 raise PermissionDenied
74 else:
75 form = GroupForm()
76 return render(request, "login/group_create.html", {"form": form})
77
78 def post(self, request, group_id=None):
79 """
80 Performs selected action(save or delete) for a group. If a groupname already exists, then a error
81 will be output.
82 The selected users become members of this group. The groupadmin is already set.
83 :param request: A HTTP-request object sent by the Django framework.
84 :param user_id: An user id
85 :param user_id: An group id
86 :return: Profile renderer
87 """
88 group = UserGroup.objects.get(id=group_id) if group_id else None
89 form = GroupForm(request.POST, instance=group)
90 if form.is_valid():
91 if group_id:
92 membership = get_object_or_404(
93 GroupMembership, group=group, user=request.user
94 )
95 if membership.level < ADMIN_PERM:
96 raise PermissionDenied
97 else:
98 group = form.save()
99 membership = GroupMembership.objects.create(
100 user=request.user, group=group, level=ADMIN_PERM
101 )
102 membership.save()
103 return redirect("/user/groups/{id}".format(id=group.id), {"group": group})
104 else:
105 return render(request, "login/group_create.html", {"form": form})
106
107
108 class GroupView(View, LoginRequiredMixin):
109 def get(self, request, group_id):
110 """
111 Load the chosen action(create or edit) for a group.
112 :param request: A HTTP-request object sent by the Django framework.
113 :param user_id: An user id
114 :param user_id: An group id
115 :return: Profile renderer
116 """
117 group = get_object_or_404(UserGroup, pk=group_id)
118 return render(
119 request,
120 "login/group.html",
121 {"group": group},
122 )
123
124
125 class GroupEdit(View, LoginRequiredMixin):
126 def get(self, request, group_id):
127 """
128 Load the chosen action(create or edit) for a group.
129 :param request: A HTTP-request object sent by the Django framework.
130 :param user_id: An user id
131 :param user_id: An group id
132 :return: Profile renderer
133 """
134 group = get_object_or_404(UserGroup, pk=group_id)
135 is_admin = False
136 membership = GroupMembership.objects.filter(
137 group=group, user=request.user
138 ).first()
139 if membership:
140 is_admin = membership.level >= ADMIN_PERM
141 return render(
142 request,
143 "login/change_form.html",
144 {"group": group, "choices": GroupMembership.choices, "is_admin": is_admin},
145 )
146
147 def post(self, request, group_id):
148 """
149 Performs selected action(save or delete) for a group. If a groupname already exists, then a error
150 will be output.
151 The selected users become members of this group. The groupadmin is already set.
152 :param request: A HTTP-request object sent by the Django framework.
153 :param user_id: An user id
154 :param user_id: An group id
155 :return: Profile renderer
156 """
157 mode = request.POST["mode"]
158 group = get_object_or_404(UserGroup, id=group_id)
159 membership = get_object_or_404(GroupMembership, group=group, user=request.user)
160
161 errors = {}
162 if mode == "add_user":
163 if membership.level < models.WRITE_PERM:
164 raise PermissionDenied
165 try:
166 user = OepUser.objects.get(name=request.POST["name"])
167 membership, _ = GroupMembership.objects.get_or_create(
168 group=group, user=user
169 )
170 membership.save()
171 except OepUser.DoesNotExist:
172 errors["name"] = "User does not exist"
173 elif mode == "remove_user":
174 if membership.level < models.DELETE_PERM:
175 raise PermissionDenied
176 user = OepUser.objects.get(id=request.POST["user_id"])
177 membership = GroupMembership.objects.get(group=group, user=user)
178 if membership.level >= ADMIN_PERM:
179 admins = GroupMembership.objects.filter(group=group).exclude(user=user)
180 if not admins:
181 errors["name"] = "A group needs at least one admin"
182 else:
183 membership.delete()
184 else:
185 membership.delete()
186 elif mode == "alter_user":
187 if membership.level < models.ADMIN_PERM:
188 raise PermissionDenied
189 user = OepUser.objects.get(id=request.POST["user_id"])
190 if user == request.user:
191 errors["name"] = "You can not change your own permissions"
192 else:
193 membership = GroupMembership.objects.get(group=group, user=user)
194 membership.level = request.POST["level"]
195 membership.save()
196 elif mode == "delete_group":
197 if membership.level < models.ADMIN_PERM:
198 raise PermissionDenied
199 group.delete()
200 return redirect("/user/groups")
201 else:
202 raise PermissionDenied
203 return render(
204 request,
205 "login/change_form.html",
206 {
207 "group": group,
208 "choices": GroupMembership.choices,
209 "errors": errors,
210 "is_admin": True,
211 },
212 )
213
214 def __add_user(self, request, group):
215 user = OepUser.objects.filter(id=request.POST["user_id"]).first()
216 g = user.groups.add(group)
217 g.save()
218 return self.get(request)
219
220
221 class ProfileUpdateView(UpdateView, LoginRequiredMixin):
222 """
223 Autogenerate a update form for users.
224 """
225
226 model = OepUser
227 fields = ["name", "affiliation", "email"]
228 template_name_suffix = "_update_form"
229
230
231 class EditUserView(View):
232 def get(self, request, user_id):
233 if not request.user.id == int(user_id):
234 raise PermissionDenied
235 form = EditUserForm(instance=request.user)
236 return render(request, "login/oepuser_edit_form.html", {"form": form})
237
238 def post(self, request, user_id):
239 if not request.user.id == int(user_id):
240 raise PermissionDenied
241 form = EditUserForm(request.POST, instance=request.user)
242 if form.is_valid():
243 form.save()
244 return redirect("/user/profile/{id}".format(id=request.user.id))
245 else:
246 return render(request, "login/oepuser_edit_form.html", {"form": form})
247
248
249 class CreateUserView(View):
250 def get(self, request):
251 form = CreateUserForm()
252 return render(request, "login/oepuser_create_form.html", {"form": form})
253
254 def post(self, request):
255 form = CreateUserForm(request.POST)
256 if form.is_valid():
257 user = form.save()
258 return redirect("activate")
259 else:
260 return render(request, "login/oepuser_create_form.html", {"form": form})
261
262
263 class DetachView(LoginRequiredMixin, View):
264 def get(self, request):
265 if request.user.is_native:
266 raise PermissionDenied
267 form = DetachForm(request.user)
268 return render(request, "login/detach.html", {"form": form})
269
270 def post(self, request):
271 if request.user.is_native:
272 raise PermissionDenied
273 form = DetachForm(request.user, request.POST)
274 if form.is_valid():
275 form.save()
276 return redirect("/")
277 else:
278 print(form.errors)
279 return render(request, "login/detach.html", {"form": form})
280
281
282 class OEPPasswordChangeView(PasswordChangeView):
283 template_name = "login/generic_form.html"
284 success_url = "/"
285
286
287 class ActivationNoteView(FormView):
288 template_name = "login/activate.html"
289 form_class = ChangeEmailForm
290 success_url = "user/activate"
291
292 def form_valid(self, form):
293 if self.request.user.is_anonymous or self.request.user.is_mail_verified:
294 raise PermissionDenied
295 form.save(self.request.user)
296 return super(ActivationNoteView, self).form_valid(form)
297
298
299 def activate(request, token):
300 token_obj = models.ActivationToken.objects.filter(value=token).first()
301 if not token_obj:
302 form = ChangeEmailForm()
303 form._errors = {
304 forms.forms.NON_FIELD_ERRORS: form.error_class(
305 ["Your token was invalid or expired"]
306 )
307 }
308 return render(request, "login/activate.html", {"form": form})
309 else:
310 token_obj.user.is_mail_verified = True
311 token_obj.user.save()
312 token_obj.delete()
313 return redirect("/user/profile/{id}".format(id=token_obj.user.id))
314
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/login/views.py b/login/views.py
--- a/login/views.py
+++ b/login/views.py
@@ -89,6 +89,7 @@
form = GroupForm(request.POST, instance=group)
if form.is_valid():
if group_id:
+ group = form.save()
membership = get_object_or_404(
GroupMembership, group=group, user=request.user
)
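Since whitespace is collapsed in this dump, the position of the added line is easier to see in a reconstructed form. The sketch below rewrites the patched `GroupCreate.post` from the `after_files` content further down, with indentation assumed and the docstring omitted; `UserGroup`, `GroupForm`, `GroupMembership`, `ADMIN_PERM` and the shortcut helpers all come from the imports at the top of `login/views.py`.

```python
# Reconstructed sketch of the patched GroupCreate.post (indentation assumed).
def post(self, request, group_id=None):
    group = UserGroup.objects.get(id=group_id) if group_id else None
    form = GroupForm(request.POST, instance=group)
    if form.is_valid():
        if group_id:
            group = form.save()  # added line: persist the rename of an existing group
            membership = get_object_or_404(
                GroupMembership, group=group, user=request.user
            )
            if membership.level < ADMIN_PERM:
                raise PermissionDenied
        else:
            group = form.save()
            membership = GroupMembership.objects.create(
                user=request.user, group=group, level=ADMIN_PERM
            )
            membership.save()
        return redirect("/user/groups/{id}".format(id=group.id), {"group": group})
    else:
        return render(request, "login/group_create.html", {"form": form})
```

Previously `form.save()` was only reached in the creation branch, so edits to an existing group were validated but never written. Note that, as patched, the save runs before the admin-level check.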
|
{"golden_diff": "diff --git a/login/views.py b/login/views.py\n--- a/login/views.py\n+++ b/login/views.py\n@@ -89,6 +89,7 @@\n form = GroupForm(request.POST, instance=group)\n if form.is_valid():\n if group_id:\n+ group = form.save()\n membership = get_object_or_404(\n GroupMembership, group=group, user=request.user\n )\n", "issue": "Permissions: Renaming a permission group is not possible\nExisting groups cannot be renamed.\n", "before_files": [{"content": "from django import forms\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.views import PasswordChangeView, PasswordResetView\nfrom django.core.exceptions import ObjectDoesNotExist, PermissionDenied\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.views.generic import FormView, View\nfrom django.views.generic.edit import UpdateView\n\nimport login.models as models\n\nfrom .forms import ChangeEmailForm, CreateUserForm, DetachForm, EditUserForm, GroupForm\nfrom .models import ADMIN_PERM, GroupMembership, UserGroup\nfrom .models import myuser as OepUser\n\nfrom oeplatform.settings import URL\n\nclass ProfileView(View):\n def get(self, request, user_id):\n \"\"\"\n Load the user identified by user_id and is OAuth-token. If latter does not exist yet, create one.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :return: Profile renderer\n \"\"\"\n from rest_framework.authtoken.models import Token\n\n for user in OepUser.objects.all():\n Token.objects.get_or_create(user=user)\n user = get_object_or_404(OepUser, pk=user_id)\n token = None\n if request.user.is_authenticated:\n token = Token.objects.get(user=request.user)\n return render(\n request, \"login/profile.html\", {\"profile_user\": user, \"token\": token}\n )\n\n\nclass GroupManagement(View, LoginRequiredMixin):\n def get(self, request):\n \"\"\"\n Load and list the available groups by groupadmin. \n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :return: Profile renderer \n \"\"\"\n\n membership = request.user.memberships\n return render(\n request, \"login/list_memberships.html\", {\"membership\": membership}\n )\n\n\nclass GroupCreate(View, LoginRequiredMixin):\n def get(self, request, group_id=None):\n \"\"\"\n Load the chosen action(create or edit) for a group.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer\n \"\"\"\n\n if group_id:\n group = UserGroup.objects.get(id=group_id)\n form = GroupForm(instance=group)\n membership = get_object_or_404(\n GroupMembership, group=group, user=request.user\n )\n if membership.level < ADMIN_PERM:\n raise PermissionDenied\n else:\n form = GroupForm()\n return render(request, \"login/group_create.html\", {\"form\": form})\n\n def post(self, request, group_id=None):\n \"\"\"\n Performs selected action(save or delete) for a group. If a groupname already exists, then a error\n will be output.\n The selected users become members of this group. 
The groupadmin is already set.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer\n \"\"\"\n group = UserGroup.objects.get(id=group_id) if group_id else None\n form = GroupForm(request.POST, instance=group)\n if form.is_valid():\n if group_id:\n membership = get_object_or_404(\n GroupMembership, group=group, user=request.user\n )\n if membership.level < ADMIN_PERM:\n raise PermissionDenied\n else:\n group = form.save()\n membership = GroupMembership.objects.create(\n user=request.user, group=group, level=ADMIN_PERM\n )\n membership.save()\n return redirect(\"/user/groups/{id}\".format(id=group.id), {\"group\": group})\n else:\n return render(request, \"login/group_create.html\", {\"form\": form})\n\n\nclass GroupView(View, LoginRequiredMixin):\n def get(self, request, group_id):\n \"\"\"\n Load the chosen action(create or edit) for a group.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer\n \"\"\"\n group = get_object_or_404(UserGroup, pk=group_id)\n return render(\n request,\n \"login/group.html\",\n {\"group\": group},\n )\n\n\nclass GroupEdit(View, LoginRequiredMixin):\n def get(self, request, group_id):\n \"\"\"\n Load the chosen action(create or edit) for a group. \n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer \n \"\"\"\n group = get_object_or_404(UserGroup, pk=group_id)\n is_admin = False\n membership = GroupMembership.objects.filter(\n group=group, user=request.user\n ).first()\n if membership:\n is_admin = membership.level >= ADMIN_PERM\n return render(\n request,\n \"login/change_form.html\",\n {\"group\": group, \"choices\": GroupMembership.choices, \"is_admin\": is_admin},\n )\n\n def post(self, request, group_id):\n \"\"\"\n Performs selected action(save or delete) for a group. If a groupname already exists, then a error \n will be output. \n The selected users become members of this group. 
The groupadmin is already set.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer \n \"\"\"\n mode = request.POST[\"mode\"]\n group = get_object_or_404(UserGroup, id=group_id)\n membership = get_object_or_404(GroupMembership, group=group, user=request.user)\n\n errors = {}\n if mode == \"add_user\":\n if membership.level < models.WRITE_PERM:\n raise PermissionDenied\n try:\n user = OepUser.objects.get(name=request.POST[\"name\"])\n membership, _ = GroupMembership.objects.get_or_create(\n group=group, user=user\n )\n membership.save()\n except OepUser.DoesNotExist:\n errors[\"name\"] = \"User does not exist\"\n elif mode == \"remove_user\":\n if membership.level < models.DELETE_PERM:\n raise PermissionDenied\n user = OepUser.objects.get(id=request.POST[\"user_id\"])\n membership = GroupMembership.objects.get(group=group, user=user)\n if membership.level >= ADMIN_PERM:\n admins = GroupMembership.objects.filter(group=group).exclude(user=user)\n if not admins:\n errors[\"name\"] = \"A group needs at least one admin\"\n else:\n membership.delete()\n else:\n membership.delete()\n elif mode == \"alter_user\":\n if membership.level < models.ADMIN_PERM:\n raise PermissionDenied\n user = OepUser.objects.get(id=request.POST[\"user_id\"])\n if user == request.user:\n errors[\"name\"] = \"You can not change your own permissions\"\n else:\n membership = GroupMembership.objects.get(group=group, user=user)\n membership.level = request.POST[\"level\"]\n membership.save()\n elif mode == \"delete_group\":\n if membership.level < models.ADMIN_PERM:\n raise PermissionDenied\n group.delete()\n return redirect(\"/user/groups\")\n else:\n raise PermissionDenied\n return render(\n request,\n \"login/change_form.html\",\n {\n \"group\": group,\n \"choices\": GroupMembership.choices,\n \"errors\": errors,\n \"is_admin\": True,\n },\n )\n\n def __add_user(self, request, group):\n user = OepUser.objects.filter(id=request.POST[\"user_id\"]).first()\n g = user.groups.add(group)\n g.save()\n return self.get(request)\n\n\nclass ProfileUpdateView(UpdateView, LoginRequiredMixin):\n \"\"\"\n Autogenerate a update form for users.\n \"\"\"\n\n model = OepUser\n fields = [\"name\", \"affiliation\", \"email\"]\n template_name_suffix = \"_update_form\"\n\n\nclass EditUserView(View):\n def get(self, request, user_id):\n if not request.user.id == int(user_id):\n raise PermissionDenied\n form = EditUserForm(instance=request.user)\n return render(request, \"login/oepuser_edit_form.html\", {\"form\": form})\n\n def post(self, request, user_id):\n if not request.user.id == int(user_id):\n raise PermissionDenied\n form = EditUserForm(request.POST, instance=request.user)\n if form.is_valid():\n form.save()\n return redirect(\"/user/profile/{id}\".format(id=request.user.id))\n else:\n return render(request, \"login/oepuser_edit_form.html\", {\"form\": form})\n\n\nclass CreateUserView(View):\n def get(self, request):\n form = CreateUserForm()\n return render(request, \"login/oepuser_create_form.html\", {\"form\": form})\n\n def post(self, request):\n form = CreateUserForm(request.POST)\n if form.is_valid():\n user = form.save()\n return redirect(\"activate\")\n else:\n return render(request, \"login/oepuser_create_form.html\", {\"form\": form})\n\n\nclass DetachView(LoginRequiredMixin, View):\n def get(self, request):\n if request.user.is_native:\n raise PermissionDenied\n form = DetachForm(request.user)\n return render(request, 
\"login/detach.html\", {\"form\": form})\n\n def post(self, request):\n if request.user.is_native:\n raise PermissionDenied\n form = DetachForm(request.user, request.POST)\n if form.is_valid():\n form.save()\n return redirect(\"/\")\n else:\n print(form.errors)\n return render(request, \"login/detach.html\", {\"form\": form})\n\n\nclass OEPPasswordChangeView(PasswordChangeView):\n template_name = \"login/generic_form.html\"\n success_url = \"/\"\n\n\nclass ActivationNoteView(FormView):\n template_name = \"login/activate.html\"\n form_class = ChangeEmailForm\n success_url = \"user/activate\"\n\n def form_valid(self, form):\n if self.request.user.is_anonymous or self.request.user.is_mail_verified:\n raise PermissionDenied\n form.save(self.request.user)\n return super(ActivationNoteView, self).form_valid(form)\n\n\ndef activate(request, token):\n token_obj = models.ActivationToken.objects.filter(value=token).first()\n if not token_obj:\n form = ChangeEmailForm()\n form._errors = {\n forms.forms.NON_FIELD_ERRORS: form.error_class(\n [\"Your token was invalid or expired\"]\n )\n }\n return render(request, \"login/activate.html\", {\"form\": form})\n else:\n token_obj.user.is_mail_verified = True\n token_obj.user.save()\n token_obj.delete()\n return redirect(\"/user/profile/{id}\".format(id=token_obj.user.id))\n", "path": "login/views.py"}], "after_files": [{"content": "from django import forms\nfrom django.contrib.auth import update_session_auth_hash\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.contrib.auth.models import Group\nfrom django.contrib.auth.views import PasswordChangeView, PasswordResetView\nfrom django.core.exceptions import ObjectDoesNotExist, PermissionDenied\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.views.generic import FormView, View\nfrom django.views.generic.edit import UpdateView\n\nimport login.models as models\n\nfrom .forms import ChangeEmailForm, CreateUserForm, DetachForm, EditUserForm, GroupForm\nfrom .models import ADMIN_PERM, GroupMembership, UserGroup\nfrom .models import myuser as OepUser\n\nfrom oeplatform.settings import URL\n\nclass ProfileView(View):\n def get(self, request, user_id):\n \"\"\"\n Load the user identified by user_id and is OAuth-token. If latter does not exist yet, create one.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :return: Profile renderer\n \"\"\"\n from rest_framework.authtoken.models import Token\n\n for user in OepUser.objects.all():\n Token.objects.get_or_create(user=user)\n user = get_object_or_404(OepUser, pk=user_id)\n token = None\n if request.user.is_authenticated:\n token = Token.objects.get(user=request.user)\n return render(\n request, \"login/profile.html\", {\"profile_user\": user, \"token\": token}\n )\n\n\nclass GroupManagement(View, LoginRequiredMixin):\n def get(self, request):\n \"\"\"\n Load and list the available groups by groupadmin. 
\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :return: Profile renderer \n \"\"\"\n\n membership = request.user.memberships\n return render(\n request, \"login/list_memberships.html\", {\"membership\": membership}\n )\n\n\nclass GroupCreate(View, LoginRequiredMixin):\n def get(self, request, group_id=None):\n \"\"\"\n Load the chosen action(create or edit) for a group.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer\n \"\"\"\n\n if group_id:\n group = UserGroup.objects.get(id=group_id)\n form = GroupForm(instance=group)\n membership = get_object_or_404(\n GroupMembership, group=group, user=request.user\n )\n if membership.level < ADMIN_PERM:\n raise PermissionDenied\n else:\n form = GroupForm()\n return render(request, \"login/group_create.html\", {\"form\": form})\n\n def post(self, request, group_id=None):\n \"\"\"\n Performs selected action(save or delete) for a group. If a groupname already exists, then a error\n will be output.\n The selected users become members of this group. The groupadmin is already set.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer\n \"\"\"\n group = UserGroup.objects.get(id=group_id) if group_id else None\n form = GroupForm(request.POST, instance=group)\n if form.is_valid():\n if group_id:\n group = form.save()\n membership = get_object_or_404(\n GroupMembership, group=group, user=request.user\n )\n if membership.level < ADMIN_PERM:\n raise PermissionDenied\n else:\n group = form.save()\n membership = GroupMembership.objects.create(\n user=request.user, group=group, level=ADMIN_PERM\n )\n membership.save()\n return redirect(\"/user/groups/{id}\".format(id=group.id), {\"group\": group})\n else:\n return render(request, \"login/group_create.html\", {\"form\": form})\n\n\nclass GroupView(View, LoginRequiredMixin):\n def get(self, request, group_id):\n \"\"\"\n Load the chosen action(create or edit) for a group.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer\n \"\"\"\n group = get_object_or_404(UserGroup, pk=group_id)\n return render(\n request,\n \"login/group.html\",\n {\"group\": group},\n )\n\n\nclass GroupEdit(View, LoginRequiredMixin):\n def get(self, request, group_id):\n \"\"\"\n Load the chosen action(create or edit) for a group. \n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer \n \"\"\"\n group = get_object_or_404(UserGroup, pk=group_id)\n is_admin = False\n membership = GroupMembership.objects.filter(\n group=group, user=request.user\n ).first()\n if membership:\n is_admin = membership.level >= ADMIN_PERM\n return render(\n request,\n \"login/change_form.html\",\n {\"group\": group, \"choices\": GroupMembership.choices, \"is_admin\": is_admin},\n )\n\n def post(self, request, group_id):\n \"\"\"\n Performs selected action(save or delete) for a group. If a groupname already exists, then a error \n will be output. \n The selected users become members of this group. 
The groupadmin is already set.\n :param request: A HTTP-request object sent by the Django framework.\n :param user_id: An user id\n :param user_id: An group id\n :return: Profile renderer \n \"\"\"\n mode = request.POST[\"mode\"]\n group = get_object_or_404(UserGroup, id=group_id)\n membership = get_object_or_404(GroupMembership, group=group, user=request.user)\n\n errors = {}\n if mode == \"add_user\":\n if membership.level < models.WRITE_PERM:\n raise PermissionDenied\n try:\n user = OepUser.objects.get(name=request.POST[\"name\"])\n membership, _ = GroupMembership.objects.get_or_create(\n group=group, user=user\n )\n membership.save()\n except OepUser.DoesNotExist:\n errors[\"name\"] = \"User does not exist\"\n elif mode == \"remove_user\":\n if membership.level < models.DELETE_PERM:\n raise PermissionDenied\n user = OepUser.objects.get(id=request.POST[\"user_id\"])\n membership = GroupMembership.objects.get(group=group, user=user)\n if membership.level >= ADMIN_PERM:\n admins = GroupMembership.objects.filter(group=group).exclude(user=user)\n if not admins:\n errors[\"name\"] = \"A group needs at least one admin\"\n else:\n membership.delete()\n else:\n membership.delete()\n elif mode == \"alter_user\":\n if membership.level < models.ADMIN_PERM:\n raise PermissionDenied\n user = OepUser.objects.get(id=request.POST[\"user_id\"])\n if user == request.user:\n errors[\"name\"] = \"You can not change your own permissions\"\n else:\n membership = GroupMembership.objects.get(group=group, user=user)\n membership.level = request.POST[\"level\"]\n membership.save()\n elif mode == \"delete_group\":\n if membership.level < models.ADMIN_PERM:\n raise PermissionDenied\n group.delete()\n return redirect(\"/user/groups\")\n else:\n raise PermissionDenied\n return render(\n request,\n \"login/change_form.html\",\n {\n \"group\": group,\n \"choices\": GroupMembership.choices,\n \"errors\": errors,\n \"is_admin\": True,\n },\n )\n\n def __add_user(self, request, group):\n user = OepUser.objects.filter(id=request.POST[\"user_id\"]).first()\n g = user.groups.add(group)\n g.save()\n return self.get(request)\n\n\nclass ProfileUpdateView(UpdateView, LoginRequiredMixin):\n \"\"\"\n Autogenerate a update form for users.\n \"\"\"\n\n model = OepUser\n fields = [\"name\", \"affiliation\", \"email\"]\n template_name_suffix = \"_update_form\"\n\n\nclass EditUserView(View):\n def get(self, request, user_id):\n if not request.user.id == int(user_id):\n raise PermissionDenied\n form = EditUserForm(instance=request.user)\n return render(request, \"login/oepuser_edit_form.html\", {\"form\": form})\n\n def post(self, request, user_id):\n if not request.user.id == int(user_id):\n raise PermissionDenied\n form = EditUserForm(request.POST, instance=request.user)\n if form.is_valid():\n form.save()\n return redirect(\"/user/profile/{id}\".format(id=request.user.id))\n else:\n return render(request, \"login/oepuser_edit_form.html\", {\"form\": form})\n\n\nclass CreateUserView(View):\n def get(self, request):\n form = CreateUserForm()\n return render(request, \"login/oepuser_create_form.html\", {\"form\": form})\n\n def post(self, request):\n form = CreateUserForm(request.POST)\n if form.is_valid():\n user = form.save()\n return redirect(\"activate\")\n else:\n return render(request, \"login/oepuser_create_form.html\", {\"form\": form})\n\n\nclass DetachView(LoginRequiredMixin, View):\n def get(self, request):\n if request.user.is_native:\n raise PermissionDenied\n form = DetachForm(request.user)\n return render(request, 
\"login/detach.html\", {\"form\": form})\n\n def post(self, request):\n if request.user.is_native:\n raise PermissionDenied\n form = DetachForm(request.user, request.POST)\n if form.is_valid():\n form.save()\n return redirect(\"/\")\n else:\n print(form.errors)\n return render(request, \"login/detach.html\", {\"form\": form})\n\n\nclass OEPPasswordChangeView(PasswordChangeView):\n template_name = \"login/generic_form.html\"\n success_url = \"/\"\n\n\nclass ActivationNoteView(FormView):\n template_name = \"login/activate.html\"\n form_class = ChangeEmailForm\n success_url = \"user/activate\"\n\n def form_valid(self, form):\n if self.request.user.is_anonymous or self.request.user.is_mail_verified:\n raise PermissionDenied\n form.save(self.request.user)\n return super(ActivationNoteView, self).form_valid(form)\n\n\ndef activate(request, token):\n token_obj = models.ActivationToken.objects.filter(value=token).first()\n if not token_obj:\n form = ChangeEmailForm()\n form._errors = {\n forms.forms.NON_FIELD_ERRORS: form.error_class(\n [\"Your token was invalid or expired\"]\n )\n }\n return render(request, \"login/activate.html\", {\"form\": form})\n else:\n token_obj.user.is_mail_verified = True\n token_obj.user.save()\n token_obj.delete()\n return redirect(\"/user/profile/{id}\".format(id=token_obj.user.id))\n", "path": "login/views.py"}]}
| 3,511 | 91 |
gh_patches_debug_1474
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-9429
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[rllib] MARWIL tuned cartpole example (and my own experiments) produce nan rewards only.
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem? + Reproduction
I have a custom example that produces offline data and picks it up with MARWIL for training. I observed that I get `nan` reward values for my example every time, so I went a step back and used your cartpole example:
https://github.com/ray-project/ray/blob/cd5a207d69cdaf05b47d956c18e89d928585eec7/rllib/tuned_examples/marwil/cartpole-marwil.yaml
I'm following the exact steps there, i.e. first run
```
./train.py --run=PPO --env=CartPole-v0 \
--stop='{"timesteps_total": 50000}' \
--config='{"output": "/tmp/out", "batch_mode": "complete_episodes"}'
```
followed by
```
rllib train -f cartpole-marwil.yaml
```
I did this both on my currently preferred stable version `0.8.5`, as well as on the `0.9.0.dev0` wheel. The result is this:
```
== Status ==
Memory usage on this node: 19.4/32.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/12 CPUs, 0/0 GPUs, 0.0/9.96 GiB heap, 0.0/3.42 GiB objects
Result logdir: /Users/maxpumperla/ray_results/cartpole-marwil
Number of trials: 2 (2 TERMINATED)
+--------------------------------+------------+-------+--------+--------+------------------+--------+----------+
| Trial name | status | loc | beta | iter | total time (s) | ts | reward |
|--------------------------------+------------+-------+--------+--------+------------------+--------+----------|
| MARWIL_CartPole-v0_7af06_00000 | TERMINATED | | 0 | 2206 | 58.5661 | 500007 | nan |
| MARWIL_CartPole-v0_7af06_00001 | TERMINATED | | 1 | 2248 | 58.6117 | 500286 | nan |
+--------------------------------+------------+-------+--------+--------+------------------+--------+----------+
```
```
Also, I've noticed that your MARWIL unit test is a pure smoke test and doesn't check reward values, but I didn't run that locally. Maybe it produces nan values as well.
In any case I'd appreciate any input here, as we'd love to use MARWIL for our "real" use case, in which we see the same behaviour.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rllib/examples/custom_loss.py`
Content:
```
1 """Example of using custom_loss() with an imitation learning loss.
2
3 The default input file is too small to learn a good policy, but you can
4 generate new experiences for IL training as follows:
5
6 To generate experiences:
7 $ ./train.py --run=PG --config='{"output": "/tmp/cartpole"}' --env=CartPole-v0
8
9 To train on experiences with joint PG + IL loss:
10 $ python custom_loss.py --input-files=/tmp/cartpole
11 """
12
13 import argparse
14 from pathlib import Path
15 import os
16
17 import ray
18 from ray import tune
19 from ray.rllib.examples.models.custom_loss_model import CustomLossModel, \
20 TorchCustomLossModel
21 from ray.rllib.models import ModelCatalog
22 from ray.rllib.utils.framework import try_import_tf
23
24 tf1, tf, tfv = try_import_tf()
25
26 parser = argparse.ArgumentParser()
27 parser.add_argument("--torch", action="store_true")
28 parser.add_argument("--stop-iters", type=int, default=200)
29 parser.add_argument(
30 "--input-files",
31 type=str,
32 default=os.path.join(
33 os.path.dirname(os.path.abspath(__file__)),
34 "../tests/data/cartpole_small"))
35
36 if __name__ == "__main__":
37 ray.init()
38 args = parser.parse_args()
39
40 # Bazel makes it hard to find files specified in `args` (and `data`).
41 # Look for them here.
42 if not os.path.exists(args.input_files):
43 # This script runs in the ray/rllib/examples dir.
44 rllib_dir = Path(__file__).parent.parent
45 input_dir = rllib_dir.absolute().joinpath(args.input_files)
46 args.input_files = str(input_dir)
47
48 ModelCatalog.register_custom_model(
49 "custom_loss", TorchCustomLossModel if args.torch else CustomLossModel)
50
51 config = {
52 "env": "CartPole-v0",
53 "num_workers": 0,
54 "model": {
55 "custom_model": "custom_loss",
56 "custom_model_config": {
57 "input_files": args.input_files,
58 },
59 },
60 "framework": "torch" if args.torch else "tf",
61 }
62
63 stop = {
64 "training_iteration": args.stop_iters,
65 }
66
67 tune.run("PG", config=config, stop=stop)
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rllib/examples/custom_loss.py b/rllib/examples/custom_loss.py
--- a/rllib/examples/custom_loss.py
+++ b/rllib/examples/custom_loss.py
@@ -31,7 +31,7 @@
type=str,
default=os.path.join(
os.path.dirname(os.path.abspath(__file__)),
- "../tests/data/cartpole_small"))
+ "../tests/data/cartpole/small"))
if __name__ == "__main__":
ray.init()
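The patch above only changes the example script's default offline-data path (`cartpole_small` to `cartpole/small`). The nan values in the reward column are typically what an offline algorithm such as MARWIL reports when it never samples live episodes, since there are no episode returns to average. A common workaround, sketched here as an assumption rather than as part of this patch (config keys taken from RLlib of roughly that era), is to add an evaluation phase that rolls out in the real environment:

```python
import ray
from ray import tune

ray.init()

# Sketch only: train MARWIL from the offline experiences written by the PPO run,
# and periodically evaluate against the live environment so episode_reward_mean
# is computed from real rollouts instead of staying nan.
tune.run(
    "MARWIL",
    stop={"timesteps_total": 500000},
    config={
        "env": "CartPole-v0",
        "input": "/tmp/out",          # offline experiences from the first command
        "beta": 1.0,
        "evaluation_interval": 1,
        "evaluation_num_episodes": 10,
        "evaluation_config": {"input": "sampler"},
    },
)
```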
|
{"golden_diff": "diff --git a/rllib/examples/custom_loss.py b/rllib/examples/custom_loss.py\n--- a/rllib/examples/custom_loss.py\n+++ b/rllib/examples/custom_loss.py\n@@ -31,7 +31,7 @@\n type=str,\n default=os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n- \"../tests/data/cartpole_small\"))\n+ \"../tests/data/cartpole/small\"))\n \n if __name__ == \"__main__\":\n ray.init()\n", "issue": "[rllib] MARWIL tuned cartpole example (and my own experiments) produce nan rewards only.\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem? + Reproduction\r\n\r\nI have a custom example that produces offline data and picks it up with MARWIL for training. I observed that I get `nan` reward values for my example every time, so I went a step back and used your cartpole example:\r\n\r\nhttps://github.com/ray-project/ray/blob/cd5a207d69cdaf05b47d956c18e89d928585eec7/rllib/tuned_examples/marwil/cartpole-marwil.yaml\r\n\r\nI'm following the exact steps there, i.e. first run \r\n\r\n```\r\n./train.py --run=PPO --env=CartPole-v0 \\\r\n --stop='{\"timesteps_total\": 50000}' \\\r\n --config='{\"output\": \"/tmp/out\", \"batch_mode\": \"complete_episodes\"}'\r\n```\r\n\r\nfollowed by \r\n\r\n```\r\nrllib train -f cartpole-marwil.yaml\r\n```\r\n\r\nI did this both on my currently preferred stable version `0.8.5`, as well as on the `0.9.0.dev0` wheel. The result is this:\r\n\r\n```\r\n== Status ==\r\nMemory usage on this node: 19.4/32.0 GiB\r\nUsing FIFO scheduling algorithm.\r\nResources requested: 0/12 CPUs, 0/0 GPUs, 0.0/9.96 GiB heap, 0.0/3.42 GiB objects\r\nResult logdir: /Users/maxpumperla/ray_results/cartpole-marwil\r\nNumber of trials: 2 (2 TERMINATED)\r\n+--------------------------------+------------+-------+--------+--------+------------------+--------+----------+\r\n| Trial name | status | loc | beta | iter | total time (s) | ts | reward |\r\n|--------------------------------+------------+-------+--------+--------+------------------+--------+----------|\r\n| MARWIL_CartPole-v0_7af06_00000 | TERMINATED | | 0 | 2206 | 58.5661 | 500007 | nan |\r\n| MARWIL_CartPole-v0_7af06_00001 | TERMINATED | | 1 | 2248 | 58.6117 | 500286 | nan |\r\n+--------------------------------+------------+-------+--------+--------+------------------+--------+----------+\r\n```\r\n\r\nAlso, I've noticed that your MARWIL unit test is a pure smoke test and doesn't check reward values, but I didn't run that locally. 
Maybe it produces nan values as well.\r\n\r\nIn any case I'd appreciate any input here, as we'd love to use MARWIL for our \"real\" use case, in which we see the same behaviour.\n", "before_files": [{"content": "\"\"\"Example of using custom_loss() with an imitation learning loss.\n\nThe default input file is too small to learn a good policy, but you can\ngenerate new experiences for IL training as follows:\n\nTo generate experiences:\n$ ./train.py --run=PG --config='{\"output\": \"/tmp/cartpole\"}' --env=CartPole-v0\n\nTo train on experiences with joint PG + IL loss:\n$ python custom_loss.py --input-files=/tmp/cartpole\n\"\"\"\n\nimport argparse\nfrom pathlib import Path\nimport os\n\nimport ray\nfrom ray import tune\nfrom ray.rllib.examples.models.custom_loss_model import CustomLossModel, \\\n TorchCustomLossModel\nfrom ray.rllib.models import ModelCatalog\nfrom ray.rllib.utils.framework import try_import_tf\n\ntf1, tf, tfv = try_import_tf()\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--torch\", action=\"store_true\")\nparser.add_argument(\"--stop-iters\", type=int, default=200)\nparser.add_argument(\n \"--input-files\",\n type=str,\n default=os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n \"../tests/data/cartpole_small\"))\n\nif __name__ == \"__main__\":\n ray.init()\n args = parser.parse_args()\n\n # Bazel makes it hard to find files specified in `args` (and `data`).\n # Look for them here.\n if not os.path.exists(args.input_files):\n # This script runs in the ray/rllib/examples dir.\n rllib_dir = Path(__file__).parent.parent\n input_dir = rllib_dir.absolute().joinpath(args.input_files)\n args.input_files = str(input_dir)\n\n ModelCatalog.register_custom_model(\n \"custom_loss\", TorchCustomLossModel if args.torch else CustomLossModel)\n\n config = {\n \"env\": \"CartPole-v0\",\n \"num_workers\": 0,\n \"model\": {\n \"custom_model\": \"custom_loss\",\n \"custom_model_config\": {\n \"input_files\": args.input_files,\n },\n },\n \"framework\": \"torch\" if args.torch else \"tf\",\n }\n\n stop = {\n \"training_iteration\": args.stop_iters,\n }\n\n tune.run(\"PG\", config=config, stop=stop)\n", "path": "rllib/examples/custom_loss.py"}], "after_files": [{"content": "\"\"\"Example of using custom_loss() with an imitation learning loss.\n\nThe default input file is too small to learn a good policy, but you can\ngenerate new experiences for IL training as follows:\n\nTo generate experiences:\n$ ./train.py --run=PG --config='{\"output\": \"/tmp/cartpole\"}' --env=CartPole-v0\n\nTo train on experiences with joint PG + IL loss:\n$ python custom_loss.py --input-files=/tmp/cartpole\n\"\"\"\n\nimport argparse\nfrom pathlib import Path\nimport os\n\nimport ray\nfrom ray import tune\nfrom ray.rllib.examples.models.custom_loss_model import CustomLossModel, \\\n TorchCustomLossModel\nfrom ray.rllib.models import ModelCatalog\nfrom ray.rllib.utils.framework import try_import_tf\n\ntf1, tf, tfv = try_import_tf()\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--torch\", action=\"store_true\")\nparser.add_argument(\"--stop-iters\", type=int, default=200)\nparser.add_argument(\n \"--input-files\",\n type=str,\n default=os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n \"../tests/data/cartpole/small\"))\n\nif __name__ == \"__main__\":\n ray.init()\n args = parser.parse_args()\n\n # Bazel makes it hard to find files specified in `args` (and `data`).\n # Look for them here.\n if not os.path.exists(args.input_files):\n # This script runs in the 
ray/rllib/examples dir.\n rllib_dir = Path(__file__).parent.parent\n input_dir = rllib_dir.absolute().joinpath(args.input_files)\n args.input_files = str(input_dir)\n\n ModelCatalog.register_custom_model(\n \"custom_loss\", TorchCustomLossModel if args.torch else CustomLossModel)\n\n config = {\n \"env\": \"CartPole-v0\",\n \"num_workers\": 0,\n \"model\": {\n \"custom_model\": \"custom_loss\",\n \"custom_model_config\": {\n \"input_files\": args.input_files,\n },\n },\n \"framework\": \"torch\" if args.torch else \"tf\",\n }\n\n stop = {\n \"training_iteration\": args.stop_iters,\n }\n\n tune.run(\"PG\", config=config, stop=stop)\n", "path": "rllib/examples/custom_loss.py"}]}
| 1,545 | 100 |
gh_patches_debug_35952
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-1773
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add `otelTraceSampled` field to LogEntry for OLTP Logging Instrumentation module
Before opening a feature request against this repo, consider whether the feature should/could be implemented in the [other OpenTelemetry client libraries](https://github.com/open-telemetry/). If so, please [open an issue on opentelemetry-specification](https://github.com/open-telemetry/opentelemetry-specification/issues/new) first.
**Is your feature request related to a problem?**
Getting span id and trace id in the log record is a must. Cloud provider libraries, e.g. Google Cloud Logging also provides a `logging.googleapis.com/trace_sampled` field under structured logging, which can be populated using this library.
**Describe the solution you'd like**
Add a `record.otelTraceSampled` field similar to `record.otelSpanID` and `record.otelTraceID` in the log entry using the `trace_flags` property in `SpanContext`.
**Describe alternatives you've considered**
Manually injecting the value of `trace_flags` property into the log record by using the current `SpanContext`.
--- END ISSUE ---
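As a rough sketch of what the requested field could look like (not the instrumentation's actual implementation): the sampled flag is available on the current span context's trace flags. The snippet below assumes only the public `opentelemetry-api` surface and the standard library `logging` module; the factory function, the fallback values and the format string are illustrative choices, with the `otelTraceSampled` name following the convention proposed in the issue.

```python
import logging

from opentelemetry import trace

# Illustrative record factory that injects span id, trace id and the sampled flag.
_default_factory = logging.getLogRecordFactory()


def _otel_record_factory(*args, **kwargs):
    record = _default_factory(*args, **kwargs)
    ctx = trace.get_current_span().get_span_context()
    if ctx.is_valid:
        record.otelSpanID = format(ctx.span_id, "016x")
        record.otelTraceID = format(ctx.trace_id, "032x")
        record.otelTraceSampled = ctx.trace_flags.sampled
    else:
        record.otelSpanID = "0"
        record.otelTraceID = "0"
        record.otelTraceSampled = False
    return record


logging.setLogRecordFactory(_otel_record_factory)

logging.basicConfig(
    format="%(asctime)s %(levelname)s [%(name)s] "
    "[trace_id=%(otelTraceID)s span_id=%(otelSpanID)s "
    "trace_sampled=%(otelTraceSampled)s] - %(message)s",
    level=logging.INFO,
)
```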
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 DEFAULT_LOGGING_FORMAT = "%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s resource.service.name=%(otelServiceName)s] - %(message)s"
16
17
18 _MODULE_DOC = """
19 The OpenTelemetry ``logging`` integration automatically injects tracing context into log statements.
20
21 The integration registers a custom log record factory with the the standard library logging module that automatically inject
22 tracing context into log record objects. Optionally, the integration can also call ``logging.basicConfig()`` to set a logging
23 format with placeholders for span ID, trace ID and service name.
24
25 The following keys are injected into log record objects by the factory:
26
27 - ``otelSpanID``
28 - ``otelTraceID``
29 - ``otelServiceName``
30
31 The integration uses the following logging format by default:
32
33 .. code-block::
34
35 {default_logging_format}
36
37 Enable trace context injection
38 ------------------------------
39
40 The integration is opt-in and must be enabled explicitly by setting the environment variable ``OTEL_PYTHON_LOG_CORRELATION`` to ``true``.
41
42 The integration always registers the custom factory that injects the tracing context into the log record objects. Setting
43 ``OTEL_PYTHON_LOG_CORRELATION`` to ``true`` calls ``logging.basicConfig()`` to set a logging format that actually makes
44 use of the injected variables.
45
46
47 Environment variables
48 ---------------------
49
50 .. envvar:: OTEL_PYTHON_LOG_CORRELATION
51
52 This env var must be set to ``true`` in order to enable trace context injection into logs by calling ``logging.basicConfig()`` and
53 setting a logging format that makes use of the injected tracing variables.
54
55 Alternatively, ``set_logging_format`` argument can be set to ``True`` when initializing the ``LoggingInstrumentor`` class to achieve the
56 same effect.
57
58 .. code-block::
59
60 LoggingInstrumentor(set_logging_format=True)
61
62 The default value is ``false``.
63
64 .. envvar:: OTEL_PYTHON_LOG_FORMAT
65
66 This env var can be used to instruct the instrumentation to use a custom logging format.
67
68 Alternatively, a custom logging format can be passed to the ``LoggingInstrumentor`` as the ``logging_format`` argument. For example:
69
70 .. code-block::
71
72 LoggingInstrumentor(logging_format='%(msg)s [span_id=%(span_id)s]')
73
74
75 The default value is:
76
77 .. code-block::
78
79 {default_logging_format}
80
81 .. envvar:: OTEL_PYTHON_LOG_LEVEL
82
83 This env var can be used to set a custom logging level.
84
85 Alternatively, log level can be passed to the ``LoggingInstrumentor`` during initialization. For example:
86
87 .. code-block::
88
89 LoggingInstrumentor(log_level=logging.DEBUG)
90
91
92 The default value is ``info``.
93
94 Options are:
95
96 - ``info``
97 - ``error``
98 - ``debug``
99 - ``warning``
100
101 Manually calling logging.basicConfig
102 ------------------------------------
103
104 ``logging.basicConfig()`` can be called to set a global logging level and format. Only the first ever call has any effect on the global logger.
105 Any subsequent calls have no effect and do not override a previously configured global logger. This integration calls ``logging.basicConfig()`` for you
106 when ``OTEL_PYTHON_LOG_CORRELATION`` is set to ``true``. It uses the format and level specified by ``OTEL_PYTHON_LOG_FORMAT`` and ``OTEL_PYTHON_LOG_LEVEL``
107 environment variables respectively.
108
109 If you code or some other library/framework you are using calls logging.basicConfig before this integration is enabled, then this integration's logging
110 format will not be used and log statements will not contain tracing context. For this reason, you'll need to make sure this integration is enabled as early
111 as possible in the service lifecycle or your framework is configured to use a logging format with placeholders for tracing context. This can be achieved by
112 adding the following placeholders to your logging format:
113
114 .. code-block::
115
116 %(otelSpanID)s %(otelTraceID)s %(otelServiceName)s
117
118
119
120 API
121 -----
122
123 .. code-block:: python
124
125 from opentelemetry.instrumentation.logging import LoggingInstrumentor
126
127 LoggingInstrumentor().instrument(set_logging_format=True)
128
129
130 Note
131 -----
132
133 If you do not set ``OTEL_PYTHON_LOG_CORRELATION`` to ``true`` but instead set the logging format manually or through your framework, you must ensure that this
134 integration is enabled before you set the logging format. This is important because unless the integration is enabled, the tracing context variables
135 are not injected into the log record objects. This means any attempted log statements made after setting the logging format and before enabling this integration
136 will result in KeyError exceptions. Such exceptions are automatically swallowed by the logging module and do not result in crashes but you may still lose out
137 on important log messages.
138 """.format(
139 default_logging_format=DEFAULT_LOGGING_FORMAT
140 )
141
```
Path: `instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint: disable=empty-docstring,no-value-for-parameter,no-member,no-name-in-module
16
17 import logging # pylint: disable=import-self
18 from os import environ
19 from typing import Collection
20
21 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
22 from opentelemetry.instrumentation.logging.constants import (
23 _MODULE_DOC,
24 DEFAULT_LOGGING_FORMAT,
25 )
26 from opentelemetry.instrumentation.logging.environment_variables import (
27 OTEL_PYTHON_LOG_CORRELATION,
28 OTEL_PYTHON_LOG_FORMAT,
29 OTEL_PYTHON_LOG_LEVEL,
30 )
31 from opentelemetry.instrumentation.logging.package import _instruments
32 from opentelemetry.trace import (
33 INVALID_SPAN,
34 INVALID_SPAN_CONTEXT,
35 get_current_span,
36 get_tracer_provider,
37 )
38
39 __doc__ = _MODULE_DOC
40
41 LEVELS = {
42 "debug": logging.DEBUG,
43 "info": logging.INFO,
44 "warning": logging.WARNING,
45 "error": logging.ERROR,
46 }
47
48
49 class LoggingInstrumentor(BaseInstrumentor): # pylint: disable=empty-docstring
50 __doc__ = f"""An instrumentor for stdlib logging module.
51
52 This instrumentor injects tracing context into logging records and optionally sets the global logging format to the following:
53
54 .. code-block::
55
56 {DEFAULT_LOGGING_FORMAT}
57
58 def log_hook(span: Span, record: LogRecord):
59 if span and span.is_recording():
60 record.custom_user_attribute_from_log_hook = "some-value"
61
62 Args:
63 tracer_provider: Tracer provider instance that can be used to fetch a tracer.
64 set_logging_format: When set to True, it calls logging.basicConfig() and sets a logging format.
65 logging_format: Accepts a string and sets it as the logging format when set_logging_format
66 is set to True.
67 log_level: Accepts one of the following values and sets the logging level to it.
68 logging.INFO
69 logging.DEBUG
70 logging.WARN
71 logging.ERROR
72 logging.FATAL
73 log_hook: execute custom logic when record is created
74
75 See `BaseInstrumentor`
76 """
77
78 _old_factory = None
79 _log_hook = None
80
81 def instrumentation_dependencies(self) -> Collection[str]:
82 return _instruments
83
84 def _instrument(self, **kwargs):
85 provider = kwargs.get("tracer_provider", None) or get_tracer_provider()
86 old_factory = logging.getLogRecordFactory()
87 LoggingInstrumentor._old_factory = old_factory
88 LoggingInstrumentor._log_hook = kwargs.get("log_hook", None)
89
90 service_name = None
91
92 def record_factory(*args, **kwargs):
93 record = old_factory(*args, **kwargs)
94
95 record.otelSpanID = "0"
96 record.otelTraceID = "0"
97
98 nonlocal service_name
99 if service_name is None:
100 resource = getattr(provider, "resource", None)
101 if resource:
102 service_name = (
103 resource.attributes.get("service.name") or ""
104 )
105 else:
106 service_name = ""
107
108 record.otelServiceName = service_name
109
110 span = get_current_span()
111 if span != INVALID_SPAN:
112 ctx = span.get_span_context()
113 if ctx != INVALID_SPAN_CONTEXT:
114 record.otelSpanID = format(ctx.span_id, "016x")
115 record.otelTraceID = format(ctx.trace_id, "032x")
116 if callable(LoggingInstrumentor._log_hook):
117 try:
118 LoggingInstrumentor._log_hook( # pylint: disable=E1102
119 span, record
120 )
121 except Exception: # pylint: disable=W0703
122 pass
123
124 return record
125
126 logging.setLogRecordFactory(record_factory)
127
128 set_logging_format = kwargs.get(
129 "set_logging_format",
130 environ.get(OTEL_PYTHON_LOG_CORRELATION, "false").lower()
131 == "true",
132 )
133
134 if set_logging_format:
135 log_format = kwargs.get(
136 "logging_format", environ.get(OTEL_PYTHON_LOG_FORMAT, None)
137 )
138 log_format = log_format or DEFAULT_LOGGING_FORMAT
139
140 log_level = kwargs.get(
141 "log_level", LEVELS.get(environ.get(OTEL_PYTHON_LOG_LEVEL))
142 )
143 log_level = log_level or logging.INFO
144
145 logging.basicConfig(format=log_format, level=log_level)
146
147 def _uninstrument(self, **kwargs):
148 if LoggingInstrumentor._old_factory:
149 logging.setLogRecordFactory(LoggingInstrumentor._old_factory)
150 LoggingInstrumentor._old_factory = None
151
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py
@@ -94,6 +94,7 @@
record.otelSpanID = "0"
record.otelTraceID = "0"
+ record.otelTraceSampled = False
nonlocal service_name
if service_name is None:
@@ -113,6 +114,7 @@
if ctx != INVALID_SPAN_CONTEXT:
record.otelSpanID = format(ctx.span_id, "016x")
record.otelTraceID = format(ctx.trace_id, "032x")
+ record.otelTraceSampled = ctx.trace_flags.sampled
if callable(LoggingInstrumentor._log_hook):
try:
LoggingInstrumentor._log_hook( # pylint: disable=E1102
diff --git a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py
--- a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py
+++ b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-DEFAULT_LOGGING_FORMAT = "%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s resource.service.name=%(otelServiceName)s] - %(message)s"
+DEFAULT_LOGGING_FORMAT = "%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s resource.service.name=%(otelServiceName)s trace_sampled=%(otelTraceSampled)s] - %(message)s"
_MODULE_DOC = """
@@ -27,6 +27,7 @@
- ``otelSpanID``
- ``otelTraceID``
- ``otelServiceName``
+- ``otelTraceSampled``
The integration uses the following logging format by default:
@@ -113,7 +114,7 @@
.. code-block::
- %(otelSpanID)s %(otelTraceID)s %(otelServiceName)s
+ %(otelSpanID)s %(otelTraceID)s %(otelServiceName)s %(otelTraceSampled)s
|
{"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py\n@@ -94,6 +94,7 @@\n \n record.otelSpanID = \"0\"\n record.otelTraceID = \"0\"\n+ record.otelTraceSampled = False\n \n nonlocal service_name\n if service_name is None:\n@@ -113,6 +114,7 @@\n if ctx != INVALID_SPAN_CONTEXT:\n record.otelSpanID = format(ctx.span_id, \"016x\")\n record.otelTraceID = format(ctx.trace_id, \"032x\")\n+ record.otelTraceSampled = ctx.trace_flags.sampled\n if callable(LoggingInstrumentor._log_hook):\n try:\n LoggingInstrumentor._log_hook( # pylint: disable=E1102\ndiff --git a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py\n--- a/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py\n+++ b/instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py\n@@ -12,7 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-DEFAULT_LOGGING_FORMAT = \"%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s resource.service.name=%(otelServiceName)s] - %(message)s\"\n+DEFAULT_LOGGING_FORMAT = \"%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s resource.service.name=%(otelServiceName)s trace_sampled=%(otelTraceSampled)s] - %(message)s\"\n \n \n _MODULE_DOC = \"\"\"\n@@ -27,6 +27,7 @@\n - ``otelSpanID``\n - ``otelTraceID``\n - ``otelServiceName``\n+- ``otelTraceSampled``\n \n The integration uses the following logging format by default:\n \n@@ -113,7 +114,7 @@\n \n .. code-block::\n \n- %(otelSpanID)s %(otelTraceID)s %(otelServiceName)s\n+ %(otelSpanID)s %(otelTraceID)s %(otelServiceName)s %(otelTraceSampled)s\n", "issue": "Add `otelTraceSampled` field to LogEntry for OLTP Logging Instrumentation module\nBefore opening a feature request against this repo, consider whether the feature should/could be implemented in the [other OpenTelemetry client libraries](https://github.com/open-telemetry/). If so, please [open an issue on opentelemetry-specification](https://github.com/open-telemetry/opentelemetry-specification/issues/new) first.\r\n\r\n**Is your feature request related to a problem?**\r\nGetting span id and trace id in the log record is a must. Cloud provider libraries, e.g. Google Cloud Logging also provides a `logging.googleapis.com/trace_sampled` field under structured logging, which can be populated using this library. \r\n\r\n\r\n**Describe the solution you'd like**\r\nAdd a `record.otelTraceSampled` field similar to `record.otelSpanID` and `record.otelTraceID` in the log entry using the `trace_flags` property in `SpanContext`. 
\r\n\r\n**Describe alternatives you've considered**\r\nManually injecting the value of `trace_flags` property into the log record by using the current `SpanContext`.\r\n\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nDEFAULT_LOGGING_FORMAT = \"%(asctime)s %(levelname)s [%(name)s] [%(filename)s:%(lineno)d] [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s resource.service.name=%(otelServiceName)s] - %(message)s\"\n\n\n_MODULE_DOC = \"\"\"\nThe OpenTelemetry ``logging`` integration automatically injects tracing context into log statements.\n\nThe integration registers a custom log record factory with the the standard library logging module that automatically inject\ntracing context into log record objects. Optionally, the integration can also call ``logging.basicConfig()`` to set a logging\nformat with placeholders for span ID, trace ID and service name.\n\nThe following keys are injected into log record objects by the factory:\n\n- ``otelSpanID``\n- ``otelTraceID``\n- ``otelServiceName``\n\nThe integration uses the following logging format by default:\n\n.. code-block::\n\n {default_logging_format}\n\nEnable trace context injection\n------------------------------\n\nThe integration is opt-in and must be enabled explicitly by setting the environment variable ``OTEL_PYTHON_LOG_CORRELATION`` to ``true``.\n\nThe integration always registers the custom factory that injects the tracing context into the log record objects. Setting\n``OTEL_PYTHON_LOG_CORRELATION`` to ``true`` calls ``logging.basicConfig()`` to set a logging format that actually makes\nuse of the injected variables.\n\n\nEnvironment variables\n---------------------\n\n.. envvar:: OTEL_PYTHON_LOG_CORRELATION\n\nThis env var must be set to ``true`` in order to enable trace context injection into logs by calling ``logging.basicConfig()`` and\nsetting a logging format that makes use of the injected tracing variables.\n\nAlternatively, ``set_logging_format`` argument can be set to ``True`` when initializing the ``LoggingInstrumentor`` class to achieve the\nsame effect.\n\n.. code-block::\n\n LoggingInstrumentor(set_logging_format=True)\n\nThe default value is ``false``.\n\n.. envvar:: OTEL_PYTHON_LOG_FORMAT\n\nThis env var can be used to instruct the instrumentation to use a custom logging format.\n\nAlternatively, a custom logging format can be passed to the ``LoggingInstrumentor`` as the ``logging_format`` argument. For example:\n\n.. code-block::\n\n LoggingInstrumentor(logging_format='%(msg)s [span_id=%(span_id)s]')\n\n\nThe default value is:\n\n.. code-block::\n\n {default_logging_format}\n\n.. envvar:: OTEL_PYTHON_LOG_LEVEL\n\nThis env var can be used to set a custom logging level.\n\nAlternatively, log level can be passed to the ``LoggingInstrumentor`` during initialization. For example:\n\n.. 
code-block::\n\n LoggingInstrumentor(log_level=logging.DEBUG)\n\n\nThe default value is ``info``.\n\nOptions are:\n\n- ``info``\n- ``error``\n- ``debug``\n- ``warning``\n\nManually calling logging.basicConfig\n------------------------------------\n\n``logging.basicConfig()`` can be called to set a global logging level and format. Only the first ever call has any effect on the global logger.\nAny subsequent calls have no effect and do not override a previously configured global logger. This integration calls ``logging.basicConfig()`` for you\nwhen ``OTEL_PYTHON_LOG_CORRELATION`` is set to ``true``. It uses the format and level specified by ``OTEL_PYTHON_LOG_FORMAT`` and ``OTEL_PYTHON_LOG_LEVEL``\nenvironment variables respectively.\n\nIf you code or some other library/framework you are using calls logging.basicConfig before this integration is enabled, then this integration's logging\nformat will not be used and log statements will not contain tracing context. For this reason, you'll need to make sure this integration is enabled as early\nas possible in the service lifecycle or your framework is configured to use a logging format with placeholders for tracing context. This can be achieved by\nadding the following placeholders to your logging format:\n\n.. code-block::\n\n %(otelSpanID)s %(otelTraceID)s %(otelServiceName)s\n\n\n\nAPI\n-----\n\n.. code-block:: python\n\n from opentelemetry.instrumentation.logging import LoggingInstrumentor\n\n LoggingInstrumentor().instrument(set_logging_format=True)\n\n\nNote\n-----\n\nIf you do not set ``OTEL_PYTHON_LOG_CORRELATION`` to ``true`` but instead set the logging format manually or through your framework, you must ensure that this\nintegration is enabled before you set the logging format. This is important because unless the integration is enabled, the tracing context variables\nare not injected into the log record objects. This means any attempted log statements made after setting the logging format and before enabling this integration\nwill result in KeyError exceptions. 
Such exceptions are automatically swallowed by the logging module and do not result in crashes but you may still lose out\non important log messages.\n\"\"\".format(\n default_logging_format=DEFAULT_LOGGING_FORMAT\n)\n", "path": "instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=empty-docstring,no-value-for-parameter,no-member,no-name-in-module\n\nimport logging # pylint: disable=import-self\nfrom os import environ\nfrom typing import Collection\n\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.logging.constants import (\n _MODULE_DOC,\n DEFAULT_LOGGING_FORMAT,\n)\nfrom opentelemetry.instrumentation.logging.environment_variables import (\n OTEL_PYTHON_LOG_CORRELATION,\n OTEL_PYTHON_LOG_FORMAT,\n OTEL_PYTHON_LOG_LEVEL,\n)\nfrom opentelemetry.instrumentation.logging.package import _instruments\nfrom opentelemetry.trace import (\n INVALID_SPAN,\n INVALID_SPAN_CONTEXT,\n get_current_span,\n get_tracer_provider,\n)\n\n__doc__ = _MODULE_DOC\n\nLEVELS = {\n \"debug\": logging.DEBUG,\n \"info\": logging.INFO,\n \"warning\": logging.WARNING,\n \"error\": logging.ERROR,\n}\n\n\nclass LoggingInstrumentor(BaseInstrumentor): # pylint: disable=empty-docstring\n __doc__ = f\"\"\"An instrumentor for stdlib logging module.\n\n This instrumentor injects tracing context into logging records and optionally sets the global logging format to the following:\n\n .. 
code-block::\n\n {DEFAULT_LOGGING_FORMAT}\n\n def log_hook(span: Span, record: LogRecord):\n if span and span.is_recording():\n record.custom_user_attribute_from_log_hook = \"some-value\"\n\n Args:\n tracer_provider: Tracer provider instance that can be used to fetch a tracer.\n set_logging_format: When set to True, it calls logging.basicConfig() and sets a logging format.\n logging_format: Accepts a string and sets it as the logging format when set_logging_format\n is set to True.\n log_level: Accepts one of the following values and sets the logging level to it.\n logging.INFO\n logging.DEBUG\n logging.WARN\n logging.ERROR\n logging.FATAL\n log_hook: execute custom logic when record is created\n\n See `BaseInstrumentor`\n \"\"\"\n\n _old_factory = None\n _log_hook = None\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n provider = kwargs.get(\"tracer_provider\", None) or get_tracer_provider()\n old_factory = logging.getLogRecordFactory()\n LoggingInstrumentor._old_factory = old_factory\n LoggingInstrumentor._log_hook = kwargs.get(\"log_hook\", None)\n\n service_name = None\n\n def record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n\n record.otelSpanID = \"0\"\n record.otelTraceID = \"0\"\n\n nonlocal service_name\n if service_name is None:\n resource = getattr(provider, \"resource\", None)\n if resource:\n service_name = (\n resource.attributes.get(\"service.name\") or \"\"\n )\n else:\n service_name = \"\"\n\n record.otelServiceName = service_name\n\n span = get_current_span()\n if span != INVALID_SPAN:\n ctx = span.get_span_context()\n if ctx != INVALID_SPAN_CONTEXT:\n record.otelSpanID = format(ctx.span_id, \"016x\")\n record.otelTraceID = format(ctx.trace_id, \"032x\")\n if callable(LoggingInstrumentor._log_hook):\n try:\n LoggingInstrumentor._log_hook( # pylint: disable=E1102\n span, record\n )\n except Exception: # pylint: disable=W0703\n pass\n\n return record\n\n logging.setLogRecordFactory(record_factory)\n\n set_logging_format = kwargs.get(\n \"set_logging_format\",\n environ.get(OTEL_PYTHON_LOG_CORRELATION, \"false\").lower()\n == \"true\",\n )\n\n if set_logging_format:\n log_format = kwargs.get(\n \"logging_format\", environ.get(OTEL_PYTHON_LOG_FORMAT, None)\n )\n log_format = log_format or DEFAULT_LOGGING_FORMAT\n\n log_level = kwargs.get(\n \"log_level\", LEVELS.get(environ.get(OTEL_PYTHON_LOG_LEVEL))\n )\n log_level = log_level or logging.INFO\n\n logging.basicConfig(format=log_format, level=log_level)\n\n def _uninstrument(self, **kwargs):\n if LoggingInstrumentor._old_factory:\n logging.setLogRecordFactory(LoggingInstrumentor._old_factory)\n LoggingInstrumentor._old_factory = None\n", "path": "instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nDEFAULT_LOGGING_FORMAT = \"%(asctime)s %(levelname)s 
[%(name)s] [%(filename)s:%(lineno)d] [trace_id=%(otelTraceID)s span_id=%(otelSpanID)s resource.service.name=%(otelServiceName)s trace_sampled=%(otelTraceSampled)s] - %(message)s\"\n\n\n_MODULE_DOC = \"\"\"\nThe OpenTelemetry ``logging`` integration automatically injects tracing context into log statements.\n\nThe integration registers a custom log record factory with the the standard library logging module that automatically inject\ntracing context into log record objects. Optionally, the integration can also call ``logging.basicConfig()`` to set a logging\nformat with placeholders for span ID, trace ID and service name.\n\nThe following keys are injected into log record objects by the factory:\n\n- ``otelSpanID``\n- ``otelTraceID``\n- ``otelServiceName``\n- ``otelTraceSampled``\n\nThe integration uses the following logging format by default:\n\n.. code-block::\n\n {default_logging_format}\n\nEnable trace context injection\n------------------------------\n\nThe integration is opt-in and must be enabled explicitly by setting the environment variable ``OTEL_PYTHON_LOG_CORRELATION`` to ``true``.\n\nThe integration always registers the custom factory that injects the tracing context into the log record objects. Setting\n``OTEL_PYTHON_LOG_CORRELATION`` to ``true`` calls ``logging.basicConfig()`` to set a logging format that actually makes\nuse of the injected variables.\n\n\nEnvironment variables\n---------------------\n\n.. envvar:: OTEL_PYTHON_LOG_CORRELATION\n\nThis env var must be set to ``true`` in order to enable trace context injection into logs by calling ``logging.basicConfig()`` and\nsetting a logging format that makes use of the injected tracing variables.\n\nAlternatively, ``set_logging_format`` argument can be set to ``True`` when initializing the ``LoggingInstrumentor`` class to achieve the\nsame effect.\n\n.. code-block::\n\n LoggingInstrumentor(set_logging_format=True)\n\nThe default value is ``false``.\n\n.. envvar:: OTEL_PYTHON_LOG_FORMAT\n\nThis env var can be used to instruct the instrumentation to use a custom logging format.\n\nAlternatively, a custom logging format can be passed to the ``LoggingInstrumentor`` as the ``logging_format`` argument. For example:\n\n.. code-block::\n\n LoggingInstrumentor(logging_format='%(msg)s [span_id=%(span_id)s]')\n\n\nThe default value is:\n\n.. code-block::\n\n {default_logging_format}\n\n.. envvar:: OTEL_PYTHON_LOG_LEVEL\n\nThis env var can be used to set a custom logging level.\n\nAlternatively, log level can be passed to the ``LoggingInstrumentor`` during initialization. For example:\n\n.. code-block::\n\n LoggingInstrumentor(log_level=logging.DEBUG)\n\n\nThe default value is ``info``.\n\nOptions are:\n\n- ``info``\n- ``error``\n- ``debug``\n- ``warning``\n\nManually calling logging.basicConfig\n------------------------------------\n\n``logging.basicConfig()`` can be called to set a global logging level and format. Only the first ever call has any effect on the global logger.\nAny subsequent calls have no effect and do not override a previously configured global logger. This integration calls ``logging.basicConfig()`` for you\nwhen ``OTEL_PYTHON_LOG_CORRELATION`` is set to ``true``. 
It uses the format and level specified by ``OTEL_PYTHON_LOG_FORMAT`` and ``OTEL_PYTHON_LOG_LEVEL``\nenvironment variables respectively.\n\nIf you code or some other library/framework you are using calls logging.basicConfig before this integration is enabled, then this integration's logging\nformat will not be used and log statements will not contain tracing context. For this reason, you'll need to make sure this integration is enabled as early\nas possible in the service lifecycle or your framework is configured to use a logging format with placeholders for tracing context. This can be achieved by\nadding the following placeholders to your logging format:\n\n.. code-block::\n\n %(otelSpanID)s %(otelTraceID)s %(otelServiceName)s %(otelTraceSampled)s\n\n\n\nAPI\n-----\n\n.. code-block:: python\n\n from opentelemetry.instrumentation.logging import LoggingInstrumentor\n\n LoggingInstrumentor().instrument(set_logging_format=True)\n\n\nNote\n-----\n\nIf you do not set ``OTEL_PYTHON_LOG_CORRELATION`` to ``true`` but instead set the logging format manually or through your framework, you must ensure that this\nintegration is enabled before you set the logging format. This is important because unless the integration is enabled, the tracing context variables\nare not injected into the log record objects. This means any attempted log statements made after setting the logging format and before enabling this integration\nwill result in KeyError exceptions. Such exceptions are automatically swallowed by the logging module and do not result in crashes but you may still lose out\non important log messages.\n\"\"\".format(\n default_logging_format=DEFAULT_LOGGING_FORMAT\n)\n", "path": "instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/constants.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=empty-docstring,no-value-for-parameter,no-member,no-name-in-module\n\nimport logging # pylint: disable=import-self\nfrom os import environ\nfrom typing import Collection\n\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.logging.constants import (\n _MODULE_DOC,\n DEFAULT_LOGGING_FORMAT,\n)\nfrom opentelemetry.instrumentation.logging.environment_variables import (\n OTEL_PYTHON_LOG_CORRELATION,\n OTEL_PYTHON_LOG_FORMAT,\n OTEL_PYTHON_LOG_LEVEL,\n)\nfrom opentelemetry.instrumentation.logging.package import _instruments\nfrom opentelemetry.trace import (\n INVALID_SPAN,\n INVALID_SPAN_CONTEXT,\n get_current_span,\n get_tracer_provider,\n)\n\n__doc__ = _MODULE_DOC\n\nLEVELS = {\n \"debug\": logging.DEBUG,\n \"info\": logging.INFO,\n \"warning\": logging.WARNING,\n \"error\": logging.ERROR,\n}\n\n\nclass LoggingInstrumentor(BaseInstrumentor): # pylint: disable=empty-docstring\n __doc__ = f\"\"\"An instrumentor for stdlib logging module.\n\n This instrumentor injects tracing context into logging records and optionally sets the global logging 
format to the following:\n\n .. code-block::\n\n {DEFAULT_LOGGING_FORMAT}\n\n def log_hook(span: Span, record: LogRecord):\n if span and span.is_recording():\n record.custom_user_attribute_from_log_hook = \"some-value\"\n\n Args:\n tracer_provider: Tracer provider instance that can be used to fetch a tracer.\n set_logging_format: When set to True, it calls logging.basicConfig() and sets a logging format.\n logging_format: Accepts a string and sets it as the logging format when set_logging_format\n is set to True.\n log_level: Accepts one of the following values and sets the logging level to it.\n logging.INFO\n logging.DEBUG\n logging.WARN\n logging.ERROR\n logging.FATAL\n log_hook: execute custom logic when record is created\n\n See `BaseInstrumentor`\n \"\"\"\n\n _old_factory = None\n _log_hook = None\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n provider = kwargs.get(\"tracer_provider\", None) or get_tracer_provider()\n old_factory = logging.getLogRecordFactory()\n LoggingInstrumentor._old_factory = old_factory\n LoggingInstrumentor._log_hook = kwargs.get(\"log_hook\", None)\n\n service_name = None\n\n def record_factory(*args, **kwargs):\n record = old_factory(*args, **kwargs)\n\n record.otelSpanID = \"0\"\n record.otelTraceID = \"0\"\n record.otelTraceSampled = False\n\n nonlocal service_name\n if service_name is None:\n resource = getattr(provider, \"resource\", None)\n if resource:\n service_name = (\n resource.attributes.get(\"service.name\") or \"\"\n )\n else:\n service_name = \"\"\n\n record.otelServiceName = service_name\n\n span = get_current_span()\n if span != INVALID_SPAN:\n ctx = span.get_span_context()\n if ctx != INVALID_SPAN_CONTEXT:\n record.otelSpanID = format(ctx.span_id, \"016x\")\n record.otelTraceID = format(ctx.trace_id, \"032x\")\n record.otelTraceSampled = ctx.trace_flags.sampled\n if callable(LoggingInstrumentor._log_hook):\n try:\n LoggingInstrumentor._log_hook( # pylint: disable=E1102\n span, record\n )\n except Exception: # pylint: disable=W0703\n pass\n\n return record\n\n logging.setLogRecordFactory(record_factory)\n\n set_logging_format = kwargs.get(\n \"set_logging_format\",\n environ.get(OTEL_PYTHON_LOG_CORRELATION, \"false\").lower()\n == \"true\",\n )\n\n if set_logging_format:\n log_format = kwargs.get(\n \"logging_format\", environ.get(OTEL_PYTHON_LOG_FORMAT, None)\n )\n log_format = log_format or DEFAULT_LOGGING_FORMAT\n\n log_level = kwargs.get(\n \"log_level\", LEVELS.get(environ.get(OTEL_PYTHON_LOG_LEVEL))\n )\n log_level = log_level or logging.INFO\n\n logging.basicConfig(format=log_format, level=log_level)\n\n def _uninstrument(self, **kwargs):\n if LoggingInstrumentor._old_factory:\n logging.setLogRecordFactory(LoggingInstrumentor._old_factory)\n LoggingInstrumentor._old_factory = None\n", "path": "instrumentation/opentelemetry-instrumentation-logging/src/opentelemetry/instrumentation/logging/__init__.py"}]}
| 3,452 | 640 |
gh_patches_debug_57313
|
rasdani/github-patches
|
git_diff
|
vllm-project__vllm-3129
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[v0.3.3] Release Tracker
**ETA**: Feb 29th - Mar 1st
## Major changes
* StarCoder2 support
* Performance optimization and LoRA support for Gemma
* Performance optimization for MoE kernel
* 2/3/8-bit GPTQ support
* [Experimental] AWS Inferentia2 support
## PRs to be merged before the release
- [x] #2330 #2223
- [ ] ~~#2761~~
- [x] #2819
- [x] #3087 #3099
- [x] #3089
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/__init__.py`
Content:
```
1 """vLLM: a high-throughput and memory-efficient inference engine for LLMs"""
2
3 from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
4 from vllm.engine.async_llm_engine import AsyncLLMEngine
5 from vllm.engine.llm_engine import LLMEngine
6 from vllm.engine.ray_utils import initialize_cluster
7 from vllm.entrypoints.llm import LLM
8 from vllm.outputs import CompletionOutput, RequestOutput
9 from vllm.sampling_params import SamplingParams
10
11 __version__ = "0.3.2"
12
13 __all__ = [
14 "LLM",
15 "SamplingParams",
16 "RequestOutput",
17 "CompletionOutput",
18 "LLMEngine",
19 "EngineArgs",
20 "AsyncLLMEngine",
21 "AsyncEngineArgs",
22 "initialize_cluster",
23 ]
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/vllm/__init__.py b/vllm/__init__.py
--- a/vllm/__init__.py
+++ b/vllm/__init__.py
@@ -8,7 +8,7 @@
from vllm.outputs import CompletionOutput, RequestOutput
from vllm.sampling_params import SamplingParams
-__version__ = "0.3.2"
+__version__ = "0.3.3"
__all__ = [
"LLM",
|
{"golden_diff": "diff --git a/vllm/__init__.py b/vllm/__init__.py\n--- a/vllm/__init__.py\n+++ b/vllm/__init__.py\n@@ -8,7 +8,7 @@\n from vllm.outputs import CompletionOutput, RequestOutput\n from vllm.sampling_params import SamplingParams\n \n-__version__ = \"0.3.2\"\n+__version__ = \"0.3.3\"\n \n __all__ = [\n \"LLM\",\n", "issue": "[v0.3.3] Release Tracker\n**ETA**: Feb 29th - Mar 1st\r\n\r\n## Major changes\r\n\r\n* StarCoder2 support\r\n* Performance optimization and LoRA support for Gemma\r\n* Performance optimization for MoE kernel\r\n* 2/3/8-bit GPTQ support\r\n* [Experimental] AWS Inferentia2 support\r\n\r\n## PRs to be merged before the release\r\n\r\n- [x] #2330 #2223\r\n- [ ] ~~#2761~~\r\n- [x] #2819 \r\n- [x] #3087 #3099\r\n- [x] #3089 \n", "before_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.3.2\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}], "after_files": [{"content": "\"\"\"vLLM: a high-throughput and memory-efficient inference engine for LLMs\"\"\"\n\nfrom vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs\nfrom vllm.engine.async_llm_engine import AsyncLLMEngine\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.engine.ray_utils import initialize_cluster\nfrom vllm.entrypoints.llm import LLM\nfrom vllm.outputs import CompletionOutput, RequestOutput\nfrom vllm.sampling_params import SamplingParams\n\n__version__ = \"0.3.3\"\n\n__all__ = [\n \"LLM\",\n \"SamplingParams\",\n \"RequestOutput\",\n \"CompletionOutput\",\n \"LLMEngine\",\n \"EngineArgs\",\n \"AsyncLLMEngine\",\n \"AsyncEngineArgs\",\n \"initialize_cluster\",\n]\n", "path": "vllm/__init__.py"}]}
| 626 | 108 |
gh_patches_debug_24140
|
rasdani/github-patches
|
git_diff
|
mozmeao__snippets-service-1340
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stop pulling data from RedShift
Starting in version 72 (Jan 2020), Firefox Telemetry uses BigQuery instead of RedShift.
We currently pull data from both data sources for frequency capping and performance reports.
In about a year from now the usage of pre-72 versions will be limited and we will be able to remove the RedShift queries from the codebase.
- [x] Stop pulling for Freq Capped Jobs
- [x] Stop pulling Daily Data
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `snippets/base/etl.py`
Content:
```
1 import collections
2 import json
3
4 from urllib.parse import urlencode
5
6 from django.conf import settings
7 from django.db.transaction import atomic
8 from redash_dynamic_query import RedashDynamicQuery
9
10 from snippets.base.models import CHANNELS, DailyImpressions, JobDailyPerformance, Job
11
12
13 REDASH_QUERY_IDS = {
14 'redshift-job': 68135,
15 'bq-job': 68136,
16 'redshift-impressions': 68345,
17 'bq-impressions': 68341,
18 }
19
20 redash = RedashDynamicQuery(
21 endpoint=settings.REDASH_ENDPOINT,
22 apikey=settings.REDASH_API_KEY,
23 max_wait=settings.REDASH_MAX_WAIT)
24
25
26 def redash_source_url(query_id_or_name, **params):
27 query_id = REDASH_QUERY_IDS.get(query_id_or_name, query_id_or_name)
28 url = f'{settings.REDASH_ENDPOINT}/queries/{query_id}/source'
29 if params:
30 url += '?' + urlencode({f'p_{key}_{query_id}': value
31 for key, value in params.items()})
32 return url
33
34
35 def redash_rows(query_name, date):
36 query_id = REDASH_QUERY_IDS[query_name]
37 bind_data = {'date': str(date)}
38 result = redash.query(query_id, bind_data)
39 return result['query_result']['data']['rows']
40
41
42 def prosses_rows(rows, key='message_id'):
43 job_ids = [str(x) for x in Job.objects.all().values_list('id', flat=True)]
44 new_rows = []
45 for row in sorted(rows, key=lambda x: x[key]):
46 # Remove rows with invalid Job IDs
47 if row['message_id'] not in job_ids:
48 continue
49
50 # Redash uses {} instead of null
51 if row['event_context'] == '{}':
52 row['event_context'] = ''
53
54 # Sometimes data in Telemetry populate `event_context`, some
55 # other times it uses `additional_properties['value']` to
56 # place the event context. Extract information from both
57 # places to identify the event.
58 properties = json.loads(row.get('additional_properties', '{}'))
59 event = row['event_context'] or properties.get('value', '') or row['event']
60
61 if event in ['CLICK_BUTTON', 'CLICK']:
62 event = 'click'
63 elif event == 'IMPRESSION':
64 event = 'impression'
65 elif event == 'BLOCK':
66 event = 'block'
67 elif event == 'DISMISS':
68 event = 'dismiss'
69 elif event == 'scene1-button-learn-more':
70 event = 'go_to_scene2'
71 elif event in ['subscribe-success',
72 'subscribe-error',
73 'conversion-subscribe-activation']:
74 event = event.replace('-', '_')
75 else:
76 # Ignore invalid event
77 continue
78
79 row['event'] = event
80
81 # Normalize channel name, based on what kind of snippets they get.
82 channel = row['channel']
83 if not channel:
84 channel = 'release'
85 row['channel'] = next(
86 (item for item in CHANNELS if
87 channel.startswith(item)), 'release'
88 )
89
90 # Normalize country
91 country_code = row['country_code']
92 if country_code in ['ERROR', None]:
93 row['country_code'] = 'XX'
94
95 # Not needed anymore
96 row.pop('event_context', None)
97 row.pop('additional_properties', None)
98
99 new_rows.append(row)
100
101 # Aggregate counts of same events for the global count.
102 processed = collections.defaultdict(dict)
103 for row in new_rows:
104 event = row['event']
105 processed[row[key]][event] = processed[row[key]].get(event, 0) + row['counts']
106
107 detail = [{
108 'event': row['event'],
109 'channel': row['channel'],
110 'country': row['country_code'],
111 'counts': row['counts'],
112 }]
113
114 if not processed[row[key]].get('details'):
115 processed[row[key]]['details'] = detail
116 else:
117 for drow in processed[row[key]]['details']:
118 if ((drow['event'] == row['event'] and
119 drow['channel'] == row['channel'] and
120 drow['country'] == row['country_code'])):
121 drow['counts'] += row['counts']
122 break
123 else:
124 processed[row[key]]['details'] += detail
125
126 # Last pass for multi-scene snippets: Click events here refer to
127 # clicks of secondary links listed on the template that go to
128 # terms of services or additional information and are displayed
129 # in the small text below the input element. These do not count
130 # clicking on `Learn more` (i.e. going from scene 1 to scene 2)
131 # or the main Call To Action. The later is measured in
132 # `conversion_subscribe_activation` and this is the value which
133 # is important to us and thus we rename this to `clicks`.
134 for k, v in processed.items():
135 if 'conversion_subscribe_activation' in v:
136 processed[k]['other_click'] = processed[k].get('click', 0)
137 processed[k]['click'] = processed[k].pop('conversion_subscribe_activation')
138 for row in processed[k]['details']:
139 if row['event'] == 'click':
140 row['event'] = 'other_click'
141 elif row['event'] == 'conversion_subscribe_activation':
142 row['event'] = 'click'
143
144 return processed
145
146
147 def update_job_metrics(date):
148 rows = []
149 for query in ['redshift-job', 'bq-job']:
150 rows += redash_rows(query, date)
151
152 processed = prosses_rows(rows, key='message_id')
153 with atomic():
154 JobDailyPerformance.objects.filter(date=date).delete()
155 for job, data in processed.items():
156 JobDailyPerformance.objects.create(
157 date=date,
158 job=Job.objects.get(id=job),
159 **data
160 )
161 return len(processed) > 0
162
163
164 def update_impressions(date):
165 rows = []
166
167 for query in ['redshift-impressions', 'bq-impressions']:
168 rows += redash_rows(query, date)
169
170 details = []
171 for row in rows:
172 # Normalize channel name, based on what kind of snippets they get.
173 channel = row['channel']
174 if not channel:
175 channel = 'release'
176 channel = next(
177 (item for item in CHANNELS if
178 channel.startswith(item)), 'release'
179 )
180
181 # Aggregate counts of the same duration and the same channel.
182 for item in details:
183 if (item['channel'] == channel and item['duration'] == row['duration']):
184 item['counts'] += row['counts']
185 break
186 else:
187 details.append({
188 'channel': channel,
189 'duration': row['duration'],
190 'counts': row['counts'],
191 })
192
193 with atomic():
194 DailyImpressions.objects.filter(date=date).delete()
195 DailyImpressions.objects.create(
196 date=date,
197 details=details
198 )
199
200 return len(details)
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/snippets/base/etl.py b/snippets/base/etl.py
--- a/snippets/base/etl.py
+++ b/snippets/base/etl.py
@@ -11,10 +11,12 @@
REDASH_QUERY_IDS = {
- 'redshift-job': 68135,
'bq-job': 68136,
- 'redshift-impressions': 68345,
'bq-impressions': 68341,
+
+ # Not currently used but kept here for reference.
+ 'redshift-job': 68135,
+ 'redshift-impressions': 68345,
}
redash = RedashDynamicQuery(
@@ -145,10 +147,7 @@
def update_job_metrics(date):
- rows = []
- for query in ['redshift-job', 'bq-job']:
- rows += redash_rows(query, date)
-
+ rows = redash_rows('bq-job', date)
processed = prosses_rows(rows, key='message_id')
with atomic():
JobDailyPerformance.objects.filter(date=date).delete()
@@ -162,11 +161,7 @@
def update_impressions(date):
- rows = []
-
- for query in ['redshift-impressions', 'bq-impressions']:
- rows += redash_rows(query, date)
-
+ rows = redash_rows('bq-impressions', date)
details = []
for row in rows:
# Normalize channel name, based on what kind of snippets they get.
|
{"golden_diff": "diff --git a/snippets/base/etl.py b/snippets/base/etl.py\n--- a/snippets/base/etl.py\n+++ b/snippets/base/etl.py\n@@ -11,10 +11,12 @@\n \n \n REDASH_QUERY_IDS = {\n- 'redshift-job': 68135,\n 'bq-job': 68136,\n- 'redshift-impressions': 68345,\n 'bq-impressions': 68341,\n+\n+ # Not currently used but kept here for reference.\n+ 'redshift-job': 68135,\n+ 'redshift-impressions': 68345,\n }\n \n redash = RedashDynamicQuery(\n@@ -145,10 +147,7 @@\n \n \n def update_job_metrics(date):\n- rows = []\n- for query in ['redshift-job', 'bq-job']:\n- rows += redash_rows(query, date)\n-\n+ rows = redash_rows('bq-job', date)\n processed = prosses_rows(rows, key='message_id')\n with atomic():\n JobDailyPerformance.objects.filter(date=date).delete()\n@@ -162,11 +161,7 @@\n \n \n def update_impressions(date):\n- rows = []\n-\n- for query in ['redshift-impressions', 'bq-impressions']:\n- rows += redash_rows(query, date)\n-\n+ rows = redash_rows('bq-impressions', date)\n details = []\n for row in rows:\n # Normalize channel name, based on what kind of snippets they get.\n", "issue": "Stop pulling data from RedShift \nStarting in version 72 (Jan 2020), Firefox Telemetry uses BigQuery instead of RedShift. \r\n\r\nWe currently pull data from both data sources for frequency capping and performance reports. \r\n\r\nIn about a year from now the usage of pre-72 versions will be limited and we will be able to remove the RedShift queries from the codebase.\r\n\r\n- [x] Stop pulling for Freq Capped Jobs\r\n- [x] Stop pulling Daily Data\n", "before_files": [{"content": "import collections\nimport json\n\nfrom urllib.parse import urlencode\n\nfrom django.conf import settings\nfrom django.db.transaction import atomic\nfrom redash_dynamic_query import RedashDynamicQuery\n\nfrom snippets.base.models import CHANNELS, DailyImpressions, JobDailyPerformance, Job\n\n\nREDASH_QUERY_IDS = {\n 'redshift-job': 68135,\n 'bq-job': 68136,\n 'redshift-impressions': 68345,\n 'bq-impressions': 68341,\n}\n\nredash = RedashDynamicQuery(\n endpoint=settings.REDASH_ENDPOINT,\n apikey=settings.REDASH_API_KEY,\n max_wait=settings.REDASH_MAX_WAIT)\n\n\ndef redash_source_url(query_id_or_name, **params):\n query_id = REDASH_QUERY_IDS.get(query_id_or_name, query_id_or_name)\n url = f'{settings.REDASH_ENDPOINT}/queries/{query_id}/source'\n if params:\n url += '?' + urlencode({f'p_{key}_{query_id}': value\n for key, value in params.items()})\n return url\n\n\ndef redash_rows(query_name, date):\n query_id = REDASH_QUERY_IDS[query_name]\n bind_data = {'date': str(date)}\n result = redash.query(query_id, bind_data)\n return result['query_result']['data']['rows']\n\n\ndef prosses_rows(rows, key='message_id'):\n job_ids = [str(x) for x in Job.objects.all().values_list('id', flat=True)]\n new_rows = []\n for row in sorted(rows, key=lambda x: x[key]):\n # Remove rows with invalid Job IDs\n if row['message_id'] not in job_ids:\n continue\n\n # Redash uses {} instead of null\n if row['event_context'] == '{}':\n row['event_context'] = ''\n\n # Sometimes data in Telemetry populate `event_context`, some\n # other times it uses `additional_properties['value']` to\n # place the event context. 
Extract information from both\n # places to identify the event.\n properties = json.loads(row.get('additional_properties', '{}'))\n event = row['event_context'] or properties.get('value', '') or row['event']\n\n if event in ['CLICK_BUTTON', 'CLICK']:\n event = 'click'\n elif event == 'IMPRESSION':\n event = 'impression'\n elif event == 'BLOCK':\n event = 'block'\n elif event == 'DISMISS':\n event = 'dismiss'\n elif event == 'scene1-button-learn-more':\n event = 'go_to_scene2'\n elif event in ['subscribe-success',\n 'subscribe-error',\n 'conversion-subscribe-activation']:\n event = event.replace('-', '_')\n else:\n # Ignore invalid event\n continue\n\n row['event'] = event\n\n # Normalize channel name, based on what kind of snippets they get.\n channel = row['channel']\n if not channel:\n channel = 'release'\n row['channel'] = next(\n (item for item in CHANNELS if\n channel.startswith(item)), 'release'\n )\n\n # Normalize country\n country_code = row['country_code']\n if country_code in ['ERROR', None]:\n row['country_code'] = 'XX'\n\n # Not needed anymore\n row.pop('event_context', None)\n row.pop('additional_properties', None)\n\n new_rows.append(row)\n\n # Aggregate counts of same events for the global count.\n processed = collections.defaultdict(dict)\n for row in new_rows:\n event = row['event']\n processed[row[key]][event] = processed[row[key]].get(event, 0) + row['counts']\n\n detail = [{\n 'event': row['event'],\n 'channel': row['channel'],\n 'country': row['country_code'],\n 'counts': row['counts'],\n }]\n\n if not processed[row[key]].get('details'):\n processed[row[key]]['details'] = detail\n else:\n for drow in processed[row[key]]['details']:\n if ((drow['event'] == row['event'] and\n drow['channel'] == row['channel'] and\n drow['country'] == row['country_code'])):\n drow['counts'] += row['counts']\n break\n else:\n processed[row[key]]['details'] += detail\n\n # Last pass for multi-scene snippets: Click events here refer to\n # clicks of secondary links listed on the template that go to\n # terms of services or additional information and are displayed\n # in the small text below the input element. These do not count\n # clicking on `Learn more` (i.e. going from scene 1 to scene 2)\n # or the main Call To Action. 
The later is measured in\n # `conversion_subscribe_activation` and this is the value which\n # is important to us and thus we rename this to `clicks`.\n for k, v in processed.items():\n if 'conversion_subscribe_activation' in v:\n processed[k]['other_click'] = processed[k].get('click', 0)\n processed[k]['click'] = processed[k].pop('conversion_subscribe_activation')\n for row in processed[k]['details']:\n if row['event'] == 'click':\n row['event'] = 'other_click'\n elif row['event'] == 'conversion_subscribe_activation':\n row['event'] = 'click'\n\n return processed\n\n\ndef update_job_metrics(date):\n rows = []\n for query in ['redshift-job', 'bq-job']:\n rows += redash_rows(query, date)\n\n processed = prosses_rows(rows, key='message_id')\n with atomic():\n JobDailyPerformance.objects.filter(date=date).delete()\n for job, data in processed.items():\n JobDailyPerformance.objects.create(\n date=date,\n job=Job.objects.get(id=job),\n **data\n )\n return len(processed) > 0\n\n\ndef update_impressions(date):\n rows = []\n\n for query in ['redshift-impressions', 'bq-impressions']:\n rows += redash_rows(query, date)\n\n details = []\n for row in rows:\n # Normalize channel name, based on what kind of snippets they get.\n channel = row['channel']\n if not channel:\n channel = 'release'\n channel = next(\n (item for item in CHANNELS if\n channel.startswith(item)), 'release'\n )\n\n # Aggregate counts of the same duration and the same channel.\n for item in details:\n if (item['channel'] == channel and item['duration'] == row['duration']):\n item['counts'] += row['counts']\n break\n else:\n details.append({\n 'channel': channel,\n 'duration': row['duration'],\n 'counts': row['counts'],\n })\n\n with atomic():\n DailyImpressions.objects.filter(date=date).delete()\n DailyImpressions.objects.create(\n date=date,\n details=details\n )\n\n return len(details)\n", "path": "snippets/base/etl.py"}], "after_files": [{"content": "import collections\nimport json\n\nfrom urllib.parse import urlencode\n\nfrom django.conf import settings\nfrom django.db.transaction import atomic\nfrom redash_dynamic_query import RedashDynamicQuery\n\nfrom snippets.base.models import CHANNELS, DailyImpressions, JobDailyPerformance, Job\n\n\nREDASH_QUERY_IDS = {\n 'bq-job': 68136,\n 'bq-impressions': 68341,\n\n # Not currently used but kept here for reference.\n 'redshift-job': 68135,\n 'redshift-impressions': 68345,\n}\n\nredash = RedashDynamicQuery(\n endpoint=settings.REDASH_ENDPOINT,\n apikey=settings.REDASH_API_KEY,\n max_wait=settings.REDASH_MAX_WAIT)\n\n\ndef redash_source_url(query_id_or_name, **params):\n query_id = REDASH_QUERY_IDS.get(query_id_or_name, query_id_or_name)\n url = f'{settings.REDASH_ENDPOINT}/queries/{query_id}/source'\n if params:\n url += '?' 
+ urlencode({f'p_{key}_{query_id}': value\n for key, value in params.items()})\n return url\n\n\ndef redash_rows(query_name, date):\n query_id = REDASH_QUERY_IDS[query_name]\n bind_data = {'date': str(date)}\n result = redash.query(query_id, bind_data)\n return result['query_result']['data']['rows']\n\n\ndef prosses_rows(rows, key='message_id'):\n job_ids = [str(x) for x in Job.objects.all().values_list('id', flat=True)]\n new_rows = []\n for row in sorted(rows, key=lambda x: x[key]):\n # Remove rows with invalid Job IDs\n if row['message_id'] not in job_ids:\n continue\n\n # Redash uses {} instead of null\n if row['event_context'] == '{}':\n row['event_context'] = ''\n\n # Sometimes data in Telemetry populate `event_context`, some\n # other times it uses `additional_properties['value']` to\n # place the event context. Extract information from both\n # places to identify the event.\n properties = json.loads(row.get('additional_properties', '{}'))\n event = row['event_context'] or properties.get('value', '') or row['event']\n\n if event in ['CLICK_BUTTON', 'CLICK']:\n event = 'click'\n elif event == 'IMPRESSION':\n event = 'impression'\n elif event == 'BLOCK':\n event = 'block'\n elif event == 'DISMISS':\n event = 'dismiss'\n elif event == 'scene1-button-learn-more':\n event = 'go_to_scene2'\n elif event in ['subscribe-success',\n 'subscribe-error',\n 'conversion-subscribe-activation']:\n event = event.replace('-', '_')\n else:\n # Ignore invalid event\n continue\n\n row['event'] = event\n\n # Normalize channel name, based on what kind of snippets they get.\n channel = row['channel']\n if not channel:\n channel = 'release'\n row['channel'] = next(\n (item for item in CHANNELS if\n channel.startswith(item)), 'release'\n )\n\n # Normalize country\n country_code = row['country_code']\n if country_code in ['ERROR', None]:\n row['country_code'] = 'XX'\n\n # Not needed anymore\n row.pop('event_context', None)\n row.pop('additional_properties', None)\n\n new_rows.append(row)\n\n # Aggregate counts of same events for the global count.\n processed = collections.defaultdict(dict)\n for row in new_rows:\n event = row['event']\n processed[row[key]][event] = processed[row[key]].get(event, 0) + row['counts']\n\n detail = [{\n 'event': row['event'],\n 'channel': row['channel'],\n 'country': row['country_code'],\n 'counts': row['counts'],\n }]\n\n if not processed[row[key]].get('details'):\n processed[row[key]]['details'] = detail\n else:\n for drow in processed[row[key]]['details']:\n if ((drow['event'] == row['event'] and\n drow['channel'] == row['channel'] and\n drow['country'] == row['country_code'])):\n drow['counts'] += row['counts']\n break\n else:\n processed[row[key]]['details'] += detail\n\n # Last pass for multi-scene snippets: Click events here refer to\n # clicks of secondary links listed on the template that go to\n # terms of services or additional information and are displayed\n # in the small text below the input element. These do not count\n # clicking on `Learn more` (i.e. going from scene 1 to scene 2)\n # or the main Call To Action. 
The later is measured in\n # `conversion_subscribe_activation` and this is the value which\n # is important to us and thus we rename this to `clicks`.\n for k, v in processed.items():\n if 'conversion_subscribe_activation' in v:\n processed[k]['other_click'] = processed[k].get('click', 0)\n processed[k]['click'] = processed[k].pop('conversion_subscribe_activation')\n for row in processed[k]['details']:\n if row['event'] == 'click':\n row['event'] = 'other_click'\n elif row['event'] == 'conversion_subscribe_activation':\n row['event'] = 'click'\n\n return processed\n\n\ndef update_job_metrics(date):\n rows = redash_rows('bq-job', date)\n processed = prosses_rows(rows, key='message_id')\n with atomic():\n JobDailyPerformance.objects.filter(date=date).delete()\n for job, data in processed.items():\n JobDailyPerformance.objects.create(\n date=date,\n job=Job.objects.get(id=job),\n **data\n )\n return len(processed) > 0\n\n\ndef update_impressions(date):\n rows = redash_rows('bq-impressions', date)\n details = []\n for row in rows:\n # Normalize channel name, based on what kind of snippets they get.\n channel = row['channel']\n if not channel:\n channel = 'release'\n channel = next(\n (item for item in CHANNELS if\n channel.startswith(item)), 'release'\n )\n\n # Aggregate counts of the same duration and the same channel.\n for item in details:\n if (item['channel'] == channel and item['duration'] == row['duration']):\n item['counts'] += row['counts']\n break\n else:\n details.append({\n 'channel': channel,\n 'duration': row['duration'],\n 'counts': row['counts'],\n })\n\n with atomic():\n DailyImpressions.objects.filter(date=date).delete()\n DailyImpressions.objects.create(\n date=date,\n details=details\n )\n\n return len(details)\n", "path": "snippets/base/etl.py"}]}
| 2,402 | 363 |
gh_patches_debug_33139 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-2535 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Baggage span processor - key predicate
This issue is to track adding a method of selecting what baggage key entries should be copied.
Feedback in the JS contrib PR was to allow a user-provided predicate function. This puts the responsibility on the user to ensure sensitive baggage keys are not copied while also not prescribing how that is determined.
- https://github.com/open-telemetry/opentelemetry-js-contrib/issues/2166
We had a similar feedback in the .NET contrib project but thought it was more complicated than just using a set of prefixes so created an issue to continue the discussion. The plain processor that copies all baggage entries (like using `*` in your example) is likely to be accepted first.
- https://github.com/open-telemetry/opentelemetry-dotnet-contrib/issues/1695
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Optional
16
17 from opentelemetry.baggage import get_all as get_all_baggage
18 from opentelemetry.context import Context
19 from opentelemetry.sdk.trace.export import SpanProcessor
20 from opentelemetry.trace import Span
21
22
23 class BaggageSpanProcessor(SpanProcessor):
24 """
25 The BaggageSpanProcessor reads entries stored in Baggage
26 from the parent context and adds the baggage entries' keys and
27 values to the span as attributes on span start.
28
29 Add this span processor to a tracer provider.
30
31 Keys and values added to Baggage will appear on subsequent child
32 spans for a trace within this service *and* be propagated to external
33 services in accordance with any configured propagation formats
34 configured. If the external services also have a Baggage span
35 processor, the keys and values will appear in those child spans as
36 well.
37
38 ⚠ Warning ⚠️
39
40 Do not put sensitive information in Baggage.
41
42 To repeat: a consequence of adding data to Baggage is that the keys and
43 values will appear in all outgoing HTTP headers from the application.
44
45 """
46
47 def __init__(self) -> None:
48 pass
49
50 def on_start(
51 self, span: "Span", parent_context: Optional[Context] = None
52 ) -> None:
53 baggage = get_all_baggage(parent_context)
54 for key, value in baggage.items():
55 span.set_attribute(key, value)
56
```
Path: `processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint: disable=import-error
16
17 from .processor import BaggageSpanProcessor
18 from .version import __version__
19
20 __all__ = ["BaggageSpanProcessor", "__version__"]
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py
--- a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py
+++ b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py
@@ -14,7 +14,7 @@
# pylint: disable=import-error
-from .processor import BaggageSpanProcessor
+from .processor import ALLOW_ALL_BAGGAGE_KEYS, BaggageSpanProcessor
from .version import __version__
-__all__ = ["BaggageSpanProcessor", "__version__"]
+__all__ = ["ALLOW_ALL_BAGGAGE_KEYS", "BaggageSpanProcessor", "__version__"]
diff --git a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py
--- a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py
+++ b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py
@@ -12,13 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from typing import Optional
+from typing import Callable, Optional
from opentelemetry.baggage import get_all as get_all_baggage
from opentelemetry.context import Context
from opentelemetry.sdk.trace.export import SpanProcessor
from opentelemetry.trace import Span
+# A BaggageKeyPredicate is a function that takes a baggage key and returns a boolean
+BaggageKeyPredicateT = Callable[[str], bool]
+
+# A BaggageKeyPredicate that always returns True, allowing all baggage keys to be added to spans
+ALLOW_ALL_BAGGAGE_KEYS: BaggageKeyPredicateT = lambda _: True
+
class BaggageSpanProcessor(SpanProcessor):
"""
@@ -44,12 +50,13 @@
"""
- def __init__(self) -> None:
- pass
+ def __init__(self, baggage_key_predicate: BaggageKeyPredicateT) -> None:
+ self._baggage_key_predicate = baggage_key_predicate
def on_start(
self, span: "Span", parent_context: Optional[Context] = None
) -> None:
baggage = get_all_baggage(parent_context)
for key, value in baggage.items():
- span.set_attribute(key, value)
+ if self._baggage_key_predicate(key):
+ span.set_attribute(key, value)
|
{"golden_diff": "diff --git a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py\n--- a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py\n+++ b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py\n@@ -14,7 +14,7 @@\n \n # pylint: disable=import-error\n \n-from .processor import BaggageSpanProcessor\n+from .processor import ALLOW_ALL_BAGGAGE_KEYS, BaggageSpanProcessor\n from .version import __version__\n \n-__all__ = [\"BaggageSpanProcessor\", \"__version__\"]\n+__all__ = [\"ALLOW_ALL_BAGGAGE_KEYS\", \"BaggageSpanProcessor\", \"__version__\"]\ndiff --git a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py\n--- a/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py\n+++ b/processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py\n@@ -12,13 +12,19 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-from typing import Optional\n+from typing import Callable, Optional\n \n from opentelemetry.baggage import get_all as get_all_baggage\n from opentelemetry.context import Context\n from opentelemetry.sdk.trace.export import SpanProcessor\n from opentelemetry.trace import Span\n \n+# A BaggageKeyPredicate is a function that takes a baggage key and returns a boolean\n+BaggageKeyPredicateT = Callable[[str], bool]\n+\n+# A BaggageKeyPredicate that always returns True, allowing all baggage keys to be added to spans\n+ALLOW_ALL_BAGGAGE_KEYS: BaggageKeyPredicateT = lambda _: True\n+\n \n class BaggageSpanProcessor(SpanProcessor):\n \"\"\"\n@@ -44,12 +50,13 @@\n \n \"\"\"\n \n- def __init__(self) -> None:\n- pass\n+ def __init__(self, baggage_key_predicate: BaggageKeyPredicateT) -> None:\n+ self._baggage_key_predicate = baggage_key_predicate\n \n def on_start(\n self, span: \"Span\", parent_context: Optional[Context] = None\n ) -> None:\n baggage = get_all_baggage(parent_context)\n for key, value in baggage.items():\n- span.set_attribute(key, value)\n+ if self._baggage_key_predicate(key):\n+ span.set_attribute(key, value)\n", "issue": "Baggage span processor - key predicate\nThis issue is to track adding a method of selecting what baggage key entries should be copied.\r\n\r\nFeedback in the JS contrib PR was to allow a user-provided predicate function. This puts the responsibility on the user to ensure sensitive baggage keys are not copied while also not prescribing how that is determined.\r\n- https://github.com/open-telemetry/opentelemetry-js-contrib/issues/2166\r\n\r\n\r\nWe had a similar feedback in the .NET contrib project but thought it was more complicated than just using a set of prefixes so created an issue to continue the discussion. 
The plain processor that copies all baggage entries (like using `*` in your example) is likely to be accepted first.\r\n- https://github.com/open-telemetry/opentelemetry-dotnet-contrib/issues/1695\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Optional\n\nfrom opentelemetry.baggage import get_all as get_all_baggage\nfrom opentelemetry.context import Context\nfrom opentelemetry.sdk.trace.export import SpanProcessor\nfrom opentelemetry.trace import Span\n\n\nclass BaggageSpanProcessor(SpanProcessor):\n \"\"\"\n The BaggageSpanProcessor reads entries stored in Baggage\n from the parent context and adds the baggage entries' keys and\n values to the span as attributes on span start.\n\n Add this span processor to a tracer provider.\n\n Keys and values added to Baggage will appear on subsequent child\n spans for a trace within this service *and* be propagated to external\n services in accordance with any configured propagation formats\n configured. If the external services also have a Baggage span\n processor, the keys and values will appear in those child spans as\n well.\n\n \u26a0 Warning \u26a0\ufe0f\n\n Do not put sensitive information in Baggage.\n\n To repeat: a consequence of adding data to Baggage is that the keys and\n values will appear in all outgoing HTTP headers from the application.\n\n \"\"\"\n\n def __init__(self) -> None:\n pass\n\n def on_start(\n self, span: \"Span\", parent_context: Optional[Context] = None\n ) -> None:\n baggage = get_all_baggage(parent_context)\n for key, value in baggage.items():\n span.set_attribute(key, value)\n", "path": "processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=import-error\n\nfrom .processor import BaggageSpanProcessor\nfrom .version import __version__\n\n__all__ = [\"BaggageSpanProcessor\", \"__version__\"]\n", "path": "processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# 
distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Callable, Optional\n\nfrom opentelemetry.baggage import get_all as get_all_baggage\nfrom opentelemetry.context import Context\nfrom opentelemetry.sdk.trace.export import SpanProcessor\nfrom opentelemetry.trace import Span\n\n# A BaggageKeyPredicate is a function that takes a baggage key and returns a boolean\nBaggageKeyPredicateT = Callable[[str], bool]\n\n# A BaggageKeyPredicate that always returns True, allowing all baggage keys to be added to spans\nALLOW_ALL_BAGGAGE_KEYS: BaggageKeyPredicateT = lambda _: True\n\n\nclass BaggageSpanProcessor(SpanProcessor):\n \"\"\"\n The BaggageSpanProcessor reads entries stored in Baggage\n from the parent context and adds the baggage entries' keys and\n values to the span as attributes on span start.\n\n Add this span processor to a tracer provider.\n\n Keys and values added to Baggage will appear on subsequent child\n spans for a trace within this service *and* be propagated to external\n services in accordance with any configured propagation formats\n configured. If the external services also have a Baggage span\n processor, the keys and values will appear in those child spans as\n well.\n\n \u26a0 Warning \u26a0\ufe0f\n\n Do not put sensitive information in Baggage.\n\n To repeat: a consequence of adding data to Baggage is that the keys and\n values will appear in all outgoing HTTP headers from the application.\n\n \"\"\"\n\n def __init__(self, baggage_key_predicate: BaggageKeyPredicateT) -> None:\n self._baggage_key_predicate = baggage_key_predicate\n\n def on_start(\n self, span: \"Span\", parent_context: Optional[Context] = None\n ) -> None:\n baggage = get_all_baggage(parent_context)\n for key, value in baggage.items():\n if self._baggage_key_predicate(key):\n span.set_attribute(key, value)\n", "path": "processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/processor.py"}, {"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=import-error\n\nfrom .processor import ALLOW_ALL_BAGGAGE_KEYS, BaggageSpanProcessor\nfrom .version import __version__\n\n__all__ = [\"ALLOW_ALL_BAGGAGE_KEYS\", \"BaggageSpanProcessor\", \"__version__\"]\n", "path": "processor/opentelemetry-processor-baggage/src/opentelemetry/processor/baggage/__init__.py"}]}
| 1,219 | 617 |
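For orientation, here is a minimal usage sketch of the predicate-based `BaggageSpanProcessor` patched in the row above. It assumes the package layout shown there (`opentelemetry.processor.baggage`) together with the stock OpenTelemetry SDK `TracerProvider`; the `myapp.` prefix predicate is an illustrative choice, not something the patch prescribes.

```python
# Sketch only: wiring the patched BaggageSpanProcessor into a tracer provider.
from opentelemetry.processor.baggage import ALLOW_ALL_BAGGAGE_KEYS, BaggageSpanProcessor
from opentelemetry.sdk.trace import TracerProvider

# Either predicate style works; a real application registers just one of them.
copy_all = BaggageSpanProcessor(ALLOW_ALL_BAGGAGE_KEYS)                      # old copy-everything behaviour
copy_prefixed = BaggageSpanProcessor(lambda key: key.startswith("myapp."))   # user-controlled allow-list

provider = TracerProvider()
provider.add_span_processor(copy_prefixed)
```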
gh_patches_debug_12860 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-297 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DISCUSSION: Should dataset ID be set on datastore key?
This question came up in the review in #282 while trying to define the "correct" behavior of `datastore.Key.__eq__`.
The only remaining use of `Key._dataset_id` is in [`to_protobuf`](https://github.com/GoogleCloudPlatform/gcloud-python/blob/b6d3e74a48e8554804ea3d33f53385bbbdb5c4b7/gcloud/datastore/key.py#L53) but #121 seems to indicate that the dataset ID is not needed on a `Key`.
ISTM we should just remove `_dataset_id` from the `Key` class, even though it is returned in the protobuf after an entity is stored/retrieved. @pcostell WDYT?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gcloud/datastore/key.py`
Content:
```
1 """Create / interact with gcloud datastore keys."""
2
3 import copy
4 from itertools import izip
5
6 from gcloud.datastore import datastore_v1_pb2 as datastore_pb
7
8
9 class Key(object):
10 """An immutable representation of a datastore Key.
11
12 .. automethod:: __init__
13 """
14
15 def __init__(self, path=None, namespace=None, dataset_id=None):
16 """Constructor / initializer for a key.
17
18 :type namespace: :class:`str`
19 :param namespace: A namespace identifier for the key.
20
21 :type path: sequence of dicts
22 :param path: Each dict must have keys 'kind' (a string) and optionally
23 'name' (a string) or 'id' (an integer).
24
25 :type dataset_id: string
26 :param dataset: The dataset ID assigned by back-end for the key.
27 Leave as None for newly-created keys.
28 """
29 self._path = path or [{'kind': ''}]
30 self._namespace = namespace
31 self._dataset_id = dataset_id
32
33 def _clone(self):
34 """Duplicates the Key.
35
36 We make a shallow copy of the :class:`gcloud.datastore.dataset.Dataset`
37 because it holds a reference an authenticated connection,
38 which we don't want to lose.
39
40 :rtype: :class:`gcloud.datastore.key.Key`
41 :returns: a new `Key` instance
42 """
43 return copy.deepcopy(self)
44
45 def to_protobuf(self):
46 """Return a protobuf corresponding to the key.
47
48 :rtype: :class:`gcloud.datastore.datastore_v1_pb2.Key`
49 :returns: The Protobuf representing the key.
50 """
51 key = datastore_pb.Key()
52
53 if self._dataset_id is not None:
54 key.partition_id.dataset_id = self._dataset_id
55
56 if self._namespace:
57 key.partition_id.namespace = self._namespace
58
59 for item in self.path():
60 element = key.path_element.add()
61 if 'kind' in item:
62 element.kind = item['kind']
63 if 'id' in item:
64 element.id = item['id']
65 if 'name' in item:
66 element.name = item['name']
67
68 return key
69
70 @classmethod
71 def from_path(cls, *args, **kwargs):
72 """Factory method for creating a key based on a path.
73
74 :type args: :class:`tuple`
75 :param args: sequence of even length, where the first of each pair is a
76 string representing the 'kind' of the path element, and
77 the second of the pair is either a string (for the path
78 element's name) or an integer (for its id).
79
80 :type kwargs: :class:`dict`
81 :param kwargs: Other named parameters which can be passed to
82 :func:`Key.__init__`.
83
84 :rtype: :class:`gcloud.datastore.key.Key`
85 :returns: a new :class:`Key` instance
86 """
87 if len(args) % 2:
88 raise ValueError('Must pass an even number of args.')
89
90 path = []
91 items = iter(args)
92
93 for kind, id_or_name in izip(items, items):
94 entry = {'kind': kind}
95 if isinstance(id_or_name, basestring):
96 entry['name'] = id_or_name
97 else:
98 entry['id'] = id_or_name
99 path.append(entry)
100
101 kwargs['path'] = path
102 return cls(**kwargs)
103
104 def is_partial(self):
105 """Boolean test: is the key fully mapped onto a backend entity?
106
107 :rtype: :class:`bool`
108 :returns: True if the last element of the key's path does not have
109 an 'id' or a 'name'.
110 """
111 return self.id_or_name() is None
112
113 def namespace(self, namespace=None):
114 """Namespace setter / getter.
115
116 :type namespace: :class:`str`
117 :param namespace: A namespace identifier for the key.
118
119 :rtype: :class:`Key` (for setter); or :class:`str` (for getter)
120 :returns: a new key, cloned from self., with the given namespace
121 (setter); or self's namespace (getter).
122 """
123 if namespace:
124 clone = self._clone()
125 clone._namespace = namespace
126 return clone
127 else:
128 return self._namespace
129
130 def path(self, path=None):
131 """Path setter / getter.
132
133 :type path: sequence of dicts
134 :param path: Each dict must have keys 'kind' (a string) and optionally
135 'name' (a string) or 'id' (an integer).
136
137 :rtype: :class:`Key` (for setter); or :class:`str` (for getter)
138 :returns: a new key, cloned from self., with the given path (setter);
139 or self's path (getter).
140 """
141 if path:
142 clone = self._clone()
143 clone._path = path
144 return clone
145 else:
146 return self._path
147
148 def kind(self, kind=None):
149 """Kind setter / getter. Based on the last element of path.
150
151 :type kind: :class:`str`
152 :param kind: The new kind for the key.
153
154 :rtype: :class:`Key` (for setter); or :class:`str` (for getter)
155 :returns: a new key, cloned from self., with the given kind (setter);
156 or self's kind (getter).
157 """
158 if kind:
159 clone = self._clone()
160 clone._path[-1]['kind'] = kind
161 return clone
162 elif self.path():
163 return self._path[-1]['kind']
164
165 def id(self, id_to_set=None):
166 """ID setter / getter. Based on the last element of path.
167
168 :type id_to_set: :class:`int`
169 :param id_to_set: The new ID for the key.
170
171 :rtype: :class:`Key` (for setter); or :class:`int` (for getter)
172 :returns: a new key, cloned from self., with the given id (setter);
173 or self's id (getter).
174 """
175 if id_to_set:
176 clone = self._clone()
177 clone._path[-1]['id'] = id_to_set
178 return clone
179 elif self.path():
180 return self._path[-1].get('id')
181
182 def name(self, name=None):
183 """Name setter / getter. Based on the last element of path.
184
185 :type kind: :class:`str`
186 :param kind: The new name for the key.
187
188 :rtype: :class:`Key` (for setter); or :class:`str` (for getter)
189 :returns: a new key, cloned from self., with the given name (setter);
190 or self's name (getter).
191 """
192 if name:
193 clone = self._clone()
194 clone._path[-1]['name'] = name
195 return clone
196 elif self.path():
197 return self._path[-1].get('name')
198
199 def id_or_name(self):
200 """Getter. Based on the last element of path.
201
202 :rtype: :class:`int` (if 'id' is set); or :class:`str` (the 'name')
203 :returns: True if the last element of the key's path has either an 'id'
204 or a 'name'.
205 """
206 return self.id() or self.name()
207
208 def parent(self):
209 """Getter: return a new key for the next highest element in path.
210
211 :rtype: :class:`gcloud.datastore.key.Key`
212 :returns: a new `Key` instance, whose path consists of all but the last
213 element of self's path. If self has only one path element,
214 return None.
215 """
216 if len(self._path) <= 1:
217 return None
218 return self.path(self.path()[:-1])
219
220 def __repr__(self):
221 return '<Key%s>' % self.path()
222
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gcloud/datastore/key.py b/gcloud/datastore/key.py
--- a/gcloud/datastore/key.py
+++ b/gcloud/datastore/key.py
@@ -24,7 +24,15 @@
:type dataset_id: string
:param dataset: The dataset ID assigned by back-end for the key.
- Leave as None for newly-created keys.
+
+ .. note::
+
+ The key's ``_dataset_id`` field must be None for keys created
+ by application code. The
+ :func:`gcloud.datastore.helpers.key_from_protobuf` factory
+ will be set the field to an appropriate value for keys returned
+ from the datastore backend. The application **must** treat any
+ value set by the back-end as opaque.
"""
self._path = path or [{'kind': ''}]
self._namespace = namespace
|
{"golden_diff": "diff --git a/gcloud/datastore/key.py b/gcloud/datastore/key.py\n--- a/gcloud/datastore/key.py\n+++ b/gcloud/datastore/key.py\n@@ -24,7 +24,15 @@\n \n :type dataset_id: string\n :param dataset: The dataset ID assigned by back-end for the key.\n- Leave as None for newly-created keys.\n+\n+ .. note::\n+\n+ The key's ``_dataset_id`` field must be None for keys created\n+ by application code. The\n+ :func:`gcloud.datastore.helpers.key_from_protobuf` factory\n+ will be set the field to an appropriate value for keys returned\n+ from the datastore backend. The application **must** treat any\n+ value set by the back-end as opaque.\n \"\"\"\n self._path = path or [{'kind': ''}]\n self._namespace = namespace\n", "issue": "DISCUSSION: Should dataset ID be set on datastore key?\nThis question came up in the review in #282 while trying to define the \"correct\" behavior of `datastore.Key.__eq__`.\n\nThe only remaining use of `Key._dataset_id` is in [`to_protobuf`](https://github.com/GoogleCloudPlatform/gcloud-python/blob/b6d3e74a48e8554804ea3d33f53385bbbdb5c4b7/gcloud/datastore/key.py#L53) but #121 seems to indicate that the dataset ID is not needed on a `Key`.\n\nISTM we should just remove `_dataset_id` from the `Key` class, even though it is returned in the protobuf after an entity is stored/retrieved. @pcostell WDYT?\n\n", "before_files": [{"content": "\"\"\"Create / interact with gcloud datastore keys.\"\"\"\n\nimport copy\nfrom itertools import izip\n\nfrom gcloud.datastore import datastore_v1_pb2 as datastore_pb\n\n\nclass Key(object):\n \"\"\"An immutable representation of a datastore Key.\n\n .. automethod:: __init__\n \"\"\"\n\n def __init__(self, path=None, namespace=None, dataset_id=None):\n \"\"\"Constructor / initializer for a key.\n\n :type namespace: :class:`str`\n :param namespace: A namespace identifier for the key.\n\n :type path: sequence of dicts\n :param path: Each dict must have keys 'kind' (a string) and optionally\n 'name' (a string) or 'id' (an integer).\n\n :type dataset_id: string\n :param dataset: The dataset ID assigned by back-end for the key.\n Leave as None for newly-created keys.\n \"\"\"\n self._path = path or [{'kind': ''}]\n self._namespace = namespace\n self._dataset_id = dataset_id\n\n def _clone(self):\n \"\"\"Duplicates the Key.\n\n We make a shallow copy of the :class:`gcloud.datastore.dataset.Dataset`\n because it holds a reference an authenticated connection,\n which we don't want to lose.\n\n :rtype: :class:`gcloud.datastore.key.Key`\n :returns: a new `Key` instance\n \"\"\"\n return copy.deepcopy(self)\n\n def to_protobuf(self):\n \"\"\"Return a protobuf corresponding to the key.\n\n :rtype: :class:`gcloud.datastore.datastore_v1_pb2.Key`\n :returns: The Protobuf representing the key.\n \"\"\"\n key = datastore_pb.Key()\n\n if self._dataset_id is not None:\n key.partition_id.dataset_id = self._dataset_id\n\n if self._namespace:\n key.partition_id.namespace = self._namespace\n\n for item in self.path():\n element = key.path_element.add()\n if 'kind' in item:\n element.kind = item['kind']\n if 'id' in item:\n element.id = item['id']\n if 'name' in item:\n element.name = item['name']\n\n return key\n\n @classmethod\n def from_path(cls, *args, **kwargs):\n \"\"\"Factory method for creating a key based on a path.\n\n :type args: :class:`tuple`\n :param args: sequence of even length, where the first of each pair is a\n string representing the 'kind' of the path element, and\n the second of the pair is either a string (for the path\n element's name) 
or an integer (for its id).\n\n :type kwargs: :class:`dict`\n :param kwargs: Other named parameters which can be passed to\n :func:`Key.__init__`.\n\n :rtype: :class:`gcloud.datastore.key.Key`\n :returns: a new :class:`Key` instance\n \"\"\"\n if len(args) % 2:\n raise ValueError('Must pass an even number of args.')\n\n path = []\n items = iter(args)\n\n for kind, id_or_name in izip(items, items):\n entry = {'kind': kind}\n if isinstance(id_or_name, basestring):\n entry['name'] = id_or_name\n else:\n entry['id'] = id_or_name\n path.append(entry)\n\n kwargs['path'] = path\n return cls(**kwargs)\n\n def is_partial(self):\n \"\"\"Boolean test: is the key fully mapped onto a backend entity?\n\n :rtype: :class:`bool`\n :returns: True if the last element of the key's path does not have\n an 'id' or a 'name'.\n \"\"\"\n return self.id_or_name() is None\n\n def namespace(self, namespace=None):\n \"\"\"Namespace setter / getter.\n\n :type namespace: :class:`str`\n :param namespace: A namespace identifier for the key.\n\n :rtype: :class:`Key` (for setter); or :class:`str` (for getter)\n :returns: a new key, cloned from self., with the given namespace\n (setter); or self's namespace (getter).\n \"\"\"\n if namespace:\n clone = self._clone()\n clone._namespace = namespace\n return clone\n else:\n return self._namespace\n\n def path(self, path=None):\n \"\"\"Path setter / getter.\n\n :type path: sequence of dicts\n :param path: Each dict must have keys 'kind' (a string) and optionally\n 'name' (a string) or 'id' (an integer).\n\n :rtype: :class:`Key` (for setter); or :class:`str` (for getter)\n :returns: a new key, cloned from self., with the given path (setter);\n or self's path (getter).\n \"\"\"\n if path:\n clone = self._clone()\n clone._path = path\n return clone\n else:\n return self._path\n\n def kind(self, kind=None):\n \"\"\"Kind setter / getter. Based on the last element of path.\n\n :type kind: :class:`str`\n :param kind: The new kind for the key.\n\n :rtype: :class:`Key` (for setter); or :class:`str` (for getter)\n :returns: a new key, cloned from self., with the given kind (setter);\n or self's kind (getter).\n \"\"\"\n if kind:\n clone = self._clone()\n clone._path[-1]['kind'] = kind\n return clone\n elif self.path():\n return self._path[-1]['kind']\n\n def id(self, id_to_set=None):\n \"\"\"ID setter / getter. Based on the last element of path.\n\n :type id_to_set: :class:`int`\n :param id_to_set: The new ID for the key.\n\n :rtype: :class:`Key` (for setter); or :class:`int` (for getter)\n :returns: a new key, cloned from self., with the given id (setter);\n or self's id (getter).\n \"\"\"\n if id_to_set:\n clone = self._clone()\n clone._path[-1]['id'] = id_to_set\n return clone\n elif self.path():\n return self._path[-1].get('id')\n\n def name(self, name=None):\n \"\"\"Name setter / getter. Based on the last element of path.\n\n :type kind: :class:`str`\n :param kind: The new name for the key.\n\n :rtype: :class:`Key` (for setter); or :class:`str` (for getter)\n :returns: a new key, cloned from self., with the given name (setter);\n or self's name (getter).\n \"\"\"\n if name:\n clone = self._clone()\n clone._path[-1]['name'] = name\n return clone\n elif self.path():\n return self._path[-1].get('name')\n\n def id_or_name(self):\n \"\"\"Getter. 
Based on the last element of path.\n\n :rtype: :class:`int` (if 'id' is set); or :class:`str` (the 'name')\n :returns: True if the last element of the key's path has either an 'id'\n or a 'name'.\n \"\"\"\n return self.id() or self.name()\n\n def parent(self):\n \"\"\"Getter: return a new key for the next highest element in path.\n\n :rtype: :class:`gcloud.datastore.key.Key`\n :returns: a new `Key` instance, whose path consists of all but the last\n element of self's path. If self has only one path element,\n return None.\n \"\"\"\n if len(self._path) <= 1:\n return None\n return self.path(self.path()[:-1])\n\n def __repr__(self):\n return '<Key%s>' % self.path()\n", "path": "gcloud/datastore/key.py"}], "after_files": [{"content": "\"\"\"Create / interact with gcloud datastore keys.\"\"\"\n\nimport copy\nfrom itertools import izip\n\nfrom gcloud.datastore import datastore_v1_pb2 as datastore_pb\n\n\nclass Key(object):\n \"\"\"An immutable representation of a datastore Key.\n\n .. automethod:: __init__\n \"\"\"\n\n def __init__(self, path=None, namespace=None, dataset_id=None):\n \"\"\"Constructor / initializer for a key.\n\n :type namespace: :class:`str`\n :param namespace: A namespace identifier for the key.\n\n :type path: sequence of dicts\n :param path: Each dict must have keys 'kind' (a string) and optionally\n 'name' (a string) or 'id' (an integer).\n\n :type dataset_id: string\n :param dataset: The dataset ID assigned by back-end for the key.\n\n .. note::\n\n The key's ``_dataset_id`` field must be None for keys created\n by application code. The\n :func:`gcloud.datastore.helpers.key_from_protobuf` factory\n will be set the field to an appropriate value for keys returned\n from the datastore backend. The application **must** treat any\n value set by the back-end as opaque.\n \"\"\"\n self._path = path or [{'kind': ''}]\n self._namespace = namespace\n self._dataset_id = dataset_id\n\n def _clone(self):\n \"\"\"Duplicates the Key.\n\n We make a shallow copy of the :class:`gcloud.datastore.dataset.Dataset`\n because it holds a reference an authenticated connection,\n which we don't want to lose.\n\n :rtype: :class:`gcloud.datastore.key.Key`\n :returns: a new `Key` instance\n \"\"\"\n return copy.deepcopy(self)\n\n def to_protobuf(self):\n \"\"\"Return a protobuf corresponding to the key.\n\n :rtype: :class:`gcloud.datastore.datastore_v1_pb2.Key`\n :returns: The Protobuf representing the key.\n \"\"\"\n key = datastore_pb.Key()\n\n if self._dataset_id is not None:\n key.partition_id.dataset_id = self._dataset_id\n\n if self._namespace:\n key.partition_id.namespace = self._namespace\n\n for item in self.path():\n element = key.path_element.add()\n if 'kind' in item:\n element.kind = item['kind']\n if 'id' in item:\n element.id = item['id']\n if 'name' in item:\n element.name = item['name']\n\n return key\n\n @classmethod\n def from_path(cls, *args, **kwargs):\n \"\"\"Factory method for creating a key based on a path.\n\n :type args: :class:`tuple`\n :param args: sequence of even length, where the first of each pair is a\n string representing the 'kind' of the path element, and\n the second of the pair is either a string (for the path\n element's name) or an integer (for its id).\n\n :type kwargs: :class:`dict`\n :param kwargs: Other named parameters which can be passed to\n :func:`Key.__init__`.\n\n :rtype: :class:`gcloud.datastore.key.Key`\n :returns: a new :class:`Key` instance\n \"\"\"\n if len(args) % 2:\n raise ValueError('Must pass an even number of args.')\n\n path = []\n items = 
iter(args)\n\n for kind, id_or_name in izip(items, items):\n entry = {'kind': kind}\n if isinstance(id_or_name, basestring):\n entry['name'] = id_or_name\n else:\n entry['id'] = id_or_name\n path.append(entry)\n\n kwargs['path'] = path\n return cls(**kwargs)\n\n def is_partial(self):\n \"\"\"Boolean test: is the key fully mapped onto a backend entity?\n\n :rtype: :class:`bool`\n :returns: True if the last element of the key's path does not have\n an 'id' or a 'name'.\n \"\"\"\n return self.id_or_name() is None\n\n def namespace(self, namespace=None):\n \"\"\"Namespace setter / getter.\n\n :type namespace: :class:`str`\n :param namespace: A namespace identifier for the key.\n\n :rtype: :class:`Key` (for setter); or :class:`str` (for getter)\n :returns: a new key, cloned from self., with the given namespace\n (setter); or self's namespace (getter).\n \"\"\"\n if namespace:\n clone = self._clone()\n clone._namespace = namespace\n return clone\n else:\n return self._namespace\n\n def path(self, path=None):\n \"\"\"Path setter / getter.\n\n :type path: sequence of dicts\n :param path: Each dict must have keys 'kind' (a string) and optionally\n 'name' (a string) or 'id' (an integer).\n\n :rtype: :class:`Key` (for setter); or :class:`str` (for getter)\n :returns: a new key, cloned from self., with the given path (setter);\n or self's path (getter).\n \"\"\"\n if path:\n clone = self._clone()\n clone._path = path\n return clone\n else:\n return self._path\n\n def kind(self, kind=None):\n \"\"\"Kind setter / getter. Based on the last element of path.\n\n :type kind: :class:`str`\n :param kind: The new kind for the key.\n\n :rtype: :class:`Key` (for setter); or :class:`str` (for getter)\n :returns: a new key, cloned from self., with the given kind (setter);\n or self's kind (getter).\n \"\"\"\n if kind:\n clone = self._clone()\n clone._path[-1]['kind'] = kind\n return clone\n elif self.path():\n return self._path[-1]['kind']\n\n def id(self, id_to_set=None):\n \"\"\"ID setter / getter. Based on the last element of path.\n\n :type id_to_set: :class:`int`\n :param id_to_set: The new ID for the key.\n\n :rtype: :class:`Key` (for setter); or :class:`int` (for getter)\n :returns: a new key, cloned from self., with the given id (setter);\n or self's id (getter).\n \"\"\"\n if id_to_set:\n clone = self._clone()\n clone._path[-1]['id'] = id_to_set\n return clone\n elif self.path():\n return self._path[-1].get('id')\n\n def name(self, name=None):\n \"\"\"Name setter / getter. Based on the last element of path.\n\n :type kind: :class:`str`\n :param kind: The new name for the key.\n\n :rtype: :class:`Key` (for setter); or :class:`str` (for getter)\n :returns: a new key, cloned from self., with the given name (setter);\n or self's name (getter).\n \"\"\"\n if name:\n clone = self._clone()\n clone._path[-1]['name'] = name\n return clone\n elif self.path():\n return self._path[-1].get('name')\n\n def id_or_name(self):\n \"\"\"Getter. Based on the last element of path.\n\n :rtype: :class:`int` (if 'id' is set); or :class:`str` (the 'name')\n :returns: True if the last element of the key's path has either an 'id'\n or a 'name'.\n \"\"\"\n return self.id() or self.name()\n\n def parent(self):\n \"\"\"Getter: return a new key for the next highest element in path.\n\n :rtype: :class:`gcloud.datastore.key.Key`\n :returns: a new `Key` instance, whose path consists of all but the last\n element of self's path. 
If self has only one path element,\n return None.\n \"\"\"\n if len(self._path) <= 1:\n return None\n return self.path(self.path()[:-1])\n\n def __repr__(self):\n return '<Key%s>' % self.path()\n", "path": "gcloud/datastore/key.py"}]}
| 2,743 | 198 |
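As a concrete reading of the docstring amendment above, the sketch below exercises the behaviour it documents. It assumes the legacy (Python 2-era) `gcloud.datastore.key` module exactly as listed in the row; the `Person`/`alice` path is made up for illustration.

```python
# Sketch of the contract the amended docstring describes, using only the Key API shown above.
from gcloud.datastore.key import Key

# Keys built by application code leave _dataset_id unset ...
key = Key.from_path('Person', 'alice')
assert key._dataset_id is None
assert not key.is_partial()

# ... so the protobuf they emit carries no partition_id.dataset_id.
assert not key.to_protobuf().partition_id.dataset_id

# Only keys rebuilt from backend responses (gcloud.datastore.helpers.key_from_protobuf)
# carry a dataset ID, and that value is opaque to the application.
```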
gh_patches_debug_19161 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-5810 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dm - use national sites
Is it possible to use the national sites for dm stores instead of the German one? The format is `dm.[country code]` for all countries except for Bulgaria, Bosnia and Italy (which use `dm-drogeriemarkt.[country code]`) and Slovakia (`mojadm.sk`).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/dm.py`
Content:
```
1 import scrapy
2
3 from locations.categories import Categories, apply_category
4 from locations.dict_parser import DictParser
5 from locations.hours import DAYS, OpeningHours
6
7
8 class DmSpider(scrapy.Spider):
9 name = "dm"
10 item_attributes = {"brand": "dm", "brand_wikidata": "Q266572"}
11 allowed_domains = ["store-data-service.services.dmtech.com"]
12 start_urls = ["https://store-data-service.services.dmtech.com/stores/bbox/89.999,-179.999,-89.999,179.999"]
13
14 @staticmethod
15 def parse_hours(store_hours: [dict]) -> OpeningHours:
16 opening_hours = OpeningHours()
17
18 for store_day in store_hours:
19 for times in store_day["timeRanges"]:
20 open_time = times["opening"]
21 close_time = times["closing"]
22
23 opening_hours.add_range(DAYS[store_day["weekDay"] - 1], open_time, close_time)
24
25 return opening_hours
26
27 def parse(self, response, **kwargs):
28 for location in response.json()["stores"]:
29 location["address"]["street_address"] = location["address"].pop("street")
30 location["address"]["country"] = location["countryCode"]
31 location["name"] = location["address"].get("name")
32 item = DictParser.parse(location)
33 item["website"] = f'https://www.dm.de/store{location["storeUrlPath"]}'
34 item["extras"]["check_date"] = location["updateTimeStamp"]
35 item["opening_hours"] = self.parse_hours(location["openingHours"])
36
37 apply_category(Categories.SHOP_CHEMIST, item)
38
39 yield item
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/dm.py b/locations/spiders/dm.py
--- a/locations/spiders/dm.py
+++ b/locations/spiders/dm.py
@@ -30,7 +30,14 @@
location["address"]["country"] = location["countryCode"]
location["name"] = location["address"].get("name")
item = DictParser.parse(location)
- item["website"] = f'https://www.dm.de/store{location["storeUrlPath"]}'
+ if location["countryCode"] in ["BG", "BA", "IT"]:
+ item[
+ "website"
+ ] = f'https://www.dm-drogeriemarkt.{location["countryCode"].lower()}/store{location["storeUrlPath"]}'
+ elif location["countryCode"] == "SK":
+ item["website"] = f'https://www.mojadm.sk/store{location["storeUrlPath"]}'
+ else:
+ item["website"] = f'https://www.dm.{location["countryCode"].lower()}/store{location["storeUrlPath"]}'
item["extras"]["check_date"] = location["updateTimeStamp"]
item["opening_hours"] = self.parse_hours(location["openingHours"])
|
{"golden_diff": "diff --git a/locations/spiders/dm.py b/locations/spiders/dm.py\n--- a/locations/spiders/dm.py\n+++ b/locations/spiders/dm.py\n@@ -30,7 +30,14 @@\n location[\"address\"][\"country\"] = location[\"countryCode\"]\n location[\"name\"] = location[\"address\"].get(\"name\")\n item = DictParser.parse(location)\n- item[\"website\"] = f'https://www.dm.de/store{location[\"storeUrlPath\"]}'\n+ if location[\"countryCode\"] in [\"BG\", \"BA\", \"IT\"]:\n+ item[\n+ \"website\"\n+ ] = f'https://www.dm-drogeriemarkt.{location[\"countryCode\"].lower()}/store{location[\"storeUrlPath\"]}'\n+ elif location[\"countryCode\"] == \"SK\":\n+ item[\"website\"] = f'https://www.mojadm.sk/store{location[\"storeUrlPath\"]}'\n+ else:\n+ item[\"website\"] = f'https://www.dm.{location[\"countryCode\"].lower()}/store{location[\"storeUrlPath\"]}'\n item[\"extras\"][\"check_date\"] = location[\"updateTimeStamp\"]\n item[\"opening_hours\"] = self.parse_hours(location[\"openingHours\"])\n", "issue": "dm - use national sites\nIs it possible to use the national sites for dm stores instead of the German one? The format is `dm.[country code]` for all countries except for Bulgaria, Bosnia and Italy (which use `dm-drogeriemarkt.[country code]`) and Slovakia (`mojadm.sk`).\n", "before_files": [{"content": "import scrapy\n\nfrom locations.categories import Categories, apply_category\nfrom locations.dict_parser import DictParser\nfrom locations.hours import DAYS, OpeningHours\n\n\nclass DmSpider(scrapy.Spider):\n name = \"dm\"\n item_attributes = {\"brand\": \"dm\", \"brand_wikidata\": \"Q266572\"}\n allowed_domains = [\"store-data-service.services.dmtech.com\"]\n start_urls = [\"https://store-data-service.services.dmtech.com/stores/bbox/89.999,-179.999,-89.999,179.999\"]\n\n @staticmethod\n def parse_hours(store_hours: [dict]) -> OpeningHours:\n opening_hours = OpeningHours()\n\n for store_day in store_hours:\n for times in store_day[\"timeRanges\"]:\n open_time = times[\"opening\"]\n close_time = times[\"closing\"]\n\n opening_hours.add_range(DAYS[store_day[\"weekDay\"] - 1], open_time, close_time)\n\n return opening_hours\n\n def parse(self, response, **kwargs):\n for location in response.json()[\"stores\"]:\n location[\"address\"][\"street_address\"] = location[\"address\"].pop(\"street\")\n location[\"address\"][\"country\"] = location[\"countryCode\"]\n location[\"name\"] = location[\"address\"].get(\"name\")\n item = DictParser.parse(location)\n item[\"website\"] = f'https://www.dm.de/store{location[\"storeUrlPath\"]}'\n item[\"extras\"][\"check_date\"] = location[\"updateTimeStamp\"]\n item[\"opening_hours\"] = self.parse_hours(location[\"openingHours\"])\n\n apply_category(Categories.SHOP_CHEMIST, item)\n\n yield item\n", "path": "locations/spiders/dm.py"}], "after_files": [{"content": "import scrapy\n\nfrom locations.categories import Categories, apply_category\nfrom locations.dict_parser import DictParser\nfrom locations.hours import DAYS, OpeningHours\n\n\nclass DmSpider(scrapy.Spider):\n name = \"dm\"\n item_attributes = {\"brand\": \"dm\", \"brand_wikidata\": \"Q266572\"}\n allowed_domains = [\"store-data-service.services.dmtech.com\"]\n start_urls = [\"https://store-data-service.services.dmtech.com/stores/bbox/89.999,-179.999,-89.999,179.999\"]\n\n @staticmethod\n def parse_hours(store_hours: [dict]) -> OpeningHours:\n opening_hours = OpeningHours()\n\n for store_day in store_hours:\n for times in store_day[\"timeRanges\"]:\n open_time = times[\"opening\"]\n close_time = times[\"closing\"]\n\n 
opening_hours.add_range(DAYS[store_day[\"weekDay\"] - 1], open_time, close_time)\n\n return opening_hours\n\n def parse(self, response, **kwargs):\n for location in response.json()[\"stores\"]:\n location[\"address\"][\"street_address\"] = location[\"address\"].pop(\"street\")\n location[\"address\"][\"country\"] = location[\"countryCode\"]\n location[\"name\"] = location[\"address\"].get(\"name\")\n item = DictParser.parse(location)\n if location[\"countryCode\"] in [\"BG\", \"BA\", \"IT\"]:\n item[\n \"website\"\n ] = f'https://www.dm-drogeriemarkt.{location[\"countryCode\"].lower()}/store{location[\"storeUrlPath\"]}'\n elif location[\"countryCode\"] == \"SK\":\n item[\"website\"] = f'https://www.mojadm.sk/store{location[\"storeUrlPath\"]}'\n else:\n item[\"website\"] = f'https://www.dm.{location[\"countryCode\"].lower()}/store{location[\"storeUrlPath\"]}'\n item[\"extras\"][\"check_date\"] = location[\"updateTimeStamp\"]\n item[\"opening_hours\"] = self.parse_hours(location[\"openingHours\"])\n\n apply_category(Categories.SHOP_CHEMIST, item)\n\n yield item\n", "path": "locations/spiders/dm.py"}]}
| 767 | 269 |
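To make the country-to-domain rule in the patch above easy to verify in isolation, here is a small stand-alone helper that mirrors it; the function name and the sample inputs are illustrative, not part of the repository.

```python
# Mirrors the URL rule encoded in the patched spider, pulled out for easy unit testing.
def dm_store_url(country_code: str, store_url_path: str) -> str:
    if country_code in ("BG", "BA", "IT"):
        domain = f"dm-drogeriemarkt.{country_code.lower()}"
    elif country_code == "SK":
        domain = "mojadm.sk"
    else:
        domain = f"dm.{country_code.lower()}"
    return f"https://www.{domain}/store{store_url_path}"


assert dm_store_url("DE", "/de/berlin") == "https://www.dm.de/store/de/berlin"
assert dm_store_url("SK", "/sk/bratislava") == "https://www.mojadm.sk/store/sk/bratislava"
assert dm_store_url("IT", "/it/milano") == "https://www.dm-drogeriemarkt.it/store/it/milano"
```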
gh_patches_debug_7973 | rasdani/github-patches | git_diff | celery__celery-5870 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Continuous memory leak
There is a memory leak in the parent process of Celery's worker.
It is not a child process executing a task.
It happens suddenly every few days.
Unless you stop Celery, it consumes server memory in tens of hours.
This problem happens at least in Celery 4.1, and it also occurs in Celery 4.2.
Celery is running on Ubuntu 16 and brokers use RabbitMQ.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/events/receiver.py`
Content:
```
1 """Event receiver implementation."""
2 from __future__ import absolute_import, unicode_literals
3
4 import time
5 from operator import itemgetter
6
7 from kombu import Queue
8 from kombu.connection import maybe_channel
9 from kombu.mixins import ConsumerMixin
10
11 from celery import uuid
12 from celery.app import app_or_default
13 from celery.utils.time import adjust_timestamp
14
15 from .event import get_exchange
16
17 __all__ = ('EventReceiver',)
18
19 CLIENT_CLOCK_SKEW = -1
20
21 _TZGETTER = itemgetter('utcoffset', 'timestamp')
22
23
24 class EventReceiver(ConsumerMixin):
25 """Capture events.
26
27 Arguments:
28 connection (kombu.Connection): Connection to the broker.
29 handlers (Mapping[Callable]): Event handlers.
30 This is a map of event type names and their handlers.
31 The special handler `"*"` captures all events that don't have a
32 handler.
33 """
34
35 app = None
36
37 def __init__(self, channel, handlers=None, routing_key='#',
38 node_id=None, app=None, queue_prefix=None,
39 accept=None, queue_ttl=None, queue_expires=None):
40 self.app = app_or_default(app or self.app)
41 self.channel = maybe_channel(channel)
42 self.handlers = {} if handlers is None else handlers
43 self.routing_key = routing_key
44 self.node_id = node_id or uuid()
45 self.queue_prefix = queue_prefix or self.app.conf.event_queue_prefix
46 self.exchange = get_exchange(
47 self.connection or self.app.connection_for_write(),
48 name=self.app.conf.event_exchange)
49 if queue_ttl is None:
50 queue_ttl = self.app.conf.event_queue_ttl
51 if queue_expires is None:
52 queue_expires = self.app.conf.event_queue_expires
53 self.queue = Queue(
54 '.'.join([self.queue_prefix, self.node_id]),
55 exchange=self.exchange,
56 routing_key=self.routing_key,
57 auto_delete=True, durable=False,
58 message_ttl=queue_ttl,
59 expires=queue_expires,
60 )
61 self.clock = self.app.clock
62 self.adjust_clock = self.clock.adjust
63 self.forward_clock = self.clock.forward
64 if accept is None:
65 accept = {self.app.conf.event_serializer, 'json'}
66 self.accept = accept
67
68 def process(self, type, event):
69 """Process event by dispatching to configured handler."""
70 handler = self.handlers.get(type) or self.handlers.get('*')
71 handler and handler(event)
72
73 def get_consumers(self, Consumer, channel):
74 return [Consumer(queues=[self.queue],
75 callbacks=[self._receive], no_ack=True,
76 accept=self.accept)]
77
78 def on_consume_ready(self, connection, channel, consumers,
79 wakeup=True, **kwargs):
80 if wakeup:
81 self.wakeup_workers(channel=channel)
82
83 def itercapture(self, limit=None, timeout=None, wakeup=True):
84 return self.consume(limit=limit, timeout=timeout, wakeup=wakeup)
85
86 def capture(self, limit=None, timeout=None, wakeup=True):
87 """Open up a consumer capturing events.
88
89 This has to run in the main process, and it will never stop
90 unless :attr:`EventDispatcher.should_stop` is set to True, or
91 forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.
92 """
93 return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))
94
95 def wakeup_workers(self, channel=None):
96 self.app.control.broadcast('heartbeat',
97 connection=self.connection,
98 channel=channel)
99
100 def event_from_message(self, body, localize=True,
101 now=time.time, tzfields=_TZGETTER,
102 adjust_timestamp=adjust_timestamp,
103 CLIENT_CLOCK_SKEW=CLIENT_CLOCK_SKEW):
104 type = body['type']
105 if type == 'task-sent':
106 # clients never sync so cannot use their clock value
107 _c = body['clock'] = (self.clock.value or 1) + CLIENT_CLOCK_SKEW
108 self.adjust_clock(_c)
109 else:
110 try:
111 clock = body['clock']
112 except KeyError:
113 body['clock'] = self.forward_clock()
114 else:
115 self.adjust_clock(clock)
116
117 if localize:
118 try:
119 offset, timestamp = tzfields(body)
120 except KeyError:
121 pass
122 else:
123 body['timestamp'] = adjust_timestamp(timestamp, offset)
124 body['local_received'] = now()
125 return type, body
126
127 def _receive(self, body, message, list=list, isinstance=isinstance):
128 if isinstance(body, list): # celery 4.0: List of events
129 process, from_message = self.process, self.event_from_message
130 [process(*from_message(event)) for event in body]
131 else:
132 self.process(*self.event_from_message(body))
133
134 @property
135 def connection(self):
136 return self.channel.connection.client if self.channel else None
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/celery/events/receiver.py b/celery/events/receiver.py
--- a/celery/events/receiver.py
+++ b/celery/events/receiver.py
@@ -90,7 +90,8 @@
unless :attr:`EventDispatcher.should_stop` is set to True, or
forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.
"""
- return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))
+ for _ in self.consume(limit=limit, timeout=timeout, wakeup=wakeup):
+ pass
def wakeup_workers(self, channel=None):
self.app.control.broadcast('heartbeat',
|
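The fix above works because `capture()` drives an effectively endless `consume()` generator: wrapping it in `list()` keeps every event referenced for the lifetime of the call, whereas a plain `for` loop lets each event be garbage-collected once it has been handled. A minimal, library-independent sketch of the two patterns (the `event_stream` generator below is a stand-in, not Celery code):

```python
import itertools

def event_stream():
    # Stand-in for EventReceiver.consume(): yields events indefinitely.
    for i in itertools.count():
        yield {"type": "task-succeeded", "seq": i}

def capture_with_list(limit):
    # Old pattern: every yielded event stays referenced until the call returns.
    return list(itertools.islice(event_stream(), limit))

def capture_with_loop(limit):
    # Patched pattern: each event becomes unreachable right after it is processed.
    for _ in itertools.islice(event_stream(), limit):
        pass  # handlers would run here; nothing is accumulated

capture_with_list(100_000)  # peak memory grows with the number of events
capture_with_loop(100_000)  # peak memory stays flat
```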
{"golden_diff": "diff --git a/celery/events/receiver.py b/celery/events/receiver.py\n--- a/celery/events/receiver.py\n+++ b/celery/events/receiver.py\n@@ -90,7 +90,8 @@\n unless :attr:`EventDispatcher.should_stop` is set to True, or\n forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.\n \"\"\"\n- return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))\n+ for _ in self.consume(limit=limit, timeout=timeout, wakeup=wakeup):\n+ pass\n \n def wakeup_workers(self, channel=None):\n self.app.control.broadcast('heartbeat',\n", "issue": "Continuous memory leak\nThere is a memory leak in the parent process of Celery's worker.\nIt is not a child process executing a task.\nIt happens suddenly every few days.\nUnless you stop Celery, it consumes server memory in tens of hours.\n\nThis problem happens at least in Celery 4.1, and it also occurs in Celery 4.2.\nCelery is running on Ubuntu 16 and brokers use RabbitMQ.\n\n\n\n\n", "before_files": [{"content": "\"\"\"Event receiver implementation.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport time\nfrom operator import itemgetter\n\nfrom kombu import Queue\nfrom kombu.connection import maybe_channel\nfrom kombu.mixins import ConsumerMixin\n\nfrom celery import uuid\nfrom celery.app import app_or_default\nfrom celery.utils.time import adjust_timestamp\n\nfrom .event import get_exchange\n\n__all__ = ('EventReceiver',)\n\nCLIENT_CLOCK_SKEW = -1\n\n_TZGETTER = itemgetter('utcoffset', 'timestamp')\n\n\nclass EventReceiver(ConsumerMixin):\n \"\"\"Capture events.\n\n Arguments:\n connection (kombu.Connection): Connection to the broker.\n handlers (Mapping[Callable]): Event handlers.\n This is a map of event type names and their handlers.\n The special handler `\"*\"` captures all events that don't have a\n handler.\n \"\"\"\n\n app = None\n\n def __init__(self, channel, handlers=None, routing_key='#',\n node_id=None, app=None, queue_prefix=None,\n accept=None, queue_ttl=None, queue_expires=None):\n self.app = app_or_default(app or self.app)\n self.channel = maybe_channel(channel)\n self.handlers = {} if handlers is None else handlers\n self.routing_key = routing_key\n self.node_id = node_id or uuid()\n self.queue_prefix = queue_prefix or self.app.conf.event_queue_prefix\n self.exchange = get_exchange(\n self.connection or self.app.connection_for_write(),\n name=self.app.conf.event_exchange)\n if queue_ttl is None:\n queue_ttl = self.app.conf.event_queue_ttl\n if queue_expires is None:\n queue_expires = self.app.conf.event_queue_expires\n self.queue = Queue(\n '.'.join([self.queue_prefix, self.node_id]),\n exchange=self.exchange,\n routing_key=self.routing_key,\n auto_delete=True, durable=False,\n message_ttl=queue_ttl,\n expires=queue_expires,\n )\n self.clock = self.app.clock\n self.adjust_clock = self.clock.adjust\n self.forward_clock = self.clock.forward\n if accept is None:\n accept = {self.app.conf.event_serializer, 'json'}\n self.accept = accept\n\n def process(self, type, event):\n \"\"\"Process event by dispatching to configured handler.\"\"\"\n handler = self.handlers.get(type) or self.handlers.get('*')\n handler and handler(event)\n\n def get_consumers(self, Consumer, channel):\n return [Consumer(queues=[self.queue],\n callbacks=[self._receive], no_ack=True,\n accept=self.accept)]\n\n def on_consume_ready(self, connection, channel, consumers,\n wakeup=True, **kwargs):\n if wakeup:\n self.wakeup_workers(channel=channel)\n\n def itercapture(self, limit=None, timeout=None, wakeup=True):\n return 
self.consume(limit=limit, timeout=timeout, wakeup=wakeup)\n\n def capture(self, limit=None, timeout=None, wakeup=True):\n \"\"\"Open up a consumer capturing events.\n\n This has to run in the main process, and it will never stop\n unless :attr:`EventDispatcher.should_stop` is set to True, or\n forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.\n \"\"\"\n return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))\n\n def wakeup_workers(self, channel=None):\n self.app.control.broadcast('heartbeat',\n connection=self.connection,\n channel=channel)\n\n def event_from_message(self, body, localize=True,\n now=time.time, tzfields=_TZGETTER,\n adjust_timestamp=adjust_timestamp,\n CLIENT_CLOCK_SKEW=CLIENT_CLOCK_SKEW):\n type = body['type']\n if type == 'task-sent':\n # clients never sync so cannot use their clock value\n _c = body['clock'] = (self.clock.value or 1) + CLIENT_CLOCK_SKEW\n self.adjust_clock(_c)\n else:\n try:\n clock = body['clock']\n except KeyError:\n body['clock'] = self.forward_clock()\n else:\n self.adjust_clock(clock)\n\n if localize:\n try:\n offset, timestamp = tzfields(body)\n except KeyError:\n pass\n else:\n body['timestamp'] = adjust_timestamp(timestamp, offset)\n body['local_received'] = now()\n return type, body\n\n def _receive(self, body, message, list=list, isinstance=isinstance):\n if isinstance(body, list): # celery 4.0: List of events\n process, from_message = self.process, self.event_from_message\n [process(*from_message(event)) for event in body]\n else:\n self.process(*self.event_from_message(body))\n\n @property\n def connection(self):\n return self.channel.connection.client if self.channel else None\n", "path": "celery/events/receiver.py"}], "after_files": [{"content": "\"\"\"Event receiver implementation.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport time\nfrom operator import itemgetter\n\nfrom kombu import Queue\nfrom kombu.connection import maybe_channel\nfrom kombu.mixins import ConsumerMixin\n\nfrom celery import uuid\nfrom celery.app import app_or_default\nfrom celery.utils.time import adjust_timestamp\n\nfrom .event import get_exchange\n\n__all__ = ('EventReceiver',)\n\nCLIENT_CLOCK_SKEW = -1\n\n_TZGETTER = itemgetter('utcoffset', 'timestamp')\n\n\nclass EventReceiver(ConsumerMixin):\n \"\"\"Capture events.\n\n Arguments:\n connection (kombu.Connection): Connection to the broker.\n handlers (Mapping[Callable]): Event handlers.\n This is a map of event type names and their handlers.\n The special handler `\"*\"` captures all events that don't have a\n handler.\n \"\"\"\n\n app = None\n\n def __init__(self, channel, handlers=None, routing_key='#',\n node_id=None, app=None, queue_prefix=None,\n accept=None, queue_ttl=None, queue_expires=None):\n self.app = app_or_default(app or self.app)\n self.channel = maybe_channel(channel)\n self.handlers = {} if handlers is None else handlers\n self.routing_key = routing_key\n self.node_id = node_id or uuid()\n self.queue_prefix = queue_prefix or self.app.conf.event_queue_prefix\n self.exchange = get_exchange(\n self.connection or self.app.connection_for_write(),\n name=self.app.conf.event_exchange)\n if queue_ttl is None:\n queue_ttl = self.app.conf.event_queue_ttl\n if queue_expires is None:\n queue_expires = self.app.conf.event_queue_expires\n self.queue = Queue(\n '.'.join([self.queue_prefix, self.node_id]),\n exchange=self.exchange,\n routing_key=self.routing_key,\n auto_delete=True, durable=False,\n message_ttl=queue_ttl,\n expires=queue_expires,\n )\n self.clock = 
self.app.clock\n self.adjust_clock = self.clock.adjust\n self.forward_clock = self.clock.forward\n if accept is None:\n accept = {self.app.conf.event_serializer, 'json'}\n self.accept = accept\n\n def process(self, type, event):\n \"\"\"Process event by dispatching to configured handler.\"\"\"\n handler = self.handlers.get(type) or self.handlers.get('*')\n handler and handler(event)\n\n def get_consumers(self, Consumer, channel):\n return [Consumer(queues=[self.queue],\n callbacks=[self._receive], no_ack=True,\n accept=self.accept)]\n\n def on_consume_ready(self, connection, channel, consumers,\n wakeup=True, **kwargs):\n if wakeup:\n self.wakeup_workers(channel=channel)\n\n def itercapture(self, limit=None, timeout=None, wakeup=True):\n return self.consume(limit=limit, timeout=timeout, wakeup=wakeup)\n\n def capture(self, limit=None, timeout=None, wakeup=True):\n \"\"\"Open up a consumer capturing events.\n\n This has to run in the main process, and it will never stop\n unless :attr:`EventDispatcher.should_stop` is set to True, or\n forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.\n \"\"\"\n for _ in self.consume(limit=limit, timeout=timeout, wakeup=wakeup):\n pass\n\n def wakeup_workers(self, channel=None):\n self.app.control.broadcast('heartbeat',\n connection=self.connection,\n channel=channel)\n\n def event_from_message(self, body, localize=True,\n now=time.time, tzfields=_TZGETTER,\n adjust_timestamp=adjust_timestamp,\n CLIENT_CLOCK_SKEW=CLIENT_CLOCK_SKEW):\n type = body['type']\n if type == 'task-sent':\n # clients never sync so cannot use their clock value\n _c = body['clock'] = (self.clock.value or 1) + CLIENT_CLOCK_SKEW\n self.adjust_clock(_c)\n else:\n try:\n clock = body['clock']\n except KeyError:\n body['clock'] = self.forward_clock()\n else:\n self.adjust_clock(clock)\n\n if localize:\n try:\n offset, timestamp = tzfields(body)\n except KeyError:\n pass\n else:\n body['timestamp'] = adjust_timestamp(timestamp, offset)\n body['local_received'] = now()\n return type, body\n\n def _receive(self, body, message, list=list, isinstance=isinstance):\n if isinstance(body, list): # celery 4.0: List of events\n process, from_message = self.process, self.event_from_message\n [process(*from_message(event)) for event in body]\n else:\n self.process(*self.event_from_message(body))\n\n @property\n def connection(self):\n return self.channel.connection.client if self.channel else None\n", "path": "celery/events/receiver.py"}]}
| 1,748 | 145 |
gh_patches_debug_32116
|
rasdani/github-patches
|
git_diff
|
google__mobly-238
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Properly report skipped test classes
Now that we have a way to reliably get all the tests in a class, including the generated tests, the test report for an aborted class should include entries for all the tests requested, instead of only one entry for the class.
--- END ISSUE ---
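With the `fail_class` implementation shown in `mobly/records.py` below, an aborted class is filed as a single class-level record, so the summary no longer reflects the tests that were requested. A small sketch of that mismatch, assuming the unpatched module quoted below is what gets imported:

```python
from mobly import records

result = records.TestResult()
result.requested = ["test_a", "test_b", "test_c"]  # three tests were requested

# Class setup fails before any test can run; one record is filed for the class.
class_record = records.TestResultRecord("setup_class", t_class="SampleTest")
class_record.test_begin()
class_record.test_fail(Exception("setup_class raised"))
result.fail_class(class_record)

print(result.summary_dict())
# {'Requested': 3, 'Executed': 1, 'Passed': 0, 'Failed': 1, 'Skipped': 0, 'Error': 0}
# The report carries one entry for the whole class instead of one per requested test.
```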
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mobly/records.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """This module is where all the record definitions and record containers live.
15 """
16
17 import json
18 import logging
19 import pprint
20 import sys
21 import traceback
22
23 from mobly import signals
24 from mobly import utils
25
26
27 class TestResultEnums(object):
28 """Enums used for TestResultRecord class.
29
30 Includes the tokens to mark test result with, and the string names for each
31 field in TestResultRecord.
32 """
33
34 RECORD_NAME = 'Test Name'
35 RECORD_CLASS = 'Test Class'
36 RECORD_BEGIN_TIME = 'Begin Time'
37 RECORD_END_TIME = 'End Time'
38 RECORD_RESULT = 'Result'
39 RECORD_UID = 'UID'
40 RECORD_EXTRAS = 'Extras'
41 RECORD_EXTRA_ERRORS = 'Extra Errors'
42 RECORD_DETAILS = 'Details'
43 RECORD_STACKTRACE = 'Stacktrace'
44 TEST_RESULT_PASS = 'PASS'
45 TEST_RESULT_FAIL = 'FAIL'
46 TEST_RESULT_SKIP = 'SKIP'
47 TEST_RESULT_ERROR = 'ERROR'
48
49
50 class TestResultRecord(object):
51 """A record that holds the information of a test execution.
52
53 Attributes:
54 test_name: A string representing the name of the test method.
55 begin_time: Epoch timestamp of when the test started.
56 end_time: Epoch timestamp of when the test ended.
57 self.uid: Unique identifier of a test.
58 self.result: Test result, PASS/FAIL/SKIP.
59 self.extras: User defined extra information of the test result.
60 self.details: A string explaining the details of the test.
61 """
62
63 def __init__(self, t_name, t_class=None):
64 self.test_name = t_name
65 self.test_class = t_class
66 self.begin_time = None
67 self.end_time = None
68 self.uid = None
69 self.result = None
70 self.extras = None
71 self.details = None
72 self.stacktrace = None
73 self.extra_errors = {}
74
75 def test_begin(self):
76 """Call this when the test begins execution.
77
78 Sets the begin_time of this record.
79 """
80 self.begin_time = utils.get_current_epoch_time()
81
82 def _test_end(self, result, e):
83 """Class internal function to signal the end of a test execution.
84
85 Args:
86 result: One of the TEST_RESULT enums in TestResultEnums.
87 e: A test termination signal (usually an exception object). It can
88 be any exception instance or of any subclass of
89 mobly.signals.TestSignal.
90 """
91 self.end_time = utils.get_current_epoch_time()
92 self.result = result
93 if self.extra_errors:
94 self.result = TestResultEnums.TEST_RESULT_ERROR
95 if isinstance(e, signals.TestSignal):
96 self.details = e.details
97 _, _, exc_traceback = sys.exc_info()
98 if exc_traceback:
99 self.stacktrace = ''.join(traceback.format_tb(exc_traceback))
100 self.extras = e.extras
101 elif isinstance(e, Exception):
102 self.details = str(e)
103 _, _, exc_traceback = sys.exc_info()
104 if exc_traceback:
105 self.stacktrace = ''.join(traceback.format_tb(exc_traceback))
106
107 def test_pass(self, e=None):
108 """To mark the test as passed in this record.
109
110 Args:
111 e: An instance of mobly.signals.TestPass.
112 """
113 self._test_end(TestResultEnums.TEST_RESULT_PASS, e)
114
115 def test_fail(self, e=None):
116 """To mark the test as failed in this record.
117
118 Only test_fail does instance check because we want 'assert xxx' to also
119 fail the test same way assert_true does.
120
121 Args:
122 e: An exception object. It can be an instance of AssertionError or
123 mobly.base_test.TestFailure.
124 """
125 self._test_end(TestResultEnums.TEST_RESULT_FAIL, e)
126
127 def test_skip(self, e=None):
128 """To mark the test as skipped in this record.
129
130 Args:
131 e: An instance of mobly.signals.TestSkip.
132 """
133 self._test_end(TestResultEnums.TEST_RESULT_SKIP, e)
134
135 def test_error(self, e=None):
136 """To mark the test as error in this record.
137
138 Args:
139 e: An exception object.
140 """
141 self._test_end(TestResultEnums.TEST_RESULT_ERROR, e)
142
143 def add_error(self, tag, e):
144 """Add extra error happened during a test mark the test result as
145 ERROR.
146
147 If an error is added the test record, the record's result is equivalent
148 to the case where an uncaught exception happened.
149
150 Args:
151 tag: A string describing where this error came from, e.g. 'on_pass'.
152 e: An exception object.
153 """
154 self.result = TestResultEnums.TEST_RESULT_ERROR
155 self.extra_errors[tag] = str(e)
156
157 def __str__(self):
158 d = self.to_dict()
159 l = ['%s = %s' % (k, v) for k, v in d.items()]
160 s = ', '.join(l)
161 return s
162
163 def __repr__(self):
164 """This returns a short string representation of the test record."""
165 t = utils.epoch_to_human_time(self.begin_time)
166 return '%s %s %s' % (t, self.test_name, self.result)
167
168 def to_dict(self):
169 """Gets a dictionary representating the content of this class.
170
171 Returns:
172 A dictionary representating the content of this class.
173 """
174 d = {}
175 d[TestResultEnums.RECORD_NAME] = self.test_name
176 d[TestResultEnums.RECORD_CLASS] = self.test_class
177 d[TestResultEnums.RECORD_BEGIN_TIME] = self.begin_time
178 d[TestResultEnums.RECORD_END_TIME] = self.end_time
179 d[TestResultEnums.RECORD_RESULT] = self.result
180 d[TestResultEnums.RECORD_UID] = self.uid
181 d[TestResultEnums.RECORD_EXTRAS] = self.extras
182 d[TestResultEnums.RECORD_DETAILS] = self.details
183 d[TestResultEnums.RECORD_EXTRA_ERRORS] = self.extra_errors
184 d[TestResultEnums.RECORD_STACKTRACE] = self.stacktrace
185 return d
186
187 def json_str(self):
188 """Converts this test record to a string in json format.
189
190 Format of the json string is:
191 {
192 'Test Name': <test name>,
193 'Begin Time': <epoch timestamp>,
194 'Details': <details>,
195 ...
196 }
197
198 Returns:
199 A json-format string representing the test record.
200 """
201 return json.dumps(self.to_dict())
202
203
204 class TestResult(object):
205 """A class that contains metrics of a test run.
206
207 This class is essentially a container of TestResultRecord objects.
208
209 Attributes:
210 self.requested: A list of strings, each is the name of a test requested
211 by user.
212 self.failed: A list of records for tests failed.
213 self.executed: A list of records for tests that were actually executed.
214 self.passed: A list of records for tests passed.
215 self.skipped: A list of records for tests skipped.
216 self.error: A list of records for tests with error result token.
217 """
218
219 def __init__(self):
220 self.requested = []
221 self.failed = []
222 self.executed = []
223 self.passed = []
224 self.skipped = []
225 self.error = []
226 self.controller_info = {}
227
228 def __add__(self, r):
229 """Overrides '+' operator for TestResult class.
230
231 The add operator merges two TestResult objects by concatenating all of
232 their lists together.
233
234 Args:
235 r: another instance of TestResult to be added
236
237 Returns:
238 A TestResult instance that's the sum of two TestResult instances.
239 """
240 if not isinstance(r, TestResult):
241 raise TypeError('Operand %s of type %s is not a TestResult.' %
242 (r, type(r)))
243 sum_result = TestResult()
244 for name in sum_result.__dict__:
245 r_value = getattr(r, name)
246 l_value = getattr(self, name)
247 if isinstance(r_value, list):
248 setattr(sum_result, name, l_value + r_value)
249 elif isinstance(r_value, dict):
250 # '+' operator for TestResult is only valid when multiple
251 # TestResult objs were created in the same test run, which means
252 # the controller info would be the same across all of them.
253 # TODO(angli): have a better way to validate this situation.
254 setattr(sum_result, name, l_value)
255 return sum_result
256
257 def add_record(self, record):
258 """Adds a test record to test result.
259
260 A record is considered executed once it's added to the test result.
261
262 Args:
263 record: A test record object to add.
264 """
265 self.executed.append(record)
266 if record.result == TestResultEnums.TEST_RESULT_FAIL:
267 self.failed.append(record)
268 elif record.result == TestResultEnums.TEST_RESULT_SKIP:
269 self.skipped.append(record)
270 elif record.result == TestResultEnums.TEST_RESULT_PASS:
271 self.passed.append(record)
272 else:
273 self.error.append(record)
274
275 def add_controller_info(self, name, info):
276 try:
277 json.dumps(info)
278 except TypeError:
279 logging.warning('Controller info for %s is not JSON serializable!'
280 ' Coercing it to string.' % name)
281 self.controller_info[name] = str(info)
282 return
283 self.controller_info[name] = info
284
285 def fail_class(self, test_record):
286 """Add a record to indicate a test class setup has failed and no test
287 in the class was executed.
288
289 Args:
290 test_record: A TestResultRecord object for the test class.
291 """
292 self.executed.append(test_record)
293 self.failed.append(test_record)
294
295 @property
296 def is_all_pass(self):
297 """True if no tests failed or threw errors, False otherwise."""
298 num_of_failures = len(self.failed) + len(self.error)
299 if num_of_failures == 0:
300 return True
301 return False
302
303 def json_str(self):
304 """Converts this test result to a string in json format.
305
306 Format of the json string is:
307 {
308 'Results': [
309 {<executed test record 1>},
310 {<executed test record 2>},
311 ...
312 ],
313 'Summary': <summary dict>
314 }
315
316 Returns:
317 A json-format string representing the test results.
318 """
319 d = {}
320 d['ControllerInfo'] = self.controller_info
321 d['Results'] = [record.to_dict() for record in self.executed]
322 d['Summary'] = self.summary_dict()
323 json_str = json.dumps(d, indent=4, sort_keys=True)
324 return json_str
325
326 def summary_str(self):
327 """Gets a string that summarizes the stats of this test result.
328
329 The summary provides the counts of how many tests fall into each
330 category, like 'Passed', 'Failed' etc.
331
332 Format of the string is:
333 Requested <int>, Executed <int>, ...
334
335 Returns:
336 A summary string of this test result.
337 """
338 l = ['%s %d' % (k, v) for k, v in self.summary_dict().items()]
339 # Sort the list so the order is the same every time.
340 msg = ', '.join(sorted(l))
341 return msg
342
343 def summary_dict(self):
344 """Gets a dictionary that summarizes the stats of this test result.
345
346 The summary provides the counts of how many tests fall into each
347 category, like 'Passed', 'Failed' etc.
348
349 Returns:
350 A dictionary with the stats of this test result.
351 """
352 d = {}
353 d['Requested'] = len(self.requested)
354 d['Executed'] = len(self.executed)
355 d['Passed'] = len(self.passed)
356 d['Failed'] = len(self.failed)
357 d['Skipped'] = len(self.skipped)
358 d['Error'] = len(self.error)
359 return d
360
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mobly/records.py b/mobly/records.py
--- a/mobly/records.py
+++ b/mobly/records.py
@@ -262,11 +262,12 @@
Args:
record: A test record object to add.
"""
+ if record.result == TestResultEnums.TEST_RESULT_SKIP:
+ self.skipped.append(record)
+ return
self.executed.append(record)
if record.result == TestResultEnums.TEST_RESULT_FAIL:
self.failed.append(record)
- elif record.result == TestResultEnums.TEST_RESULT_SKIP:
- self.skipped.append(record)
elif record.result == TestResultEnums.TEST_RESULT_PASS:
self.passed.append(record)
else:
@@ -283,14 +284,32 @@
self.controller_info[name] = info
def fail_class(self, test_record):
- """Add a record to indicate a test class setup has failed and no test
- in the class was executed.
+ """Add a record to indicate a test class has failed before any test
+ could execute.
+
+ This is only called before any test is actually executed. So it only
+ adds an error entry that describes why the class failed to the tally
+ and does not affect the total number of tests requrested or exedcuted.
Args:
test_record: A TestResultRecord object for the test class.
"""
- self.executed.append(test_record)
- self.failed.append(test_record)
+ self.error.append(test_record)
+
+ def is_test_executed(self, test_name):
+ """Checks if a specific test has been executed.
+
+ Args:
+ test_name: string, the name of the test to check.
+
+ Returns:
+ True if the test has been executed according to the test result,
+ False otherwise.
+ """
+ for record in self.executed:
+ if record.test_name == test_name:
+ return True
+ return False
@property
def is_all_pass(self):
|
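Under the patched `add_record` and `fail_class` above, a skipped test is tallied without being counted as executed, and a class-level failure is recorded as a single error entry rather than inflating `executed`. A short sketch of the new accounting, assuming the patched module is the one imported:

```python
from mobly import records

result = records.TestResult()
result.requested = ["test_a", "test_b"]

skip_record = records.TestResultRecord("test_a", t_class="SampleTest")
skip_record.test_begin()
skip_record.test_skip()
result.add_record(skip_record)        # lands in `skipped`, not `executed`

class_record = records.TestResultRecord("SampleTest", t_class="SampleTest")
class_record.test_begin()
class_record.test_error(Exception("setup_class raised"))
result.fail_class(class_record)       # recorded only as an error entry

print(result.is_test_executed("test_a"))  # False
print(result.summary_dict())
# {'Requested': 2, 'Executed': 0, 'Passed': 0, 'Failed': 0, 'Skipped': 1, 'Error': 1}
```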
{"golden_diff": "diff --git a/mobly/records.py b/mobly/records.py\n--- a/mobly/records.py\n+++ b/mobly/records.py\n@@ -262,11 +262,12 @@\n Args:\n record: A test record object to add.\n \"\"\"\n+ if record.result == TestResultEnums.TEST_RESULT_SKIP:\n+ self.skipped.append(record)\n+ return\n self.executed.append(record)\n if record.result == TestResultEnums.TEST_RESULT_FAIL:\n self.failed.append(record)\n- elif record.result == TestResultEnums.TEST_RESULT_SKIP:\n- self.skipped.append(record)\n elif record.result == TestResultEnums.TEST_RESULT_PASS:\n self.passed.append(record)\n else:\n@@ -283,14 +284,32 @@\n self.controller_info[name] = info\n \n def fail_class(self, test_record):\n- \"\"\"Add a record to indicate a test class setup has failed and no test\n- in the class was executed.\n+ \"\"\"Add a record to indicate a test class has failed before any test\n+ could execute.\n+\n+ This is only called before any test is actually executed. So it only\n+ adds an error entry that describes why the class failed to the tally\n+ and does not affect the total number of tests requrested or exedcuted.\n \n Args:\n test_record: A TestResultRecord object for the test class.\n \"\"\"\n- self.executed.append(test_record)\n- self.failed.append(test_record)\n+ self.error.append(test_record)\n+\n+ def is_test_executed(self, test_name):\n+ \"\"\"Checks if a specific test has been executed.\n+\n+ Args:\n+ test_name: string, the name of the test to check.\n+\n+ Returns:\n+ True if the test has been executed according to the test result,\n+ False otherwise.\n+ \"\"\"\n+ for record in self.executed:\n+ if record.test_name == test_name:\n+ return True\n+ return False\n \n @property\n def is_all_pass(self):\n", "issue": "Properly report skipped test classes\nNow that we have a way to reliably get all the tests in a class, including the generated tests, the test report for an aborted class should include entries for all the tests requested, instead of only one entry for the class.\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"This module is where all the record definitions and record containers live.\n\"\"\"\n\nimport json\nimport logging\nimport pprint\nimport sys\nimport traceback\n\nfrom mobly import signals\nfrom mobly import utils\n\n\nclass TestResultEnums(object):\n \"\"\"Enums used for TestResultRecord class.\n\n Includes the tokens to mark test result with, and the string names for each\n field in TestResultRecord.\n \"\"\"\n\n RECORD_NAME = 'Test Name'\n RECORD_CLASS = 'Test Class'\n RECORD_BEGIN_TIME = 'Begin Time'\n RECORD_END_TIME = 'End Time'\n RECORD_RESULT = 'Result'\n RECORD_UID = 'UID'\n RECORD_EXTRAS = 'Extras'\n RECORD_EXTRA_ERRORS = 'Extra Errors'\n RECORD_DETAILS = 'Details'\n RECORD_STACKTRACE = 'Stacktrace'\n TEST_RESULT_PASS = 'PASS'\n TEST_RESULT_FAIL = 'FAIL'\n TEST_RESULT_SKIP = 'SKIP'\n TEST_RESULT_ERROR = 'ERROR'\n\n\nclass TestResultRecord(object):\n \"\"\"A record that holds the information of a test 
execution.\n\n Attributes:\n test_name: A string representing the name of the test method.\n begin_time: Epoch timestamp of when the test started.\n end_time: Epoch timestamp of when the test ended.\n self.uid: Unique identifier of a test.\n self.result: Test result, PASS/FAIL/SKIP.\n self.extras: User defined extra information of the test result.\n self.details: A string explaining the details of the test.\n \"\"\"\n\n def __init__(self, t_name, t_class=None):\n self.test_name = t_name\n self.test_class = t_class\n self.begin_time = None\n self.end_time = None\n self.uid = None\n self.result = None\n self.extras = None\n self.details = None\n self.stacktrace = None\n self.extra_errors = {}\n\n def test_begin(self):\n \"\"\"Call this when the test begins execution.\n\n Sets the begin_time of this record.\n \"\"\"\n self.begin_time = utils.get_current_epoch_time()\n\n def _test_end(self, result, e):\n \"\"\"Class internal function to signal the end of a test execution.\n\n Args:\n result: One of the TEST_RESULT enums in TestResultEnums.\n e: A test termination signal (usually an exception object). It can\n be any exception instance or of any subclass of\n mobly.signals.TestSignal.\n \"\"\"\n self.end_time = utils.get_current_epoch_time()\n self.result = result\n if self.extra_errors:\n self.result = TestResultEnums.TEST_RESULT_ERROR\n if isinstance(e, signals.TestSignal):\n self.details = e.details\n _, _, exc_traceback = sys.exc_info()\n if exc_traceback:\n self.stacktrace = ''.join(traceback.format_tb(exc_traceback))\n self.extras = e.extras\n elif isinstance(e, Exception):\n self.details = str(e)\n _, _, exc_traceback = sys.exc_info()\n if exc_traceback:\n self.stacktrace = ''.join(traceback.format_tb(exc_traceback))\n\n def test_pass(self, e=None):\n \"\"\"To mark the test as passed in this record.\n\n Args:\n e: An instance of mobly.signals.TestPass.\n \"\"\"\n self._test_end(TestResultEnums.TEST_RESULT_PASS, e)\n\n def test_fail(self, e=None):\n \"\"\"To mark the test as failed in this record.\n\n Only test_fail does instance check because we want 'assert xxx' to also\n fail the test same way assert_true does.\n\n Args:\n e: An exception object. It can be an instance of AssertionError or\n mobly.base_test.TestFailure.\n \"\"\"\n self._test_end(TestResultEnums.TEST_RESULT_FAIL, e)\n\n def test_skip(self, e=None):\n \"\"\"To mark the test as skipped in this record.\n\n Args:\n e: An instance of mobly.signals.TestSkip.\n \"\"\"\n self._test_end(TestResultEnums.TEST_RESULT_SKIP, e)\n\n def test_error(self, e=None):\n \"\"\"To mark the test as error in this record.\n\n Args:\n e: An exception object.\n \"\"\"\n self._test_end(TestResultEnums.TEST_RESULT_ERROR, e)\n\n def add_error(self, tag, e):\n \"\"\"Add extra error happened during a test mark the test result as\n ERROR.\n\n If an error is added the test record, the record's result is equivalent\n to the case where an uncaught exception happened.\n\n Args:\n tag: A string describing where this error came from, e.g. 
'on_pass'.\n e: An exception object.\n \"\"\"\n self.result = TestResultEnums.TEST_RESULT_ERROR\n self.extra_errors[tag] = str(e)\n\n def __str__(self):\n d = self.to_dict()\n l = ['%s = %s' % (k, v) for k, v in d.items()]\n s = ', '.join(l)\n return s\n\n def __repr__(self):\n \"\"\"This returns a short string representation of the test record.\"\"\"\n t = utils.epoch_to_human_time(self.begin_time)\n return '%s %s %s' % (t, self.test_name, self.result)\n\n def to_dict(self):\n \"\"\"Gets a dictionary representating the content of this class.\n\n Returns:\n A dictionary representating the content of this class.\n \"\"\"\n d = {}\n d[TestResultEnums.RECORD_NAME] = self.test_name\n d[TestResultEnums.RECORD_CLASS] = self.test_class\n d[TestResultEnums.RECORD_BEGIN_TIME] = self.begin_time\n d[TestResultEnums.RECORD_END_TIME] = self.end_time\n d[TestResultEnums.RECORD_RESULT] = self.result\n d[TestResultEnums.RECORD_UID] = self.uid\n d[TestResultEnums.RECORD_EXTRAS] = self.extras\n d[TestResultEnums.RECORD_DETAILS] = self.details\n d[TestResultEnums.RECORD_EXTRA_ERRORS] = self.extra_errors\n d[TestResultEnums.RECORD_STACKTRACE] = self.stacktrace\n return d\n\n def json_str(self):\n \"\"\"Converts this test record to a string in json format.\n\n Format of the json string is:\n {\n 'Test Name': <test name>,\n 'Begin Time': <epoch timestamp>,\n 'Details': <details>,\n ...\n }\n\n Returns:\n A json-format string representing the test record.\n \"\"\"\n return json.dumps(self.to_dict())\n\n\nclass TestResult(object):\n \"\"\"A class that contains metrics of a test run.\n\n This class is essentially a container of TestResultRecord objects.\n\n Attributes:\n self.requested: A list of strings, each is the name of a test requested\n by user.\n self.failed: A list of records for tests failed.\n self.executed: A list of records for tests that were actually executed.\n self.passed: A list of records for tests passed.\n self.skipped: A list of records for tests skipped.\n self.error: A list of records for tests with error result token.\n \"\"\"\n\n def __init__(self):\n self.requested = []\n self.failed = []\n self.executed = []\n self.passed = []\n self.skipped = []\n self.error = []\n self.controller_info = {}\n\n def __add__(self, r):\n \"\"\"Overrides '+' operator for TestResult class.\n\n The add operator merges two TestResult objects by concatenating all of\n their lists together.\n\n Args:\n r: another instance of TestResult to be added\n\n Returns:\n A TestResult instance that's the sum of two TestResult instances.\n \"\"\"\n if not isinstance(r, TestResult):\n raise TypeError('Operand %s of type %s is not a TestResult.' 
%\n (r, type(r)))\n sum_result = TestResult()\n for name in sum_result.__dict__:\n r_value = getattr(r, name)\n l_value = getattr(self, name)\n if isinstance(r_value, list):\n setattr(sum_result, name, l_value + r_value)\n elif isinstance(r_value, dict):\n # '+' operator for TestResult is only valid when multiple\n # TestResult objs were created in the same test run, which means\n # the controller info would be the same across all of them.\n # TODO(angli): have a better way to validate this situation.\n setattr(sum_result, name, l_value)\n return sum_result\n\n def add_record(self, record):\n \"\"\"Adds a test record to test result.\n\n A record is considered executed once it's added to the test result.\n\n Args:\n record: A test record object to add.\n \"\"\"\n self.executed.append(record)\n if record.result == TestResultEnums.TEST_RESULT_FAIL:\n self.failed.append(record)\n elif record.result == TestResultEnums.TEST_RESULT_SKIP:\n self.skipped.append(record)\n elif record.result == TestResultEnums.TEST_RESULT_PASS:\n self.passed.append(record)\n else:\n self.error.append(record)\n\n def add_controller_info(self, name, info):\n try:\n json.dumps(info)\n except TypeError:\n logging.warning('Controller info for %s is not JSON serializable!'\n ' Coercing it to string.' % name)\n self.controller_info[name] = str(info)\n return\n self.controller_info[name] = info\n\n def fail_class(self, test_record):\n \"\"\"Add a record to indicate a test class setup has failed and no test\n in the class was executed.\n\n Args:\n test_record: A TestResultRecord object for the test class.\n \"\"\"\n self.executed.append(test_record)\n self.failed.append(test_record)\n\n @property\n def is_all_pass(self):\n \"\"\"True if no tests failed or threw errors, False otherwise.\"\"\"\n num_of_failures = len(self.failed) + len(self.error)\n if num_of_failures == 0:\n return True\n return False\n\n def json_str(self):\n \"\"\"Converts this test result to a string in json format.\n\n Format of the json string is:\n {\n 'Results': [\n {<executed test record 1>},\n {<executed test record 2>},\n ...\n ],\n 'Summary': <summary dict>\n }\n\n Returns:\n A json-format string representing the test results.\n \"\"\"\n d = {}\n d['ControllerInfo'] = self.controller_info\n d['Results'] = [record.to_dict() for record in self.executed]\n d['Summary'] = self.summary_dict()\n json_str = json.dumps(d, indent=4, sort_keys=True)\n return json_str\n\n def summary_str(self):\n \"\"\"Gets a string that summarizes the stats of this test result.\n\n The summary provides the counts of how many tests fall into each\n category, like 'Passed', 'Failed' etc.\n\n Format of the string is:\n Requested <int>, Executed <int>, ...\n\n Returns:\n A summary string of this test result.\n \"\"\"\n l = ['%s %d' % (k, v) for k, v in self.summary_dict().items()]\n # Sort the list so the order is the same every time.\n msg = ', '.join(sorted(l))\n return msg\n\n def summary_dict(self):\n \"\"\"Gets a dictionary that summarizes the stats of this test result.\n\n The summary provides the counts of how many tests fall into each\n category, like 'Passed', 'Failed' etc.\n\n Returns:\n A dictionary with the stats of this test result.\n \"\"\"\n d = {}\n d['Requested'] = len(self.requested)\n d['Executed'] = len(self.executed)\n d['Passed'] = len(self.passed)\n d['Failed'] = len(self.failed)\n d['Skipped'] = len(self.skipped)\n d['Error'] = len(self.error)\n return d\n", "path": "mobly/records.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# 
Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"This module is where all the record definitions and record containers live.\n\"\"\"\n\nimport json\nimport logging\nimport pprint\nimport sys\nimport traceback\n\nfrom mobly import signals\nfrom mobly import utils\n\n\nclass TestResultEnums(object):\n \"\"\"Enums used for TestResultRecord class.\n\n Includes the tokens to mark test result with, and the string names for each\n field in TestResultRecord.\n \"\"\"\n\n RECORD_NAME = 'Test Name'\n RECORD_CLASS = 'Test Class'\n RECORD_BEGIN_TIME = 'Begin Time'\n RECORD_END_TIME = 'End Time'\n RECORD_RESULT = 'Result'\n RECORD_UID = 'UID'\n RECORD_EXTRAS = 'Extras'\n RECORD_EXTRA_ERRORS = 'Extra Errors'\n RECORD_DETAILS = 'Details'\n RECORD_STACKTRACE = 'Stacktrace'\n TEST_RESULT_PASS = 'PASS'\n TEST_RESULT_FAIL = 'FAIL'\n TEST_RESULT_SKIP = 'SKIP'\n TEST_RESULT_ERROR = 'ERROR'\n\n\nclass TestResultRecord(object):\n \"\"\"A record that holds the information of a test execution.\n\n Attributes:\n test_name: A string representing the name of the test method.\n begin_time: Epoch timestamp of when the test started.\n end_time: Epoch timestamp of when the test ended.\n self.uid: Unique identifier of a test.\n self.result: Test result, PASS/FAIL/SKIP.\n self.extras: User defined extra information of the test result.\n self.details: A string explaining the details of the test.\n \"\"\"\n\n def __init__(self, t_name, t_class=None):\n self.test_name = t_name\n self.test_class = t_class\n self.begin_time = None\n self.end_time = None\n self.uid = None\n self.result = None\n self.extras = None\n self.details = None\n self.stacktrace = None\n self.extra_errors = {}\n\n def test_begin(self):\n \"\"\"Call this when the test begins execution.\n\n Sets the begin_time of this record.\n \"\"\"\n self.begin_time = utils.get_current_epoch_time()\n\n def _test_end(self, result, e):\n \"\"\"Class internal function to signal the end of a test execution.\n\n Args:\n result: One of the TEST_RESULT enums in TestResultEnums.\n e: A test termination signal (usually an exception object). 
It can\n be any exception instance or of any subclass of\n mobly.signals.TestSignal.\n \"\"\"\n self.end_time = utils.get_current_epoch_time()\n self.result = result\n if self.extra_errors:\n self.result = TestResultEnums.TEST_RESULT_ERROR\n if isinstance(e, signals.TestSignal):\n self.details = e.details\n _, _, exc_traceback = sys.exc_info()\n if exc_traceback:\n self.stacktrace = ''.join(traceback.format_tb(exc_traceback))\n self.extras = e.extras\n elif isinstance(e, Exception):\n self.details = str(e)\n _, _, exc_traceback = sys.exc_info()\n if exc_traceback:\n self.stacktrace = ''.join(traceback.format_tb(exc_traceback))\n\n def test_pass(self, e=None):\n \"\"\"To mark the test as passed in this record.\n\n Args:\n e: An instance of mobly.signals.TestPass.\n \"\"\"\n self._test_end(TestResultEnums.TEST_RESULT_PASS, e)\n\n def test_fail(self, e=None):\n \"\"\"To mark the test as failed in this record.\n\n Only test_fail does instance check because we want 'assert xxx' to also\n fail the test same way assert_true does.\n\n Args:\n e: An exception object. It can be an instance of AssertionError or\n mobly.base_test.TestFailure.\n \"\"\"\n self._test_end(TestResultEnums.TEST_RESULT_FAIL, e)\n\n def test_skip(self, e=None):\n \"\"\"To mark the test as skipped in this record.\n\n Args:\n e: An instance of mobly.signals.TestSkip.\n \"\"\"\n self._test_end(TestResultEnums.TEST_RESULT_SKIP, e)\n\n def test_error(self, e=None):\n \"\"\"To mark the test as error in this record.\n\n Args:\n e: An exception object.\n \"\"\"\n self._test_end(TestResultEnums.TEST_RESULT_ERROR, e)\n\n def add_error(self, tag, e):\n \"\"\"Add extra error happened during a test mark the test result as\n ERROR.\n\n If an error is added the test record, the record's result is equivalent\n to the case where an uncaught exception happened.\n\n Args:\n tag: A string describing where this error came from, e.g. 
'on_pass'.\n e: An exception object.\n \"\"\"\n self.result = TestResultEnums.TEST_RESULT_ERROR\n self.extra_errors[tag] = str(e)\n\n def __str__(self):\n d = self.to_dict()\n l = ['%s = %s' % (k, v) for k, v in d.items()]\n s = ', '.join(l)\n return s\n\n def __repr__(self):\n \"\"\"This returns a short string representation of the test record.\"\"\"\n t = utils.epoch_to_human_time(self.begin_time)\n return '%s %s %s' % (t, self.test_name, self.result)\n\n def to_dict(self):\n \"\"\"Gets a dictionary representating the content of this class.\n\n Returns:\n A dictionary representating the content of this class.\n \"\"\"\n d = {}\n d[TestResultEnums.RECORD_NAME] = self.test_name\n d[TestResultEnums.RECORD_CLASS] = self.test_class\n d[TestResultEnums.RECORD_BEGIN_TIME] = self.begin_time\n d[TestResultEnums.RECORD_END_TIME] = self.end_time\n d[TestResultEnums.RECORD_RESULT] = self.result\n d[TestResultEnums.RECORD_UID] = self.uid\n d[TestResultEnums.RECORD_EXTRAS] = self.extras\n d[TestResultEnums.RECORD_DETAILS] = self.details\n d[TestResultEnums.RECORD_EXTRA_ERRORS] = self.extra_errors\n d[TestResultEnums.RECORD_STACKTRACE] = self.stacktrace\n return d\n\n def json_str(self):\n \"\"\"Converts this test record to a string in json format.\n\n Format of the json string is:\n {\n 'Test Name': <test name>,\n 'Begin Time': <epoch timestamp>,\n 'Details': <details>,\n ...\n }\n\n Returns:\n A json-format string representing the test record.\n \"\"\"\n return json.dumps(self.to_dict())\n\n\nclass TestResult(object):\n \"\"\"A class that contains metrics of a test run.\n\n This class is essentially a container of TestResultRecord objects.\n\n Attributes:\n self.requested: A list of strings, each is the name of a test requested\n by user.\n self.failed: A list of records for tests failed.\n self.executed: A list of records for tests that were actually executed.\n self.passed: A list of records for tests passed.\n self.skipped: A list of records for tests skipped.\n self.error: A list of records for tests with error result token.\n \"\"\"\n\n def __init__(self):\n self.requested = []\n self.failed = []\n self.executed = []\n self.passed = []\n self.skipped = []\n self.error = []\n self.controller_info = {}\n\n def __add__(self, r):\n \"\"\"Overrides '+' operator for TestResult class.\n\n The add operator merges two TestResult objects by concatenating all of\n their lists together.\n\n Args:\n r: another instance of TestResult to be added\n\n Returns:\n A TestResult instance that's the sum of two TestResult instances.\n \"\"\"\n if not isinstance(r, TestResult):\n raise TypeError('Operand %s of type %s is not a TestResult.' 
%\n (r, type(r)))\n sum_result = TestResult()\n for name in sum_result.__dict__:\n r_value = getattr(r, name)\n l_value = getattr(self, name)\n if isinstance(r_value, list):\n setattr(sum_result, name, l_value + r_value)\n elif isinstance(r_value, dict):\n # '+' operator for TestResult is only valid when multiple\n # TestResult objs were created in the same test run, which means\n # the controller info would be the same across all of them.\n # TODO(angli): have a better way to validate this situation.\n setattr(sum_result, name, l_value)\n return sum_result\n\n def add_record(self, record):\n \"\"\"Adds a test record to test result.\n\n A record is considered executed once it's added to the test result.\n\n Args:\n record: A test record object to add.\n \"\"\"\n if record.result == TestResultEnums.TEST_RESULT_SKIP:\n self.skipped.append(record)\n return\n self.executed.append(record)\n if record.result == TestResultEnums.TEST_RESULT_FAIL:\n self.failed.append(record)\n elif record.result == TestResultEnums.TEST_RESULT_PASS:\n self.passed.append(record)\n else:\n self.error.append(record)\n\n def add_controller_info(self, name, info):\n try:\n json.dumps(info)\n except TypeError:\n logging.warning('Controller info for %s is not JSON serializable!'\n ' Coercing it to string.' % name)\n self.controller_info[name] = str(info)\n return\n self.controller_info[name] = info\n\n def fail_class(self, test_record):\n \"\"\"Add a record to indicate a test class has failed before any test\n could execute.\n\n This is only called before any test is actually executed. So it only\n adds an error entry that describes why the class failed to the tally\n and does not affect the total number of tests requrested or exedcuted.\n\n Args:\n test_record: A TestResultRecord object for the test class.\n \"\"\"\n self.error.append(test_record)\n\n def is_test_executed(self, test_name):\n \"\"\"Checks if a specific test has been executed.\n\n Args:\n test_name: string, the name of the test to check.\n\n Returns:\n True if the test has been executed according to the test result,\n False otherwise.\n \"\"\"\n for record in self.executed:\n if record.test_name == test_name:\n return True\n return False\n\n @property\n def is_all_pass(self):\n \"\"\"True if no tests failed or threw errors, False otherwise.\"\"\"\n num_of_failures = len(self.failed) + len(self.error)\n if num_of_failures == 0:\n return True\n return False\n\n def json_str(self):\n \"\"\"Converts this test result to a string in json format.\n\n Format of the json string is:\n {\n 'Results': [\n {<executed test record 1>},\n {<executed test record 2>},\n ...\n ],\n 'Summary': <summary dict>\n }\n\n Returns:\n A json-format string representing the test results.\n \"\"\"\n d = {}\n d['ControllerInfo'] = self.controller_info\n d['Results'] = [record.to_dict() for record in self.executed]\n d['Summary'] = self.summary_dict()\n json_str = json.dumps(d, indent=4, sort_keys=True)\n return json_str\n\n def summary_str(self):\n \"\"\"Gets a string that summarizes the stats of this test result.\n\n The summary provides the counts of how many tests fall into each\n category, like 'Passed', 'Failed' etc.\n\n Format of the string is:\n Requested <int>, Executed <int>, ...\n\n Returns:\n A summary string of this test result.\n \"\"\"\n l = ['%s %d' % (k, v) for k, v in self.summary_dict().items()]\n # Sort the list so the order is the same every time.\n msg = ', '.join(sorted(l))\n return msg\n\n def summary_dict(self):\n \"\"\"Gets a dictionary that summarizes the stats 
of this test result.\n\n The summary provides the counts of how many tests fall into each\n category, like 'Passed', 'Failed' etc.\n\n Returns:\n A dictionary with the stats of this test result.\n \"\"\"\n d = {}\n d['Requested'] = len(self.requested)\n d['Executed'] = len(self.executed)\n d['Passed'] = len(self.passed)\n d['Failed'] = len(self.failed)\n d['Skipped'] = len(self.skipped)\n d['Error'] = len(self.error)\n return d\n", "path": "mobly/records.py"}]}
| 3,994 | 454 |
gh_patches_debug_24430
|
rasdani/github-patches
|
git_diff
|
opentensor__bittensor-1974
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove nest_asyncio from bittensor to allow uvloop support
### Is your feature request related to a problem? Please describe.
uvloop, which provides superior speed, does not allow loop nesting.
It is also the case that uvloop is pulled in by popular packages, which [forces some subnets to develop hacks to combat this](https://github.com/synapsec-ai/llm-defender-subnet/blob/6c37925c4f34a298607c97dfceebcc01fb74d562/scripts/run_neuron.sh#L140-L146).
And perhaps more importantly, https://github.com/erdewit/nest_asyncio seems to have been abandoned.
### Describe the solution you'd like
Remove nest_asyncio, and let bittensor users decide which asyncio loop they want to run. Perhaps even suggest (not mandate) running uvloop, since it consistently shows better results in benchmarks than the CPython stdlib asyncio loop.
It seems there was some attempt at this in the past (https://github.com/opentensor/bittensor/pull/1501) for some reason (?)
### Describe alternatives you've considered
_No response_
### Additional context
_No response_
--- END ISSUE ---
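Once `nest_asyncio.apply()` no longer runs at import time, the event loop choice is back in the caller's hands. A minimal sketch of how a user could opt into uvloop themselves (assuming `uvloop` is installed separately; it would not be a bittensor dependency):

```python
import asyncio

try:
    import uvloop
    # Route newly created event loops through uvloop's implementation.
    asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
except ImportError:
    pass  # fall back to the stdlib CPython event loop

async def main():
    # The application's async bittensor calls would run here.
    await asyncio.sleep(0)

asyncio.run(main())
```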
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bittensor/__init__.py`
Content:
```
1 # The MIT License (MIT)
2 # Copyright © 2021 Yuma Rao
3 # Copyright © 2022-2023 Opentensor Foundation
4 # Copyright © 2023 Opentensor Technologies Inc
5
6 # Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
7 # documentation files (the “Software”), to deal in the Software without restriction, including without limitation
8 # the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
9 # and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
10
11 # The above copyright notice and this permission notice shall be included in all copies or substantial portions of
12 # the Software.
13
14 # THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
15 # THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
16 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
17 # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
18 # DEALINGS IN THE SOFTWARE.
19
20 from rich.console import Console
21 from rich.traceback import install
22
23 # Install and apply nest asyncio to allow the async functions
24 # to run in a .ipynb
25 import nest_asyncio
26
27 nest_asyncio.apply()
28
29 # Bittensor code and protocol version.
30 __version__ = "7.0.0"
31
32 version_split = __version__.split(".")
33 __version_as_int__: int = (
34 (100 * int(version_split[0]))
35 + (10 * int(version_split[1]))
36 + (1 * int(version_split[2]))
37 )
38 __new_signature_version__ = 360
39
40 # Rich console.
41 __console__ = Console()
42 __use_console__ = True
43
44 # Remove overdue locals in debug training.
45 install(show_locals=False)
46
47
48 def turn_console_off():
49 global __use_console__
50 global __console__
51 from io import StringIO
52
53 __use_console__ = False
54 __console__ = Console(file=StringIO(), stderr=False)
55
56
57 def turn_console_on():
58 global __use_console__
59 global __console__
60 __use_console__ = True
61 __console__ = Console()
62
63
64 turn_console_off()
65
66
67 # Logging helpers.
68 def trace(on: bool = True):
69 logging.set_trace(on)
70
71
72 def debug(on: bool = True):
73 logging.set_debug(on)
74
75
76 # Substrate chain block time (seconds).
77 __blocktime__ = 12
78
79 # Pip address for versioning
80 __pipaddress__ = "https://pypi.org/pypi/bittensor/json"
81
82 # Raw GitHub url for delegates registry file
83 __delegates_details_url__: str = "https://raw.githubusercontent.com/opentensor/bittensor-delegates/main/public/delegates.json"
84
85 # Substrate ss58_format
86 __ss58_format__ = 42
87
88 # Wallet ss58 address length
89 __ss58_address_length__ = 48
90
91 __networks__ = ["local", "finney", "test", "archive"]
92
93 __finney_entrypoint__ = "wss://entrypoint-finney.opentensor.ai:443"
94
95 __finney_test_entrypoint__ = "wss://test.finney.opentensor.ai:443/"
96
97 __archive_entrypoint__ = "wss://archive.chain.opentensor.ai:443/"
98
99 # Needs to use wss://
100 __bellagene_entrypoint__ = "wss://parachain.opentensor.ai:443"
101
102 __local_entrypoint__ = "ws://127.0.0.1:9944"
103
104 __tao_symbol__: str = chr(0x03C4)
105
106 __rao_symbol__: str = chr(0x03C1)
107
108 # Block Explorers map network to explorer url
109 # Must all be polkadotjs explorer urls
110 __network_explorer_map__ = {
111 "opentensor": {
112 "local": "https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fentrypoint-finney.opentensor.ai%3A443#/explorer",
113 "endpoint": "https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fentrypoint-finney.opentensor.ai%3A443#/explorer",
114 "finney": "https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fentrypoint-finney.opentensor.ai%3A443#/explorer",
115 },
116 "taostats": {
117 "local": "https://x.taostats.io",
118 "endpoint": "https://x.taostats.io",
119 "finney": "https://x.taostats.io",
120 },
121 }
122
123 # --- Type Registry ---
124 __type_registry__ = {
125 "types": {
126 "Balance": "u64", # Need to override default u128
127 },
128 "runtime_api": {
129 "NeuronInfoRuntimeApi": {
130 "methods": {
131 "get_neuron_lite": {
132 "params": [
133 {
134 "name": "netuid",
135 "type": "u16",
136 },
137 {
138 "name": "uid",
139 "type": "u16",
140 },
141 ],
142 "type": "Vec<u8>",
143 },
144 "get_neurons_lite": {
145 "params": [
146 {
147 "name": "netuid",
148 "type": "u16",
149 },
150 ],
151 "type": "Vec<u8>",
152 },
153 }
154 },
155 "StakeInfoRuntimeApi": {
156 "methods": {
157 "get_stake_info_for_coldkey": {
158 "params": [
159 {
160 "name": "coldkey_account_vec",
161 "type": "Vec<u8>",
162 },
163 ],
164 "type": "Vec<u8>",
165 },
166 "get_stake_info_for_coldkeys": {
167 "params": [
168 {
169 "name": "coldkey_account_vecs",
170 "type": "Vec<Vec<u8>>",
171 },
172 ],
173 "type": "Vec<u8>",
174 },
175 },
176 },
177 "ValidatorIPRuntimeApi": {
178 "methods": {
179 "get_associated_validator_ip_info_for_subnet": {
180 "params": [
181 {
182 "name": "netuid",
183 "type": "u16",
184 },
185 ],
186 "type": "Vec<u8>",
187 },
188 },
189 },
190 "SubnetInfoRuntimeApi": {
191 "methods": {
192 "get_subnet_hyperparams": {
193 "params": [
194 {
195 "name": "netuid",
196 "type": "u16",
197 },
198 ],
199 "type": "Vec<u8>",
200 }
201 }
202 },
203 "SubnetRegistrationRuntimeApi": {
204 "methods": {"get_network_registration_cost": {"params": [], "type": "u64"}}
205 },
206 },
207 }
208
209 from .errors import (
210 BlacklistedException,
211 ChainConnectionError,
212 ChainError,
213 ChainQueryError,
214 ChainTransactionError,
215 IdentityError,
216 InternalServerError,
217 InvalidRequestNameError,
218 KeyFileError,
219 MetadataError,
220 NominationError,
221 NotDelegateError,
222 NotRegisteredError,
223 NotVerifiedException,
224 PostProcessException,
225 PriorityException,
226 RegistrationError,
227 RunException,
228 StakeError,
229 SynapseDendriteNoneException,
230 SynapseParsingError,
231 TransferError,
232 UnknownSynapseError,
233 UnstakeError,
234 )
235
236 from substrateinterface import Keypair # noqa: F401
237 from .config import InvalidConfigFile, DefaultConfig, config, T
238 from .keyfile import (
239 serialized_keypair_to_keyfile_data,
240 deserialize_keypair_from_keyfile_data,
241 validate_password,
242 ask_password_to_encrypt,
243 keyfile_data_is_encrypted_nacl,
244 keyfile_data_is_encrypted_ansible,
245 keyfile_data_is_encrypted_legacy,
246 keyfile_data_is_encrypted,
247 keyfile_data_encryption_method,
248 legacy_encrypt_keyfile_data,
249 encrypt_keyfile_data,
250 get_coldkey_password_from_environment,
251 decrypt_keyfile_data,
252 keyfile,
253 Mockkeyfile,
254 )
255 from .wallet import display_mnemonic_msg, wallet
256
257 from .utils import (
258 ss58_to_vec_u8,
259 unbiased_topk,
260 version_checking,
261 strtobool,
262 strtobool_with_default,
263 get_explorer_root_url_by_network_from_map,
264 get_explorer_root_url_by_network_from_map,
265 get_explorer_url_for_network,
266 ss58_address_to_bytes,
267 U16_NORMALIZED_FLOAT,
268 U64_NORMALIZED_FLOAT,
269 u8_key_to_ss58,
270 hash,
271 wallet_utils,
272 )
273
274 from .utils.balance import Balance as Balance
275 from .chain_data import (
276 AxonInfo,
277 NeuronInfo,
278 NeuronInfoLite,
279 PrometheusInfo,
280 DelegateInfo,
281 DelegateInfoLite,
282 StakeInfo,
283 SubnetInfo,
284 SubnetHyperparameters,
285 IPInfo,
286 ProposalCallData,
287 ProposalVoteData,
288 )
289
290 # Allows avoiding name spacing conflicts and continue access to the `subtensor` module with `subtensor_module` name
291 from . import subtensor as subtensor_module
292
293 # Double import allows using class `Subtensor` by referencing `bittensor.Subtensor` and `bittensor.subtensor`.
294 # This will be available for a while until we remove reference `bittensor.subtensor`
295 from .subtensor import Subtensor
296 from .subtensor import Subtensor as subtensor
297
298 from .cli import cli as cli, COMMANDS as ALL_COMMANDS
299 from .btlogging import logging
300 from .metagraph import metagraph as metagraph
301 from .threadpool import PriorityThreadPoolExecutor as PriorityThreadPoolExecutor
302
303 from .synapse import TerminalInfo, Synapse
304 from .stream import StreamingSynapse
305 from .tensor import tensor, Tensor
306 from .axon import axon as axon
307 from .dendrite import dendrite as dendrite
308
309 from .mock.keyfile_mock import MockKeyfile as MockKeyfile
310 from .mock.subtensor_mock import MockSubtensor as MockSubtensor
311 from .mock.wallet_mock import MockWallet as MockWallet
312
313 from .subnets import SubnetsAPI as SubnetsAPI
314
315 configs = [
316 axon.config(),
317 subtensor.config(),
318 PriorityThreadPoolExecutor.config(),
319 wallet.config(),
320 logging.get_config(),
321 ]
322 defaults = config.merge_all(configs)
323
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bittensor/__init__.py b/bittensor/__init__.py
--- a/bittensor/__init__.py
+++ b/bittensor/__init__.py
@@ -16,15 +16,28 @@
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
+import os
+import warnings
from rich.console import Console
from rich.traceback import install
-# Install and apply nest asyncio to allow the async functions
-# to run in a .ipynb
-import nest_asyncio
-nest_asyncio.apply()
+if (NEST_ASYNCIO_ENV := os.getenv("NEST_ASYNCIO")) in ("1", None):
+ if NEST_ASYNCIO_ENV is None:
+ warnings.warn(
+ "NEST_ASYNCIO implicitly set to '1'. In the future, the default value will be '0'."
+ "If you use `nest_asyncio` make sure to add it explicitly to your project dependencies,"
+ "as it will be removed from `bittensor` package dependencies in the future."
+ "To silence this warning, explicitly set the environment variable, e.g. `export NEST_ASYNCIO=0`.",
+ DeprecationWarning,
+ )
+ # Install and apply nest asyncio to allow the async functions
+ # to run in a .ipynb
+ import nest_asyncio
+
+ nest_asyncio.apply()
+
# Bittensor code and protocol version.
__version__ = "7.0.0"
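The patch above gates the `nest_asyncio` patching behind a `NEST_ASYNCIO` environment variable. Below is a minimal usage sketch of the opt-out path it introduces; the variable name and accepted values are taken from the diff, while setting it from Python before the import (rather than `export NEST_ASYNCIO=0` in the shell, as the warning text suggests) is just an illustrative assumption.

```python
# Sketch: opt out of nest_asyncio patching before importing bittensor.
# Per the gated block in the diff: "1" applies nest_asyncio silently,
# unset applies it with a DeprecationWarning, anything else (e.g. "0") skips it.
import os

os.environ["NEST_ASYNCIO"] = "0"

import bittensor  # noqa: E402  (imported after the variable is set on purpose)
```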
|
{"golden_diff": "diff --git a/bittensor/__init__.py b/bittensor/__init__.py\n--- a/bittensor/__init__.py\n+++ b/bittensor/__init__.py\n@@ -16,15 +16,28 @@\n # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n # DEALINGS IN THE SOFTWARE.\n+import os\n+import warnings\n \n from rich.console import Console\n from rich.traceback import install\n \n-# Install and apply nest asyncio to allow the async functions\n-# to run in a .ipynb\n-import nest_asyncio\n \n-nest_asyncio.apply()\n+if (NEST_ASYNCIO_ENV := os.getenv(\"NEST_ASYNCIO\")) in (\"1\", None):\n+ if NEST_ASYNCIO_ENV is None:\n+ warnings.warn(\n+ \"NEST_ASYNCIO implicitly set to '1'. In the future, the default value will be '0'.\"\n+ \"If you use `nest_asyncio` make sure to add it explicitly to your project dependencies,\"\n+ \"as it will be removed from `bittensor` package dependencies in the future.\"\n+ \"To silence this warning, explicitly set the environment variable, e.g. `export NEST_ASYNCIO=0`.\",\n+ DeprecationWarning,\n+ )\n+ # Install and apply nest asyncio to allow the async functions\n+ # to run in a .ipynb\n+ import nest_asyncio\n+\n+ nest_asyncio.apply()\n+\n \n # Bittensor code and protocol version.\n __version__ = \"7.0.0\"\n", "issue": "Remove nest_asyncio from bittensor to allow uvloop support\n### Is your feature request related to a problem? Please describe.\r\n\r\nUvloop, which provides supperior speed, does not allow loop nesting.\r\n\r\nIt is also the case that uvloop is pulled in by popular packages, which [forces some subnets develop hacks to combat this](https://github.com/synapsec-ai/llm-defender-subnet/blob/6c37925c4f34a298607c97dfceebcc01fb74d562/scripts/run_neuron.sh#L140-L146).\r\n\r\nAnd perhaps more importantly, https://github.com/erdewit/nest_asyncio seems to have been abandodend \r\n\r\n### Describe the solution you'd like\r\n\r\nRemove nest_asyncio, and let bittensor users decide which asyncio loop they want to run. Perhaps even suggest (not mandate) running uvloop, since it consistently shows better results in benchmarks than CPython asyncio stdlib loop.\r\n\r\nSeems like there was some attempt of this in the past https://github.com/opentensor/bittensor/pull/1501 for some reason (?) \r\n\r\n### Describe alternatives you've considered\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "# The MIT License (MIT)\n# Copyright \u00a9 2021 Yuma Rao\n# Copyright \u00a9 2022-2023 Opentensor Foundation\n# Copyright \u00a9 2023 Opentensor Technologies Inc\n\n# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated\n# documentation files (the \u201cSoftware\u201d), to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,\n# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\n# The above copyright notice and this permission notice shall be included in all copies or substantial portions of\n# the Software.\n\n# THE SOFTWARE IS PROVIDED \u201cAS IS\u201d, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO\n# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom rich.console import Console\nfrom rich.traceback import install\n\n# Install and apply nest asyncio to allow the async functions\n# to run in a .ipynb\nimport nest_asyncio\n\nnest_asyncio.apply()\n\n# Bittensor code and protocol version.\n__version__ = \"7.0.0\"\n\nversion_split = __version__.split(\".\")\n__version_as_int__: int = (\n (100 * int(version_split[0]))\n + (10 * int(version_split[1]))\n + (1 * int(version_split[2]))\n)\n__new_signature_version__ = 360\n\n# Rich console.\n__console__ = Console()\n__use_console__ = True\n\n# Remove overdue locals in debug training.\ninstall(show_locals=False)\n\n\ndef turn_console_off():\n global __use_console__\n global __console__\n from io import StringIO\n\n __use_console__ = False\n __console__ = Console(file=StringIO(), stderr=False)\n\n\ndef turn_console_on():\n global __use_console__\n global __console__\n __use_console__ = True\n __console__ = Console()\n\n\nturn_console_off()\n\n\n# Logging helpers.\ndef trace(on: bool = True):\n logging.set_trace(on)\n\n\ndef debug(on: bool = True):\n logging.set_debug(on)\n\n\n# Substrate chain block time (seconds).\n__blocktime__ = 12\n\n# Pip address for versioning\n__pipaddress__ = \"https://pypi.org/pypi/bittensor/json\"\n\n# Raw GitHub url for delegates registry file\n__delegates_details_url__: str = \"https://raw.githubusercontent.com/opentensor/bittensor-delegates/main/public/delegates.json\"\n\n# Substrate ss58_format\n__ss58_format__ = 42\n\n# Wallet ss58 address length\n__ss58_address_length__ = 48\n\n__networks__ = [\"local\", \"finney\", \"test\", \"archive\"]\n\n__finney_entrypoint__ = \"wss://entrypoint-finney.opentensor.ai:443\"\n\n__finney_test_entrypoint__ = \"wss://test.finney.opentensor.ai:443/\"\n\n__archive_entrypoint__ = \"wss://archive.chain.opentensor.ai:443/\"\n\n# Needs to use wss://\n__bellagene_entrypoint__ = \"wss://parachain.opentensor.ai:443\"\n\n__local_entrypoint__ = \"ws://127.0.0.1:9944\"\n\n__tao_symbol__: str = chr(0x03C4)\n\n__rao_symbol__: str = chr(0x03C1)\n\n# Block Explorers map network to explorer url\n# Must all be polkadotjs explorer urls\n__network_explorer_map__ = {\n \"opentensor\": {\n \"local\": \"https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fentrypoint-finney.opentensor.ai%3A443#/explorer\",\n \"endpoint\": \"https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fentrypoint-finney.opentensor.ai%3A443#/explorer\",\n \"finney\": \"https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fentrypoint-finney.opentensor.ai%3A443#/explorer\",\n },\n \"taostats\": {\n \"local\": \"https://x.taostats.io\",\n \"endpoint\": \"https://x.taostats.io\",\n \"finney\": \"https://x.taostats.io\",\n },\n}\n\n# --- Type Registry ---\n__type_registry__ = {\n \"types\": {\n \"Balance\": \"u64\", # Need to override default u128\n },\n \"runtime_api\": {\n \"NeuronInfoRuntimeApi\": {\n \"methods\": {\n \"get_neuron_lite\": {\n \"params\": [\n {\n \"name\": \"netuid\",\n \"type\": \"u16\",\n },\n {\n \"name\": \"uid\",\n \"type\": \"u16\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n \"get_neurons_lite\": {\n \"params\": [\n {\n \"name\": \"netuid\",\n \"type\": \"u16\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n }\n },\n \"StakeInfoRuntimeApi\": {\n \"methods\": {\n \"get_stake_info_for_coldkey\": {\n \"params\": [\n {\n 
\"name\": \"coldkey_account_vec\",\n \"type\": \"Vec<u8>\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n \"get_stake_info_for_coldkeys\": {\n \"params\": [\n {\n \"name\": \"coldkey_account_vecs\",\n \"type\": \"Vec<Vec<u8>>\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n },\n },\n \"ValidatorIPRuntimeApi\": {\n \"methods\": {\n \"get_associated_validator_ip_info_for_subnet\": {\n \"params\": [\n {\n \"name\": \"netuid\",\n \"type\": \"u16\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n },\n },\n \"SubnetInfoRuntimeApi\": {\n \"methods\": {\n \"get_subnet_hyperparams\": {\n \"params\": [\n {\n \"name\": \"netuid\",\n \"type\": \"u16\",\n },\n ],\n \"type\": \"Vec<u8>\",\n }\n }\n },\n \"SubnetRegistrationRuntimeApi\": {\n \"methods\": {\"get_network_registration_cost\": {\"params\": [], \"type\": \"u64\"}}\n },\n },\n}\n\nfrom .errors import (\n BlacklistedException,\n ChainConnectionError,\n ChainError,\n ChainQueryError,\n ChainTransactionError,\n IdentityError,\n InternalServerError,\n InvalidRequestNameError,\n KeyFileError,\n MetadataError,\n NominationError,\n NotDelegateError,\n NotRegisteredError,\n NotVerifiedException,\n PostProcessException,\n PriorityException,\n RegistrationError,\n RunException,\n StakeError,\n SynapseDendriteNoneException,\n SynapseParsingError,\n TransferError,\n UnknownSynapseError,\n UnstakeError,\n)\n\nfrom substrateinterface import Keypair # noqa: F401\nfrom .config import InvalidConfigFile, DefaultConfig, config, T\nfrom .keyfile import (\n serialized_keypair_to_keyfile_data,\n deserialize_keypair_from_keyfile_data,\n validate_password,\n ask_password_to_encrypt,\n keyfile_data_is_encrypted_nacl,\n keyfile_data_is_encrypted_ansible,\n keyfile_data_is_encrypted_legacy,\n keyfile_data_is_encrypted,\n keyfile_data_encryption_method,\n legacy_encrypt_keyfile_data,\n encrypt_keyfile_data,\n get_coldkey_password_from_environment,\n decrypt_keyfile_data,\n keyfile,\n Mockkeyfile,\n)\nfrom .wallet import display_mnemonic_msg, wallet\n\nfrom .utils import (\n ss58_to_vec_u8,\n unbiased_topk,\n version_checking,\n strtobool,\n strtobool_with_default,\n get_explorer_root_url_by_network_from_map,\n get_explorer_root_url_by_network_from_map,\n get_explorer_url_for_network,\n ss58_address_to_bytes,\n U16_NORMALIZED_FLOAT,\n U64_NORMALIZED_FLOAT,\n u8_key_to_ss58,\n hash,\n wallet_utils,\n)\n\nfrom .utils.balance import Balance as Balance\nfrom .chain_data import (\n AxonInfo,\n NeuronInfo,\n NeuronInfoLite,\n PrometheusInfo,\n DelegateInfo,\n DelegateInfoLite,\n StakeInfo,\n SubnetInfo,\n SubnetHyperparameters,\n IPInfo,\n ProposalCallData,\n ProposalVoteData,\n)\n\n# Allows avoiding name spacing conflicts and continue access to the `subtensor` module with `subtensor_module` name\nfrom . 
import subtensor as subtensor_module\n\n# Double import allows using class `Subtensor` by referencing `bittensor.Subtensor` and `bittensor.subtensor`.\n# This will be available for a while until we remove reference `bittensor.subtensor`\nfrom .subtensor import Subtensor\nfrom .subtensor import Subtensor as subtensor\n\nfrom .cli import cli as cli, COMMANDS as ALL_COMMANDS\nfrom .btlogging import logging\nfrom .metagraph import metagraph as metagraph\nfrom .threadpool import PriorityThreadPoolExecutor as PriorityThreadPoolExecutor\n\nfrom .synapse import TerminalInfo, Synapse\nfrom .stream import StreamingSynapse\nfrom .tensor import tensor, Tensor\nfrom .axon import axon as axon\nfrom .dendrite import dendrite as dendrite\n\nfrom .mock.keyfile_mock import MockKeyfile as MockKeyfile\nfrom .mock.subtensor_mock import MockSubtensor as MockSubtensor\nfrom .mock.wallet_mock import MockWallet as MockWallet\n\nfrom .subnets import SubnetsAPI as SubnetsAPI\n\nconfigs = [\n axon.config(),\n subtensor.config(),\n PriorityThreadPoolExecutor.config(),\n wallet.config(),\n logging.get_config(),\n]\ndefaults = config.merge_all(configs)\n", "path": "bittensor/__init__.py"}], "after_files": [{"content": "# The MIT License (MIT)\n# Copyright \u00a9 2021 Yuma Rao\n# Copyright \u00a9 2022-2023 Opentensor Foundation\n# Copyright \u00a9 2023 Opentensor Technologies Inc\n\n# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated\n# documentation files (the \u201cSoftware\u201d), to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,\n# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\n# The above copyright notice and this permission notice shall be included in all copies or substantial portions of\n# the Software.\n\n# THE SOFTWARE IS PROVIDED \u201cAS IS\u201d, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO\n# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\nimport os\nimport warnings\n\nfrom rich.console import Console\nfrom rich.traceback import install\n\n\nif (NEST_ASYNCIO_ENV := os.getenv(\"NEST_ASYNCIO\")) in (\"1\", None):\n if NEST_ASYNCIO_ENV is None:\n warnings.warn(\n \"NEST_ASYNCIO implicitly set to '1'. In the future, the default value will be '0'.\"\n \"If you use `nest_asyncio` make sure to add it explicitly to your project dependencies,\"\n \"as it will be removed from `bittensor` package dependencies in the future.\"\n \"To silence this warning, explicitly set the environment variable, e.g. 
`export NEST_ASYNCIO=0`.\",\n DeprecationWarning,\n )\n # Install and apply nest asyncio to allow the async functions\n # to run in a .ipynb\n import nest_asyncio\n\n nest_asyncio.apply()\n\n\n# Bittensor code and protocol version.\n__version__ = \"7.0.0\"\n\nversion_split = __version__.split(\".\")\n__version_as_int__: int = (\n (100 * int(version_split[0]))\n + (10 * int(version_split[1]))\n + (1 * int(version_split[2]))\n)\n__new_signature_version__ = 360\n\n# Rich console.\n__console__ = Console()\n__use_console__ = True\n\n# Remove overdue locals in debug training.\ninstall(show_locals=False)\n\n\ndef turn_console_off():\n global __use_console__\n global __console__\n from io import StringIO\n\n __use_console__ = False\n __console__ = Console(file=StringIO(), stderr=False)\n\n\ndef turn_console_on():\n global __use_console__\n global __console__\n __use_console__ = True\n __console__ = Console()\n\n\nturn_console_off()\n\n\n# Logging helpers.\ndef trace(on: bool = True):\n logging.set_trace(on)\n\n\ndef debug(on: bool = True):\n logging.set_debug(on)\n\n\n# Substrate chain block time (seconds).\n__blocktime__ = 12\n\n# Pip address for versioning\n__pipaddress__ = \"https://pypi.org/pypi/bittensor/json\"\n\n# Raw GitHub url for delegates registry file\n__delegates_details_url__: str = \"https://raw.githubusercontent.com/opentensor/bittensor-delegates/main/public/delegates.json\"\n\n# Substrate ss58_format\n__ss58_format__ = 42\n\n# Wallet ss58 address length\n__ss58_address_length__ = 48\n\n__networks__ = [\"local\", \"finney\", \"test\", \"archive\"]\n\n__finney_entrypoint__ = \"wss://entrypoint-finney.opentensor.ai:443\"\n\n__finney_test_entrypoint__ = \"wss://test.finney.opentensor.ai:443/\"\n\n__archive_entrypoint__ = \"wss://archive.chain.opentensor.ai:443/\"\n\n# Needs to use wss://\n__bellagene_entrypoint__ = \"wss://parachain.opentensor.ai:443\"\n\n__local_entrypoint__ = \"ws://127.0.0.1:9944\"\n\n__tao_symbol__: str = chr(0x03C4)\n\n__rao_symbol__: str = chr(0x03C1)\n\n# Block Explorers map network to explorer url\n# Must all be polkadotjs explorer urls\n__network_explorer_map__ = {\n \"opentensor\": {\n \"local\": \"https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fentrypoint-finney.opentensor.ai%3A443#/explorer\",\n \"endpoint\": \"https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fentrypoint-finney.opentensor.ai%3A443#/explorer\",\n \"finney\": \"https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fentrypoint-finney.opentensor.ai%3A443#/explorer\",\n },\n \"taostats\": {\n \"local\": \"https://x.taostats.io\",\n \"endpoint\": \"https://x.taostats.io\",\n \"finney\": \"https://x.taostats.io\",\n },\n}\n\n# --- Type Registry ---\n__type_registry__ = {\n \"types\": {\n \"Balance\": \"u64\", # Need to override default u128\n },\n \"runtime_api\": {\n \"NeuronInfoRuntimeApi\": {\n \"methods\": {\n \"get_neuron_lite\": {\n \"params\": [\n {\n \"name\": \"netuid\",\n \"type\": \"u16\",\n },\n {\n \"name\": \"uid\",\n \"type\": \"u16\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n \"get_neurons_lite\": {\n \"params\": [\n {\n \"name\": \"netuid\",\n \"type\": \"u16\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n }\n },\n \"StakeInfoRuntimeApi\": {\n \"methods\": {\n \"get_stake_info_for_coldkey\": {\n \"params\": [\n {\n \"name\": \"coldkey_account_vec\",\n \"type\": \"Vec<u8>\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n \"get_stake_info_for_coldkeys\": {\n \"params\": [\n {\n \"name\": \"coldkey_account_vecs\",\n \"type\": \"Vec<Vec<u8>>\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n },\n },\n 
\"ValidatorIPRuntimeApi\": {\n \"methods\": {\n \"get_associated_validator_ip_info_for_subnet\": {\n \"params\": [\n {\n \"name\": \"netuid\",\n \"type\": \"u16\",\n },\n ],\n \"type\": \"Vec<u8>\",\n },\n },\n },\n \"SubnetInfoRuntimeApi\": {\n \"methods\": {\n \"get_subnet_hyperparams\": {\n \"params\": [\n {\n \"name\": \"netuid\",\n \"type\": \"u16\",\n },\n ],\n \"type\": \"Vec<u8>\",\n }\n }\n },\n \"SubnetRegistrationRuntimeApi\": {\n \"methods\": {\"get_network_registration_cost\": {\"params\": [], \"type\": \"u64\"}}\n },\n },\n}\n\nfrom .errors import (\n BlacklistedException,\n ChainConnectionError,\n ChainError,\n ChainQueryError,\n ChainTransactionError,\n IdentityError,\n InternalServerError,\n InvalidRequestNameError,\n KeyFileError,\n MetadataError,\n NominationError,\n NotDelegateError,\n NotRegisteredError,\n NotVerifiedException,\n PostProcessException,\n PriorityException,\n RegistrationError,\n RunException,\n StakeError,\n SynapseDendriteNoneException,\n SynapseParsingError,\n TransferError,\n UnknownSynapseError,\n UnstakeError,\n)\n\nfrom substrateinterface import Keypair # noqa: F401\nfrom .config import InvalidConfigFile, DefaultConfig, config, T\nfrom .keyfile import (\n serialized_keypair_to_keyfile_data,\n deserialize_keypair_from_keyfile_data,\n validate_password,\n ask_password_to_encrypt,\n keyfile_data_is_encrypted_nacl,\n keyfile_data_is_encrypted_ansible,\n keyfile_data_is_encrypted_legacy,\n keyfile_data_is_encrypted,\n keyfile_data_encryption_method,\n legacy_encrypt_keyfile_data,\n encrypt_keyfile_data,\n get_coldkey_password_from_environment,\n decrypt_keyfile_data,\n keyfile,\n Mockkeyfile,\n)\nfrom .wallet import display_mnemonic_msg, wallet\n\nfrom .utils import (\n ss58_to_vec_u8,\n unbiased_topk,\n version_checking,\n strtobool,\n strtobool_with_default,\n get_explorer_root_url_by_network_from_map,\n get_explorer_root_url_by_network_from_map,\n get_explorer_url_for_network,\n ss58_address_to_bytes,\n U16_NORMALIZED_FLOAT,\n U64_NORMALIZED_FLOAT,\n u8_key_to_ss58,\n hash,\n wallet_utils,\n)\n\nfrom .utils.balance import Balance as Balance\nfrom .chain_data import (\n AxonInfo,\n NeuronInfo,\n NeuronInfoLite,\n PrometheusInfo,\n DelegateInfo,\n DelegateInfoLite,\n StakeInfo,\n SubnetInfo,\n SubnetHyperparameters,\n IPInfo,\n ProposalCallData,\n ProposalVoteData,\n)\n\n# Allows avoiding name spacing conflicts and continue access to the `subtensor` module with `subtensor_module` name\nfrom . 
import subtensor as subtensor_module\n\n# Double import allows using class `Subtensor` by referencing `bittensor.Subtensor` and `bittensor.subtensor`.\n# This will be available for a while until we remove reference `bittensor.subtensor`\nfrom .subtensor import Subtensor\nfrom .subtensor import Subtensor as subtensor\n\nfrom .cli import cli as cli, COMMANDS as ALL_COMMANDS\nfrom .btlogging import logging\nfrom .metagraph import metagraph as metagraph\nfrom .threadpool import PriorityThreadPoolExecutor as PriorityThreadPoolExecutor\n\nfrom .synapse import TerminalInfo, Synapse\nfrom .stream import StreamingSynapse\nfrom .tensor import tensor, Tensor\nfrom .axon import axon as axon\nfrom .dendrite import dendrite as dendrite\n\nfrom .mock.keyfile_mock import MockKeyfile as MockKeyfile\nfrom .mock.subtensor_mock import MockSubtensor as MockSubtensor\nfrom .mock.wallet_mock import MockWallet as MockWallet\n\nfrom .subnets import SubnetsAPI as SubnetsAPI\n\nconfigs = [\n axon.config(),\n subtensor.config(),\n PriorityThreadPoolExecutor.config(),\n wallet.config(),\n logging.get_config(),\n]\ndefaults = config.merge_all(configs)\n", "path": "bittensor/__init__.py"}]}
| 3,711 | 370 |
gh_patches_debug_23730
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1276
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Prevent event registrations from being cancelled when the registration is paid (or when you were present)
### Is your feature request related to a problem? Please describe.
Currently it is technically possible to pay for an event and afterwards cancel your registration. That should not be possible (at least not by users themselves). This was never a real problem in practice, but now with Thalia Pay it becomes one.
### Describe the solution you'd like
When a payment exists for an event registration, don't allow cancelling (a minimal sketch of this check follows the issue block). Also, when paying with Thalia Pay we might want to make it explicit that after paying you can no longer cancel (except perhaps by contacting the board).
### Motivation
### Describe alternatives you've considered
- Only really creating the payment at the date of the event → too complex (code)
- As proposed here, but do allow cancellation when the payment is a TPay payment that is not processed yet → too complex (for users)
- Keeping things as they are (allow cancelling and accept that people may pay without attending - it could be considered a feature) → felt undesirable, even without TPay
### Additional context
--- END ISSUE ---
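A minimal sketch of the cancellation guard the issue asks for, using field names from the files below (`date_cancelled`, `cancellation_allowed`, `payment`); it only illustrates the requested behaviour and is not presented as the project's final patch.

```python
# Sketch only: cancellation is allowed just while no payment is attached yet.
# Names follow website/events/services.py as shown below.
def may_cancel(registration, event, name=None):
    return (
        registration is not None
        and registration.date_cancelled is None
        and (event.cancellation_allowed or name)
        and registration.payment is None  # new condition requested by the issue
    )
```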
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/events/services.py`
Content:
```
1 from collections import OrderedDict
2
3 from django.utils import timezone
4 from django.utils.datetime_safe import date
5 from django.utils.translation import gettext_lazy as _, get_language
6
7 from events import emails
8 from events.exceptions import RegistrationError
9 from events.models import EventRegistration, RegistrationInformationField, Event
10 from payments.api.fields import PaymentTypeField
11 from payments.services import create_payment, delete_payment
12 from utils.snippets import datetime_to_lectureyear
13
14
15 def is_user_registered(member, event):
16 """
17 Returns if the user is registered for the specified event
18
19 :param member: the user
20 :param event: the event
21 :return: None if registration is not required or no member else True/False
22 """
23 if not event.registration_required or not member.is_authenticated:
24 return None
25
26 return event.registrations.filter(member=member, date_cancelled=None).count() > 0
27
28
29 def event_permissions(member, event, name=None):
30 """
31 Returns a dictionary with the available event permissions of the user
32
33 :param member: the user
34 :param event: the event
35 :param name: the name of a non member registration
36 :return: the permission dictionary
37 """
38 perms = {
39 "create_registration": False,
40 "cancel_registration": False,
41 "update_registration": False,
42 }
43 if not member:
44 return perms
45 if not (member.is_authenticated or name):
46 return perms
47
48 registration = None
49 try:
50 registration = EventRegistration.objects.get(
51 event=event, member=member, name=name
52 )
53 except EventRegistration.DoesNotExist:
54 pass
55
56 perms["create_registration"] = (
57 (registration is None or registration.date_cancelled is not None)
58 and event.registration_allowed
59 and (name or member.can_attend_events)
60 )
61 perms["cancel_registration"] = (
62 registration is not None
63 and registration.date_cancelled is None
64 and (event.cancellation_allowed or name)
65 )
66 perms["update_registration"] = (
67 registration is not None
68 and registration.date_cancelled is None
69 and event.has_fields()
70 and event.registration_allowed
71 and (name or member.can_attend_events)
72 )
73 return perms
74
75
76 def is_organiser(member, event):
77 if member and member.is_authenticated:
78 if member.is_superuser or member.has_perm("events.override_organiser"):
79 return True
80
81 if event:
82 return member.get_member_groups().filter(pk=event.organiser.pk).count() != 0
83
84 return False
85
86
87 def create_registration(member, event):
88 """
89 Creates a new user registration for an event
90
91 :param member: the user
92 :param event: the event
93 :return: returns the registration if successful
94 """
95 if event_permissions(member, event)["create_registration"]:
96 registration = None
97 try:
98 registration = EventRegistration.objects.get(event=event, member=member)
99 except EventRegistration.DoesNotExist:
100 pass
101
102 if registration is None:
103 return EventRegistration.objects.create(event=event, member=member)
104 elif registration.date_cancelled is not None:
105 if registration.is_late_cancellation():
106 raise RegistrationError(
107 _(
108 "You cannot re-register anymore "
109 "since you've cancelled after the "
110 "deadline."
111 )
112 )
113 else:
114 registration.date = timezone.now()
115 registration.date_cancelled = None
116 registration.save()
117
118 return registration
119 elif event_permissions(member, event)["cancel_registration"]:
120 raise RegistrationError(_("You were already registered."))
121 else:
122 raise RegistrationError(_("You may not register."))
123
124
125 def cancel_registration(member, event):
126 """
127 Cancel a user registration for an event
128
129 :param member: the user
130 :param event: the event
131 """
132 registration = None
133 try:
134 registration = EventRegistration.objects.get(event=event, member=member)
135 except EventRegistration.DoesNotExist:
136 pass
137
138 if event_permissions(member, event)["cancel_registration"] and registration:
139 if registration.payment is not None:
140 delete_payment(registration)
141 if registration.queue_position == 0:
142 emails.notify_first_waiting(event)
143
144 if event.send_cancel_email and event.after_cancel_deadline:
145 emails.notify_organiser(event, registration)
146
147 # Note that this doesn"t remove the values for the
148 # information fields that the user entered upon registering.
149 # But this is regarded as a feature, not a bug. Especially
150 # since the values will still appear in the backend.
151 registration.date_cancelled = timezone.now()
152 registration.save()
153 else:
154 raise RegistrationError(_("You are not registered for this event."))
155
156
157 def update_registration(
158 member=None, event=None, name=None, registration=None, field_values=None
159 ):
160 """
161 Updates a user registration of an event
162
163 :param request: http request
164 :param member: the user
165 :param event: the event
166 :param name: the name of a registration not associated with a user
167 :param registration: the registration
168 :param field_values: values for the information fields
169 """
170 if not registration:
171 try:
172 registration = EventRegistration.objects.get(
173 event=event, member=member, name=name
174 )
175 except EventRegistration.DoesNotExist as error:
176 raise RegistrationError(
177 _("You are not registered for this event.")
178 ) from error
179 else:
180 member = registration.member
181 event = registration.event
182 name = registration.name
183
184 if (
185 not event_permissions(member, event, name)["update_registration"]
186 or not field_values
187 ):
188 return
189
190 for field_id, field_value in field_values:
191 field = RegistrationInformationField.objects.get(
192 id=field_id.replace("info_field_", "")
193 )
194
195 if (
196 field.type == RegistrationInformationField.INTEGER_FIELD
197 and field_value is None
198 ):
199 field_value = 0
200 elif (
201 field.type == RegistrationInformationField.BOOLEAN_FIELD
202 and field_value is None
203 ):
204 field_value = False
205 elif (
206 field.type == RegistrationInformationField.TEXT_FIELD
207 and field_value is None
208 ):
209 field_value = ""
210
211 field.set_value_for(registration, field_value)
212
213
214 def registration_fields(request, member=None, event=None, registration=None, name=None):
215 """
216 Returns information about the registration fields of a registration
217
218 :param member: the user (optional if registration provided)
219 :param name: the name of a non member registration
220 (optional if registration provided)
221 :param event: the event (optional if registration provided)
222 :param registration: the registration (optional if member & event provided)
223 :return: the fields
224 """
225
226 if registration is None:
227 try:
228 registration = EventRegistration.objects.get(
229 event=event, member=member, name=name
230 )
231 except EventRegistration.DoesNotExist as error:
232 raise RegistrationError(
233 _("You are not registered for this event.")
234 ) from error
235 except EventRegistration.MultipleObjectsReturned as error:
236 raise RegistrationError(
237 _("Unable to find the right registration.")
238 ) from error
239 else:
240 member = registration.member
241 event = registration.event
242 name = registration.name
243
244 perms = event_permissions(member, event, name)[
245 "update_registration"
246 ] or is_organiser(request.member, event)
247 if perms and registration:
248 information_fields = registration.information_fields
249 fields = OrderedDict()
250
251 for information_field in information_fields:
252 field = information_field["field"]
253
254 fields["info_field_{}".format(field.id)] = {
255 "type": field.type,
256 "label": getattr(field, "{}_{}".format("name", get_language())),
257 "description": getattr(
258 field, "{}_{}".format("description", get_language())
259 ),
260 "value": information_field["value"],
261 "required": field.required,
262 }
263
264 return fields
265 else:
266 raise RegistrationError(_("You are not allowed to update this registration."))
267
268
269 def update_registration_by_organiser(registration, member, data):
270 if not is_organiser(member, registration.event):
271 raise RegistrationError(_("You are not allowed to update this registration."))
272
273 if "payment" in data:
274 if data["payment"]["type"] == PaymentTypeField.NO_PAYMENT:
275 if registration.payment is not None:
276 delete_payment(registration)
277 else:
278 registration.payment = create_payment(
279 payable=registration,
280 processed_by=member,
281 pay_type=data["payment"]["type"],
282 )
283
284 if "present" in data:
285 registration.present = data["present"]
286
287 registration.save()
288
289
290 def generate_category_statistics():
291 """
292 Generate statistics about events, number of events per category
293 :return: Dict with key, value resp. being category, event count.
294 """
295 year = datetime_to_lectureyear(timezone.now())
296
297 data = {}
298 for i in range(5):
299 year_start = date(year=year - i, month=9, day=1)
300 year_end = date(year=year - i + 1, month=9, day=1)
301 data[str(year - i)] = {
302 str(display): Event.objects.filter(
303 category=key, start__gte=year_start, end__lte=year_end
304 ).count()
305 for key, display in Event.EVENT_CATEGORIES
306 }
307
308 return data
309
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/events/services.py b/website/events/services.py
--- a/website/events/services.py
+++ b/website/events/services.py
@@ -62,6 +62,7 @@
registration is not None
and registration.date_cancelled is None
and (event.cancellation_allowed or name)
+ and registration.payment is None
)
perms["update_registration"] = (
registration is not None
@@ -136,8 +137,6 @@
pass
if event_permissions(member, event)["cancel_registration"] and registration:
- if registration.payment is not None:
- delete_payment(registration)
if registration.queue_position == 0:
emails.notify_first_waiting(event)
@@ -151,7 +150,7 @@
registration.date_cancelled = timezone.now()
registration.save()
else:
- raise RegistrationError(_("You are not registered for this event."))
+ raise RegistrationError(_("You are not allowed to deregister for this event."))
def update_registration(
|
{"golden_diff": "diff --git a/website/events/services.py b/website/events/services.py\n--- a/website/events/services.py\n+++ b/website/events/services.py\n@@ -62,6 +62,7 @@\n registration is not None\n and registration.date_cancelled is None\n and (event.cancellation_allowed or name)\n+ and registration.payment is None\n )\n perms[\"update_registration\"] = (\n registration is not None\n@@ -136,8 +137,6 @@\n pass\n \n if event_permissions(member, event)[\"cancel_registration\"] and registration:\n- if registration.payment is not None:\n- delete_payment(registration)\n if registration.queue_position == 0:\n emails.notify_first_waiting(event)\n \n@@ -151,7 +150,7 @@\n registration.date_cancelled = timezone.now()\n registration.save()\n else:\n- raise RegistrationError(_(\"You are not registered for this event.\"))\n+ raise RegistrationError(_(\"You are not allowed to deregister for this event.\"))\n \n \n def update_registration(\n", "issue": "Prevent event registrations cancelling when registration is paid (or where you were present)\n### Is your feature request related to a problem? Please describe.\r\nCurrently it is technically possible to pay for an event and afterwards cancel your registration. That should not be possible (at least not by users themselves). This never was a real problem in practice but now with Thalia Pay the problem appears.\r\n\r\n### Describe the solution you'd like\r\nWhen a payment exists for an event registration, don't allow cancelling. Also when paying with Thalia Pay we might want to make explicit that after paying you can't cancel anymore (without maybe contacting the board). \r\n\r\n### Motivation\r\n\r\n### Describe alternatives you've considered\r\n- Only really creating the payment at the date of the event \u2192 too complex (code)\r\n- As proposed here, but do allow cancellation when the payment is a TPay payment that is not processed yet \u2192 too complex (for users)\r\n- Keeping things as it is (allow cancelling and accept people may pay without joining - it could be considered a feature) \u2192 felt undesirable, also without TPay\r\n\r\n### Additional context\r\n\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom django.utils import timezone\nfrom django.utils.datetime_safe import date\nfrom django.utils.translation import gettext_lazy as _, get_language\n\nfrom events import emails\nfrom events.exceptions import RegistrationError\nfrom events.models import EventRegistration, RegistrationInformationField, Event\nfrom payments.api.fields import PaymentTypeField\nfrom payments.services import create_payment, delete_payment\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef is_user_registered(member, event):\n \"\"\"\n Returns if the user is registered for the specified event\n\n :param member: the user\n :param event: the event\n :return: None if registration is not required or no member else True/False\n \"\"\"\n if not event.registration_required or not member.is_authenticated:\n return None\n\n return event.registrations.filter(member=member, date_cancelled=None).count() > 0\n\n\ndef event_permissions(member, event, name=None):\n \"\"\"\n Returns a dictionary with the available event permissions of the user\n\n :param member: the user\n :param event: the event\n :param name: the name of a non member registration\n :return: the permission dictionary\n \"\"\"\n perms = {\n \"create_registration\": False,\n \"cancel_registration\": False,\n \"update_registration\": False,\n }\n if not member:\n return perms\n if not 
(member.is_authenticated or name):\n return perms\n\n registration = None\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist:\n pass\n\n perms[\"create_registration\"] = (\n (registration is None or registration.date_cancelled is not None)\n and event.registration_allowed\n and (name or member.can_attend_events)\n )\n perms[\"cancel_registration\"] = (\n registration is not None\n and registration.date_cancelled is None\n and (event.cancellation_allowed or name)\n )\n perms[\"update_registration\"] = (\n registration is not None\n and registration.date_cancelled is None\n and event.has_fields()\n and event.registration_allowed\n and (name or member.can_attend_events)\n )\n return perms\n\n\ndef is_organiser(member, event):\n if member and member.is_authenticated:\n if member.is_superuser or member.has_perm(\"events.override_organiser\"):\n return True\n\n if event:\n return member.get_member_groups().filter(pk=event.organiser.pk).count() != 0\n\n return False\n\n\ndef create_registration(member, event):\n \"\"\"\n Creates a new user registration for an event\n\n :param member: the user\n :param event: the event\n :return: returns the registration if successful\n \"\"\"\n if event_permissions(member, event)[\"create_registration\"]:\n registration = None\n try:\n registration = EventRegistration.objects.get(event=event, member=member)\n except EventRegistration.DoesNotExist:\n pass\n\n if registration is None:\n return EventRegistration.objects.create(event=event, member=member)\n elif registration.date_cancelled is not None:\n if registration.is_late_cancellation():\n raise RegistrationError(\n _(\n \"You cannot re-register anymore \"\n \"since you've cancelled after the \"\n \"deadline.\"\n )\n )\n else:\n registration.date = timezone.now()\n registration.date_cancelled = None\n registration.save()\n\n return registration\n elif event_permissions(member, event)[\"cancel_registration\"]:\n raise RegistrationError(_(\"You were already registered.\"))\n else:\n raise RegistrationError(_(\"You may not register.\"))\n\n\ndef cancel_registration(member, event):\n \"\"\"\n Cancel a user registration for an event\n\n :param member: the user\n :param event: the event\n \"\"\"\n registration = None\n try:\n registration = EventRegistration.objects.get(event=event, member=member)\n except EventRegistration.DoesNotExist:\n pass\n\n if event_permissions(member, event)[\"cancel_registration\"] and registration:\n if registration.payment is not None:\n delete_payment(registration)\n if registration.queue_position == 0:\n emails.notify_first_waiting(event)\n\n if event.send_cancel_email and event.after_cancel_deadline:\n emails.notify_organiser(event, registration)\n\n # Note that this doesn\"t remove the values for the\n # information fields that the user entered upon registering.\n # But this is regarded as a feature, not a bug. 
Especially\n # since the values will still appear in the backend.\n registration.date_cancelled = timezone.now()\n registration.save()\n else:\n raise RegistrationError(_(\"You are not registered for this event.\"))\n\n\ndef update_registration(\n member=None, event=None, name=None, registration=None, field_values=None\n):\n \"\"\"\n Updates a user registration of an event\n\n :param request: http request\n :param member: the user\n :param event: the event\n :param name: the name of a registration not associated with a user\n :param registration: the registration\n :param field_values: values for the information fields\n \"\"\"\n if not registration:\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist as error:\n raise RegistrationError(\n _(\"You are not registered for this event.\")\n ) from error\n else:\n member = registration.member\n event = registration.event\n name = registration.name\n\n if (\n not event_permissions(member, event, name)[\"update_registration\"]\n or not field_values\n ):\n return\n\n for field_id, field_value in field_values:\n field = RegistrationInformationField.objects.get(\n id=field_id.replace(\"info_field_\", \"\")\n )\n\n if (\n field.type == RegistrationInformationField.INTEGER_FIELD\n and field_value is None\n ):\n field_value = 0\n elif (\n field.type == RegistrationInformationField.BOOLEAN_FIELD\n and field_value is None\n ):\n field_value = False\n elif (\n field.type == RegistrationInformationField.TEXT_FIELD\n and field_value is None\n ):\n field_value = \"\"\n\n field.set_value_for(registration, field_value)\n\n\ndef registration_fields(request, member=None, event=None, registration=None, name=None):\n \"\"\"\n Returns information about the registration fields of a registration\n\n :param member: the user (optional if registration provided)\n :param name: the name of a non member registration\n (optional if registration provided)\n :param event: the event (optional if registration provided)\n :param registration: the registration (optional if member & event provided)\n :return: the fields\n \"\"\"\n\n if registration is None:\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist as error:\n raise RegistrationError(\n _(\"You are not registered for this event.\")\n ) from error\n except EventRegistration.MultipleObjectsReturned as error:\n raise RegistrationError(\n _(\"Unable to find the right registration.\")\n ) from error\n else:\n member = registration.member\n event = registration.event\n name = registration.name\n\n perms = event_permissions(member, event, name)[\n \"update_registration\"\n ] or is_organiser(request.member, event)\n if perms and registration:\n information_fields = registration.information_fields\n fields = OrderedDict()\n\n for information_field in information_fields:\n field = information_field[\"field\"]\n\n fields[\"info_field_{}\".format(field.id)] = {\n \"type\": field.type,\n \"label\": getattr(field, \"{}_{}\".format(\"name\", get_language())),\n \"description\": getattr(\n field, \"{}_{}\".format(\"description\", get_language())\n ),\n \"value\": information_field[\"value\"],\n \"required\": field.required,\n }\n\n return fields\n else:\n raise RegistrationError(_(\"You are not allowed to update this registration.\"))\n\n\ndef update_registration_by_organiser(registration, member, data):\n if not is_organiser(member, registration.event):\n raise 
RegistrationError(_(\"You are not allowed to update this registration.\"))\n\n if \"payment\" in data:\n if data[\"payment\"][\"type\"] == PaymentTypeField.NO_PAYMENT:\n if registration.payment is not None:\n delete_payment(registration)\n else:\n registration.payment = create_payment(\n payable=registration,\n processed_by=member,\n pay_type=data[\"payment\"][\"type\"],\n )\n\n if \"present\" in data:\n registration.present = data[\"present\"]\n\n registration.save()\n\n\ndef generate_category_statistics():\n \"\"\"\n Generate statistics about events, number of events per category\n :return: Dict with key, value resp. being category, event count.\n \"\"\"\n year = datetime_to_lectureyear(timezone.now())\n\n data = {}\n for i in range(5):\n year_start = date(year=year - i, month=9, day=1)\n year_end = date(year=year - i + 1, month=9, day=1)\n data[str(year - i)] = {\n str(display): Event.objects.filter(\n category=key, start__gte=year_start, end__lte=year_end\n ).count()\n for key, display in Event.EVENT_CATEGORIES\n }\n\n return data\n", "path": "website/events/services.py"}], "after_files": [{"content": "from collections import OrderedDict\n\nfrom django.utils import timezone\nfrom django.utils.datetime_safe import date\nfrom django.utils.translation import gettext_lazy as _, get_language\n\nfrom events import emails\nfrom events.exceptions import RegistrationError\nfrom events.models import EventRegistration, RegistrationInformationField, Event\nfrom payments.api.fields import PaymentTypeField\nfrom payments.services import create_payment, delete_payment\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef is_user_registered(member, event):\n \"\"\"\n Returns if the user is registered for the specified event\n\n :param member: the user\n :param event: the event\n :return: None if registration is not required or no member else True/False\n \"\"\"\n if not event.registration_required or not member.is_authenticated:\n return None\n\n return event.registrations.filter(member=member, date_cancelled=None).count() > 0\n\n\ndef event_permissions(member, event, name=None):\n \"\"\"\n Returns a dictionary with the available event permissions of the user\n\n :param member: the user\n :param event: the event\n :param name: the name of a non member registration\n :return: the permission dictionary\n \"\"\"\n perms = {\n \"create_registration\": False,\n \"cancel_registration\": False,\n \"update_registration\": False,\n }\n if not member:\n return perms\n if not (member.is_authenticated or name):\n return perms\n\n registration = None\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist:\n pass\n\n perms[\"create_registration\"] = (\n (registration is None or registration.date_cancelled is not None)\n and event.registration_allowed\n and (name or member.can_attend_events)\n )\n perms[\"cancel_registration\"] = (\n registration is not None\n and registration.date_cancelled is None\n and (event.cancellation_allowed or name)\n and registration.payment is None\n )\n perms[\"update_registration\"] = (\n registration is not None\n and registration.date_cancelled is None\n and event.has_fields()\n and event.registration_allowed\n and (name or member.can_attend_events)\n )\n return perms\n\n\ndef is_organiser(member, event):\n if member and member.is_authenticated:\n if member.is_superuser or member.has_perm(\"events.override_organiser\"):\n return True\n\n if event:\n return 
member.get_member_groups().filter(pk=event.organiser.pk).count() != 0\n\n return False\n\n\ndef create_registration(member, event):\n \"\"\"\n Creates a new user registration for an event\n\n :param member: the user\n :param event: the event\n :return: returns the registration if successful\n \"\"\"\n if event_permissions(member, event)[\"create_registration\"]:\n registration = None\n try:\n registration = EventRegistration.objects.get(event=event, member=member)\n except EventRegistration.DoesNotExist:\n pass\n\n if registration is None:\n return EventRegistration.objects.create(event=event, member=member)\n elif registration.date_cancelled is not None:\n if registration.is_late_cancellation():\n raise RegistrationError(\n _(\n \"You cannot re-register anymore \"\n \"since you've cancelled after the \"\n \"deadline.\"\n )\n )\n else:\n registration.date = timezone.now()\n registration.date_cancelled = None\n registration.save()\n\n return registration\n elif event_permissions(member, event)[\"cancel_registration\"]:\n raise RegistrationError(_(\"You were already registered.\"))\n else:\n raise RegistrationError(_(\"You may not register.\"))\n\n\ndef cancel_registration(member, event):\n \"\"\"\n Cancel a user registration for an event\n\n :param member: the user\n :param event: the event\n \"\"\"\n registration = None\n try:\n registration = EventRegistration.objects.get(event=event, member=member)\n except EventRegistration.DoesNotExist:\n pass\n\n if event_permissions(member, event)[\"cancel_registration\"] and registration:\n if registration.queue_position == 0:\n emails.notify_first_waiting(event)\n\n if event.send_cancel_email and event.after_cancel_deadline:\n emails.notify_organiser(event, registration)\n\n # Note that this doesn\"t remove the values for the\n # information fields that the user entered upon registering.\n # But this is regarded as a feature, not a bug. 
Especially\n # since the values will still appear in the backend.\n registration.date_cancelled = timezone.now()\n registration.save()\n else:\n raise RegistrationError(_(\"You are not allowed to deregister for this event.\"))\n\n\ndef update_registration(\n member=None, event=None, name=None, registration=None, field_values=None\n):\n \"\"\"\n Updates a user registration of an event\n\n :param request: http request\n :param member: the user\n :param event: the event\n :param name: the name of a registration not associated with a user\n :param registration: the registration\n :param field_values: values for the information fields\n \"\"\"\n if not registration:\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist as error:\n raise RegistrationError(\n _(\"You are not registered for this event.\")\n ) from error\n else:\n member = registration.member\n event = registration.event\n name = registration.name\n\n if (\n not event_permissions(member, event, name)[\"update_registration\"]\n or not field_values\n ):\n return\n\n for field_id, field_value in field_values:\n field = RegistrationInformationField.objects.get(\n id=field_id.replace(\"info_field_\", \"\")\n )\n\n if (\n field.type == RegistrationInformationField.INTEGER_FIELD\n and field_value is None\n ):\n field_value = 0\n elif (\n field.type == RegistrationInformationField.BOOLEAN_FIELD\n and field_value is None\n ):\n field_value = False\n elif (\n field.type == RegistrationInformationField.TEXT_FIELD\n and field_value is None\n ):\n field_value = \"\"\n\n field.set_value_for(registration, field_value)\n\n\ndef registration_fields(request, member=None, event=None, registration=None, name=None):\n \"\"\"\n Returns information about the registration fields of a registration\n\n :param member: the user (optional if registration provided)\n :param name: the name of a non member registration\n (optional if registration provided)\n :param event: the event (optional if registration provided)\n :param registration: the registration (optional if member & event provided)\n :return: the fields\n \"\"\"\n\n if registration is None:\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist as error:\n raise RegistrationError(\n _(\"You are not registered for this event.\")\n ) from error\n except EventRegistration.MultipleObjectsReturned as error:\n raise RegistrationError(\n _(\"Unable to find the right registration.\")\n ) from error\n else:\n member = registration.member\n event = registration.event\n name = registration.name\n\n perms = event_permissions(member, event, name)[\n \"update_registration\"\n ] or is_organiser(request.member, event)\n if perms and registration:\n information_fields = registration.information_fields\n fields = OrderedDict()\n\n for information_field in information_fields:\n field = information_field[\"field\"]\n\n fields[\"info_field_{}\".format(field.id)] = {\n \"type\": field.type,\n \"label\": getattr(field, \"{}_{}\".format(\"name\", get_language())),\n \"description\": getattr(\n field, \"{}_{}\".format(\"description\", get_language())\n ),\n \"value\": information_field[\"value\"],\n \"required\": field.required,\n }\n\n return fields\n else:\n raise RegistrationError(_(\"You are not allowed to update this registration.\"))\n\n\ndef update_registration_by_organiser(registration, member, data):\n if not is_organiser(member, 
registration.event):\n raise RegistrationError(_(\"You are not allowed to update this registration.\"))\n\n if \"payment\" in data:\n if data[\"payment\"][\"type\"] == PaymentTypeField.NO_PAYMENT:\n if registration.payment is not None:\n delete_payment(registration)\n else:\n registration.payment = create_payment(\n payable=registration,\n processed_by=member,\n pay_type=data[\"payment\"][\"type\"],\n )\n\n if \"present\" in data:\n registration.present = data[\"present\"]\n\n registration.save()\n\n\ndef generate_category_statistics():\n \"\"\"\n Generate statistics about events, number of events per category\n :return: Dict with key, value resp. being category, event count.\n \"\"\"\n year = datetime_to_lectureyear(timezone.now())\n\n data = {}\n for i in range(5):\n year_start = date(year=year - i, month=9, day=1)\n year_end = date(year=year - i + 1, month=9, day=1)\n data[str(year - i)] = {\n str(display): Event.objects.filter(\n category=key, start__gte=year_start, end__lte=year_end\n ).count()\n for key, display in Event.EVENT_CATEGORIES\n }\n\n return data\n", "path": "website/events/services.py"}]}
| 3,306 | 225 |
gh_patches_debug_12091
|
rasdani/github-patches
|
git_diff
|
projectmesa__mesa-281
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug with building docs - Readme is not correct
I was just trying to follow the [Readme.md](https://github.com/projectmesa/mesa/blob/master/docs/README.md) to build the docs, and I got the following error:
```
mesa/docs [git fix-docs] $ make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v1.3.6
Recursion error:
maximum recursion depth exceeded while calling a Python object
This can happen with very large or deeply nested source files. You can carefully increase the default Python recursion limit of 1000 in conf.py with e.g.:
import sys; sys.setrecursionlimit(1500)
make: *** [html] Error 1
```
Not sure why I am running into this. I feel like I have come across this before, but I can't remember how I fixed it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Mesa documentation build configuration file, created by
5 # sphinx-quickstart on Sun Jan 4 23:34:09 2015.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 import sys
17 import os
18
19
20 # Adding mock imports to see if this builds
21 from unittest.mock import MagicMock
22
23 class Mock(MagicMock):
24 @classmethod
25 def __getattr__(cls, name):
26 return Mock()
27
28 MOCK_MODULES = ['numpy', 'pandas']
29 sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
30
31 # End of mock
32
33 # If extensions (or modules to document with autodoc) are in another directory,
34 # add these directories to sys.path here. If the directory is relative to the
35 # documentation root, use os.path.abspath to make it absolute, like shown here.
36 sys.path.insert(0, os.path.abspath('.'))
37 sys.path.insert(0, "../examples")
38 sys.path.insert(0, "../mesa")
39
40
41 # -- General configuration ------------------------------------------------
42
43 # If your documentation needs a minimal Sphinx version, state it here.
44 #needs_sphinx = '1.0'
45
46 # Add any Sphinx extension module names here, as strings. They can be
47 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
48 # ones.
49 extensions = [
50 'sphinx.ext.autodoc',
51 'sphinx.ext.doctest',
52 'sphinx.ext.intersphinx',
53 'sphinx.ext.todo',
54 'sphinx.ext.coverage',
55 'sphinx.ext.mathjax',
56 'sphinx.ext.ifconfig',
57 'sphinx.ext.viewcode',
58 ]
59
60 # Add any paths that contain templates here, relative to this directory.
61 templates_path = ['_templates']
62
63 # The suffix of source filenames.
64 source_suffix = '.rst'
65
66 # The encoding of source files.
67 #source_encoding = 'utf-8-sig'
68
69 # The master toctree document.
70 master_doc = 'index'
71
72 # General information about the project.
73 project = 'Mesa'
74 copyright = '2016, Project Mesa Team'
75
76 # The version info for the project you're documenting, acts as replacement for
77 # |version| and |release|, also used in various other places throughout the
78 # built documents.
79 #
80 # The short X.Y version.
81 version = '0.5'
82 # The full version, including alpha/beta/rc tags.
83 release = '.1'
84
85 # The language for content autogenerated by Sphinx. Refer to documentation
86 # for a list of supported languages.
87 #language = None
88
89 # There are two options for replacing |today|: either, you set today to some
90 # non-false value, then it is used:
91 #today = ''
92 # Else, today_fmt is used as the format for a strftime call.
93 #today_fmt = '%B %d, %Y'
94
95 # List of patterns, relative to source directory, that match files and
96 # directories to ignore when looking for source files.
97 exclude_patterns = ['_build']
98
99 # The reST default role (used for this markup: `text`) to use for all
100 # documents.
101 #default_role = None
102
103 # If true, '()' will be appended to :func: etc. cross-reference text.
104 #add_function_parentheses = True
105
106 # If true, the current module name will be prepended to all description
107 # unit titles (such as .. function::).
108 #add_module_names = True
109
110 # If true, sectionauthor and moduleauthor directives will be shown in the
111 # output. They are ignored by default.
112 #show_authors = False
113
114 # The name of the Pygments (syntax highlighting) style to use.
115 pygments_style = 'sphinx'
116
117 # A list of ignored prefixes for module index sorting.
118 #modindex_common_prefix = []
119
120 # If true, keep warnings as "system message" paragraphs in the built documents.
121 #keep_warnings = False
122
123
124 # -- Options for HTML output ----------------------------------------------
125
126 # The theme to use for HTML and HTML Help pages. See the documentation for
127 # a list of builtin themes.
128 html_theme = 'default'
129
130 # Theme options are theme-specific and customize the look and feel of a theme
131 # further. For a list of options available for each theme, see the
132 # documentation.
133 #html_theme_options = {}
134
135 # Add any paths that contain custom themes here, relative to this directory.
136 #html_theme_path = []
137
138 # The name for this set of Sphinx documents. If None, it defaults to
139 # "<project> v<release> documentation".
140 #html_title = None
141
142 # A shorter title for the navigation bar. Default is the same as html_title.
143 #html_short_title = None
144
145 # The name of an image file (relative to this directory) to place at the top
146 # of the sidebar.
147 html_logo = "images/mesa_logo.png"
148
149 # The name of an image file (within the static path) to use as favicon of the
150 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
151 # pixels large.
152 html_favicon = "images/mesa_logo.ico"
153
154 # Add any paths that contain custom static files (such as style sheets) here,
155 # relative to this directory. They are copied after the builtin static files,
156 # so a file named "default.css" will overwrite the builtin "default.css".
157 html_static_path = ['_static']
158
159 # Add any extra paths that contain custom files (such as robots.txt or
160 # .htaccess) here, relative to this directory. These files are copied
161 # directly to the root of the documentation.
162 #html_extra_path = []
163
164 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
165 # using the given strftime format.
166 #html_last_updated_fmt = '%b %d, %Y'
167
168 # If true, SmartyPants will be used to convert quotes and dashes to
169 # typographically correct entities.
170 #html_use_smartypants = True
171
172 # Custom sidebar templates, maps document names to template names.
173 #html_sidebars = {}
174
175 # Additional templates that should be rendered to pages, maps page names to
176 # template names.
177 #html_additional_pages = {}
178
179 # If false, no module index is generated.
180 #html_domain_indices = True
181
182 # If false, no index is generated.
183 #html_use_index = True
184
185 # If true, the index is split into individual pages for each letter.
186 #html_split_index = False
187
188 # If true, links to the reST sources are added to the pages.
189 #html_show_sourcelink = True
190
191 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
192 html_show_sphinx = False
193
194 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
195 #html_show_copyright = True
196
197 # If true, an OpenSearch description file will be output, and all pages will
198 # contain a <link> tag referring to it. The value of this option must be the
199 # base URL from which the finished HTML is served.
200 #html_use_opensearch = ''
201
202 # This is the file name suffix for HTML files (e.g. ".xhtml").
203 #html_file_suffix = None
204
205 # Output file base name for HTML help builder.
206 htmlhelp_basename = 'Mesadoc'
207
208
209 # -- Options for LaTeX output ---------------------------------------------
210
211 latex_elements = {
212 # The paper size ('letterpaper' or 'a4paper').
213 #'papersize': 'letterpaper',
214
215 # The font size ('10pt', '11pt' or '12pt').
216 #'pointsize': '10pt',
217
218 # Additional stuff for the LaTeX preamble.
219 #'preamble': '',
220 }
221
222 # Grouping the document tree into LaTeX files. List of tuples
223 # (source start file, target name, title,
224 # author, documentclass [howto, manual, or own class]).
225 latex_documents = [
226 ('index', 'Mesa.tex', 'Mesa Documentation',
227 'Project Mesa Team', 'manual'),
228 ]
229
230 # The name of an image file (relative to this directory) to place at the top of
231 # the title page.
232 #latex_logo = None
233
234 # For "manual" documents, if this is true, then toplevel headings are parts,
235 # not chapters.
236 #latex_use_parts = False
237
238 # If true, show page references after internal links.
239 #latex_show_pagerefs = False
240
241 # If true, show URL addresses after external links.
242 #latex_show_urls = False
243
244 # Documents to append as an appendix to all manuals.
245 #latex_appendices = []
246
247 # If false, no module index is generated.
248 #latex_domain_indices = True
249
250
251 # -- Options for manual page output ---------------------------------------
252
253 # One entry per manual page. List of tuples
254 # (source start file, name, description, authors, manual section).
255 man_pages = [
256 ('index', 'mesa', 'Mesa Documentation',
257 ['Project Mesa Team'], 1)
258 ]
259
260 # If true, show URL addresses after external links.
261 #man_show_urls = False
262
263
264 # -- Options for Texinfo output -------------------------------------------
265
266 # Grouping the document tree into Texinfo files. List of tuples
267 # (source start file, target name, title, author,
268 # dir menu entry, description, category)
269 texinfo_documents = [
270 ('index', 'Mesa', 'Mesa Documentation',
271 'Project Mesa Team', 'Mesa', 'One line description of project.',
272 'Miscellaneous'),
273 ]
274
275 # Documents to append as an appendix to all manuals.
276 #texinfo_appendices = []
277
278 # If false, no module index is generated.
279 #texinfo_domain_indices = True
280
281 # How to display URL addresses: 'footnote', 'no', or 'inline'.
282 #texinfo_show_urls = 'footnote'
283
284 # If true, do not generate a @detailmenu in the "Top" node's menu.
285 #texinfo_no_detailmenu = False
286
287
288 # Example configuration for intersphinx: refer to the Python standard library.
289 intersphinx_mapping = {'http://docs.python.org/': None}
290
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -17,19 +17,6 @@
import os
-# Adding mock imports to see if this builds
-from unittest.mock import MagicMock
-
-class Mock(MagicMock):
- @classmethod
- def __getattr__(cls, name):
- return Mock()
-
-MOCK_MODULES = ['numpy', 'pandas']
-sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
-
-# End of mock
-
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -17,19 +17,6 @@\n import os\n \n \n-# Adding mock imports to see if this builds\n-from unittest.mock import MagicMock\n-\n-class Mock(MagicMock):\n- @classmethod\n- def __getattr__(cls, name):\n- return Mock()\n-\n-MOCK_MODULES = ['numpy', 'pandas']\n-sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n-\n-# End of mock\n-\n # If extensions (or modules to document with autodoc) are in another directory,\n # add these directories to sys.path here. If the directory is relative to the\n # documentation root, use os.path.abspath to make it absolute, like shown here.\n", "issue": "Bug with building docs - Readme is not correct\nI was just trying to follow the [Readme.md](https://github.com/projectmesa/mesa/blob/master/docs/README.md) to build the docs, and I get the following error: \n\n```\nmesa/docs [git fix-docs] $ make html\nsphinx-build -b html -d _build/doctrees . _build/html\nRunning Sphinx v1.3.6\n\nRecursion error:\nmaximum recursion depth exceeded while calling a Python object\n\nThis can happen with very large or deeply nested source files. You can carefully increase the default Python recursion limit of 1000 in conf.py with e.g.:\n import sys; sys.setrecursionlimit(1500)\nmake: *** [html] Error 1\n```\n\nNot sure why I am running into this. I feel like I have come across this before, but I can't remember how I fixed. \n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Mesa documentation build configuration file, created by\n# sphinx-quickstart on Sun Jan 4 23:34:09 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\n\n\n# Adding mock imports to see if this builds\nfrom unittest.mock import MagicMock\n\nclass Mock(MagicMock):\n @classmethod\n def __getattr__(cls, name):\n return Mock()\n\nMOCK_MODULES = ['numpy', 'pandas']\nsys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n\n# End of mock\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('.'))\nsys.path.insert(0, \"../examples\")\nsys.path.insert(0, \"../mesa\")\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.ifconfig',\n 'sphinx.ext.viewcode',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Mesa'\ncopyright = '2016, Project Mesa Team'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.5'\n# The full version, including alpha/beta/rc tags.\nrelease = '.1'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'default'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = \"images/mesa_logo.png\"\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = \"images/mesa_logo.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\nhtml_show_sphinx = False\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Mesadoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'Mesa.tex', 'Mesa Documentation',\n 'Project Mesa Team', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'mesa', 'Mesa Documentation',\n ['Project Mesa Team'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Mesa', 'Mesa Documentation',\n 'Project Mesa Team', 'Mesa', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'http://docs.python.org/': None}\n", "path": "docs/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Mesa documentation build configuration file, created by\n# sphinx-quickstart on Sun Jan 4 23:34:09 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\n\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('.'))\nsys.path.insert(0, \"../examples\")\nsys.path.insert(0, \"../mesa\")\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.ifconfig',\n 'sphinx.ext.viewcode',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Mesa'\ncopyright = '2015, Project Mesa Team'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.5'\n# The full version, including alpha/beta/rc tags.\nrelease = '.1'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'default'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = \"images/mesa_logo.png\"\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = \"images/mesa_logo.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\nhtml_show_sphinx = False\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Mesadoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'Mesa.tex', 'Mesa Documentation',\n 'Project Mesa Team', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'mesa', 'Mesa Documentation',\n ['Project Mesa Team'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Mesa', 'Mesa Documentation',\n 'Project Mesa Team', 'Mesa', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'http://docs.python.org/': None}\n", "path": "docs/conf.py"}]}
| 3,455 | 174 |
gh_patches_debug_20144
|
rasdani/github-patches
|
git_diff
|
openfun__richie-1715
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
On the search page, the "more options" feature is broken on the "contributors" filter
## Bug Report
**Problematic behavior/code**
The "more options" feature on the "contributors" filter on the search page is broken.
**Expected Behavior**
When we click on "more options" on the "contributors" filter on the search page, we expect to see a list of more contributors and be able to type a search request to refine the search and find a specific contributor by his/her first or last name.
**Steps to Reproduce**
1. Go to the search page: https://www.fun-mooc.fr/en/courses/
2. Click "more options" on the "contributors" filter
**Environment**
- Richie version: 2.5.0
- Platform: docker
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/richie/apps/search/defaults.py`
Content:
```
1 """
2 Import custom settings and set up defaults for values the Search app needs
3 """
4 from django.conf import settings
5 from django.utils.functional import lazy
6 from django.utils.translation import gettext_lazy as _
7
8 # Elasticsearch
9 ES_CHUNK_SIZE = 500
10 ES_PAGE_SIZE = 10
11
12 # Use a lazy to enable easier testing by not defining the value at bootstrap time
13 ES_INDICES_PREFIX = lazy(
14 lambda: getattr(settings, "RICHIE_ES_INDICES_PREFIX", "richie")
15 )()
16
17 # Define which analyzer should be used for each language
18 QUERY_ANALYZERS = getattr(
19 settings, "RICHIE_QUERY_ANALYZERS", {"en": "english", "fr": "french"}
20 )
21
22 # Define the scoring boost (in ElasticSearch) related value names receive when using
23 # full-text search.
24 # For example, when a user searches for "Science" in full-text, it should match any
25 # course whose category contains "Science" or a related word, albeit with a lower
26 # score than courses that include it in their title or description.
27 # This lower score factor is the boost value we get or set here.
28 RELATED_CONTENT_BOOST = 0.05
29
30 FACET_SORTING_DEFAULT = "count"
31
32 FACET_COUNTS_DEFAULT_LIMIT = getattr(settings, "RICHIE_FACET_COUNTS_DEFAULT_LIMIT", 10)
33 FACET_COUNTS_MAX_LIMIT = getattr(settings, "RICHIE_FACET_COUNTS_MAX_LIMIT", 50)
34
35 ES_STATE_WEIGHTS = getattr(settings, "RICHIE_ES_STATE_WEIGHTS", None) or [
36 80, # ONGOING_OPEN
37 70, # FUTURE_OPEN
38 60, # ARCHIVED_OPEN
39 30, # FUTURE_NOT_YET_OPEN
40 6, # FUTURE_CLOSED
41 5, # ONGOING_CLOSED
42 1, # ARCHIVED_CLOSED
43 ]
44
45 FILTERS_CONFIGURATION = [
46 (
47 "richie.apps.search.filter_definitions.StaticChoicesFilterDefinition",
48 {
49 "fragment_map": {"new": [{"term": {"is_new": True}}]},
50 "human_name": _("New courses"),
51 "min_doc_count": 0,
52 "name": "new",
53 "position": 0,
54 "sorting": "conf",
55 "values": {"new": _("First session")},
56 },
57 ),
58 (
59 "richie.apps.search.filter_definitions.NestingWrapper",
60 {
61 "name": "course_runs",
62 "filters": [
63 (
64 "richie.apps.search.filter_definitions.AvailabilityFilterDefinition",
65 {
66 "human_name": _("Availability"),
67 "is_drilldown": True,
68 "min_doc_count": 0,
69 "name": "availability",
70 "position": 1,
71 "sorting": "conf",
72 },
73 ),
74 (
75 "richie.apps.search.filter_definitions.LanguagesFilterDefinition",
76 {
77 "human_name": _("Languages"),
78 # There are too many available languages to show them all, all the time.
79 # Eg. 200 languages, 190+ of which will have 0 matching courses.
80 "min_doc_count": 1,
81 "name": "languages",
82 "position": 5,
83 },
84 ),
85 ],
86 },
87 ),
88 (
89 "richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition",
90 {
91 "human_name": _("Subjects"),
92 "is_autocompletable": True,
93 "is_searchable": True,
94 "min_doc_count": 0,
95 "name": "subjects",
96 "position": 2,
97 "reverse_id": "subjects",
98 "term": "categories",
99 },
100 ),
101 (
102 "richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition",
103 {
104 "human_name": _("Levels"),
105 "is_autocompletable": True,
106 "is_searchable": True,
107 "min_doc_count": 0,
108 "name": "levels",
109 "position": 3,
110 "reverse_id": "levels",
111 "term": "categories",
112 },
113 ),
114 (
115 "richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition",
116 {
117 "human_name": _("Organizations"),
118 "is_autocompletable": True,
119 "is_searchable": True,
120 "min_doc_count": 0,
121 "name": "organizations",
122 "position": 4,
123 "reverse_id": "organizations",
124 },
125 ),
126 (
127 "richie.apps.search.filter_definitions.IndexableFilterDefinition",
128 {
129 "human_name": _("Persons"),
130 "is_autocompletable": True,
131 "is_searchable": True,
132 "min_doc_count": 0,
133 "name": "persons",
134 "position": 5,
135 "reverse_id": "persons",
136 },
137 ),
138 ]
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/richie/apps/search/defaults.py b/src/richie/apps/search/defaults.py
--- a/src/richie/apps/search/defaults.py
+++ b/src/richie/apps/search/defaults.py
@@ -118,6 +118,8 @@
"is_autocompletable": True,
"is_searchable": True,
"min_doc_count": 0,
+ # Note: this is a special name that connects the filter to Organization objects
+ # in Richie as well was the corresponding indexer and API endpoint.
"name": "organizations",
"position": 4,
"reverse_id": "organizations",
@@ -130,6 +132,8 @@
"is_autocompletable": True,
"is_searchable": True,
"min_doc_count": 0,
+ # Note: this is a special name that connects the filter to Person objects
+ # in Richie as well was the corresponding indexer and API endpoint.
"name": "persons",
"position": 5,
"reverse_id": "persons",
|
{"golden_diff": "diff --git a/src/richie/apps/search/defaults.py b/src/richie/apps/search/defaults.py\n--- a/src/richie/apps/search/defaults.py\n+++ b/src/richie/apps/search/defaults.py\n@@ -118,6 +118,8 @@\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n+ # Note: this is a special name that connects the filter to Organization objects\n+ # in Richie as well was the corresponding indexer and API endpoint.\n \"name\": \"organizations\",\n \"position\": 4,\n \"reverse_id\": \"organizations\",\n@@ -130,6 +132,8 @@\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n+ # Note: this is a special name that connects the filter to Person objects\n+ # in Richie as well was the corresponding indexer and API endpoint.\n \"name\": \"persons\",\n \"position\": 5,\n \"reverse_id\": \"persons\",\n", "issue": "On the search page, the \"more options\" feature is broken on the \"contributors\" filter\n## Bug Report\r\n\r\n**Problematic behavior/code**\r\nThe \"more options\" feature on the \"contributors\" filter on the search page is broken.\r\n\r\n**Expected Behavior**\r\nWhen we click on \"more options\" on the \"contributors\" filter on the search page, we expect to see a list of more contributors and be able to type a search request to refine the search and find a specific contributor by his/her first/lastname.\r\n\r\n**Steps to Reproduce**\r\n1. Go to the search page: https://www.fun-mooc.fr/en/courses/\r\n2. Click \"more options\" on the \"contributors\" filter\r\n\r\n**Environment**\r\n- Richie version: 2.5.0\r\n- Platform: docker\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nImport custom settings and set up defaults for values the Search app needs\n\"\"\"\nfrom django.conf import settings\nfrom django.utils.functional import lazy\nfrom django.utils.translation import gettext_lazy as _\n\n# Elasticsearch\nES_CHUNK_SIZE = 500\nES_PAGE_SIZE = 10\n\n# Use a lazy to enable easier testing by not defining the value at bootstrap time\nES_INDICES_PREFIX = lazy(\n lambda: getattr(settings, \"RICHIE_ES_INDICES_PREFIX\", \"richie\")\n)()\n\n# Define which analyzer should be used for each language\nQUERY_ANALYZERS = getattr(\n settings, \"RICHIE_QUERY_ANALYZERS\", {\"en\": \"english\", \"fr\": \"french\"}\n)\n\n# Define the scoring boost (in ElasticSearch) related value names receive when using\n# full-text search.\n# For example, when a user searches for \"Science\" in full-text, it should match any\n# course whose category contains \"Science\" or a related word, albeit with a lower\n# score than courses that include it in their title or description.\n# This lower score factor is the boost value we get or set here.\nRELATED_CONTENT_BOOST = 0.05\n\nFACET_SORTING_DEFAULT = \"count\"\n\nFACET_COUNTS_DEFAULT_LIMIT = getattr(settings, \"RICHIE_FACET_COUNTS_DEFAULT_LIMIT\", 10)\nFACET_COUNTS_MAX_LIMIT = getattr(settings, \"RICHIE_FACET_COUNTS_MAX_LIMIT\", 50)\n\nES_STATE_WEIGHTS = getattr(settings, \"RICHIE_ES_STATE_WEIGHTS\", None) or [\n 80, # ONGOING_OPEN\n 70, # FUTURE_OPEN\n 60, # ARCHIVED_OPEN\n 30, # FUTURE_NOT_YET_OPEN\n 6, # FUTURE_CLOSED\n 5, # ONGOING_CLOSED\n 1, # ARCHIVED_CLOSED\n]\n\nFILTERS_CONFIGURATION = [\n (\n \"richie.apps.search.filter_definitions.StaticChoicesFilterDefinition\",\n {\n \"fragment_map\": {\"new\": [{\"term\": {\"is_new\": True}}]},\n \"human_name\": _(\"New courses\"),\n \"min_doc_count\": 0,\n \"name\": \"new\",\n \"position\": 0,\n \"sorting\": \"conf\",\n \"values\": {\"new\": _(\"First session\")},\n 
},\n ),\n (\n \"richie.apps.search.filter_definitions.NestingWrapper\",\n {\n \"name\": \"course_runs\",\n \"filters\": [\n (\n \"richie.apps.search.filter_definitions.AvailabilityFilterDefinition\",\n {\n \"human_name\": _(\"Availability\"),\n \"is_drilldown\": True,\n \"min_doc_count\": 0,\n \"name\": \"availability\",\n \"position\": 1,\n \"sorting\": \"conf\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.LanguagesFilterDefinition\",\n {\n \"human_name\": _(\"Languages\"),\n # There are too many available languages to show them all, all the time.\n # Eg. 200 languages, 190+ of which will have 0 matching courses.\n \"min_doc_count\": 1,\n \"name\": \"languages\",\n \"position\": 5,\n },\n ),\n ],\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Subjects\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"subjects\",\n \"position\": 2,\n \"reverse_id\": \"subjects\",\n \"term\": \"categories\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Levels\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"levels\",\n \"position\": 3,\n \"reverse_id\": \"levels\",\n \"term\": \"categories\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Organizations\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"organizations\",\n \"position\": 4,\n \"reverse_id\": \"organizations\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableFilterDefinition\",\n {\n \"human_name\": _(\"Persons\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"persons\",\n \"position\": 5,\n \"reverse_id\": \"persons\",\n },\n ),\n]\n", "path": "src/richie/apps/search/defaults.py"}], "after_files": [{"content": "\"\"\"\nImport custom settings and set up defaults for values the Search app needs\n\"\"\"\nfrom django.conf import settings\nfrom django.utils.functional import lazy\nfrom django.utils.translation import gettext_lazy as _\n\n# Elasticsearch\nES_CHUNK_SIZE = 500\nES_PAGE_SIZE = 10\n\n# Use a lazy to enable easier testing by not defining the value at bootstrap time\nES_INDICES_PREFIX = lazy(\n lambda: getattr(settings, \"RICHIE_ES_INDICES_PREFIX\", \"richie\")\n)()\n\n# Define which analyzer should be used for each language\nQUERY_ANALYZERS = getattr(\n settings, \"RICHIE_QUERY_ANALYZERS\", {\"en\": \"english\", \"fr\": \"french\"}\n)\n\n# Define the scoring boost (in ElasticSearch) related value names receive when using\n# full-text search.\n# For example, when a user searches for \"Science\" in full-text, it should match any\n# course whose category contains \"Science\" or a related word, albeit with a lower\n# score than courses that include it in their title or description.\n# This lower score factor is the boost value we get or set here.\nRELATED_CONTENT_BOOST = 0.05\n\nFACET_SORTING_DEFAULT = \"count\"\n\nFACET_COUNTS_DEFAULT_LIMIT = getattr(settings, \"RICHIE_FACET_COUNTS_DEFAULT_LIMIT\", 10)\nFACET_COUNTS_MAX_LIMIT = getattr(settings, \"RICHIE_FACET_COUNTS_MAX_LIMIT\", 50)\n\nES_STATE_WEIGHTS = getattr(settings, \"RICHIE_ES_STATE_WEIGHTS\", None) or [\n 80, # ONGOING_OPEN\n 70, # FUTURE_OPEN\n 60, # ARCHIVED_OPEN\n 30, # FUTURE_NOT_YET_OPEN\n 6, # FUTURE_CLOSED\n 5, # 
ONGOING_CLOSED\n 1, # ARCHIVED_CLOSED\n]\n\nFILTERS_CONFIGURATION = [\n (\n \"richie.apps.search.filter_definitions.StaticChoicesFilterDefinition\",\n {\n \"fragment_map\": {\"new\": [{\"term\": {\"is_new\": True}}]},\n \"human_name\": _(\"New courses\"),\n \"min_doc_count\": 0,\n \"name\": \"new\",\n \"position\": 0,\n \"sorting\": \"conf\",\n \"values\": {\"new\": _(\"First session\")},\n },\n ),\n (\n \"richie.apps.search.filter_definitions.NestingWrapper\",\n {\n \"name\": \"course_runs\",\n \"filters\": [\n (\n \"richie.apps.search.filter_definitions.AvailabilityFilterDefinition\",\n {\n \"human_name\": _(\"Availability\"),\n \"is_drilldown\": True,\n \"min_doc_count\": 0,\n \"name\": \"availability\",\n \"position\": 1,\n \"sorting\": \"conf\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.LanguagesFilterDefinition\",\n {\n \"human_name\": _(\"Languages\"),\n # There are too many available languages to show them all, all the time.\n # Eg. 200 languages, 190+ of which will have 0 matching courses.\n \"min_doc_count\": 1,\n \"name\": \"languages\",\n \"position\": 5,\n },\n ),\n ],\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Subjects\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"subjects\",\n \"position\": 2,\n \"reverse_id\": \"subjects\",\n \"term\": \"categories\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Levels\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n \"name\": \"levels\",\n \"position\": 3,\n \"reverse_id\": \"levels\",\n \"term\": \"categories\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableHierarchicalFilterDefinition\",\n {\n \"human_name\": _(\"Organizations\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n # Note: this is a special name that connects the filter to Organization objects\n # in Richie as well was the corresponding indexer and API endpoint.\n \"name\": \"organizations\",\n \"position\": 4,\n \"reverse_id\": \"organizations\",\n },\n ),\n (\n \"richie.apps.search.filter_definitions.IndexableFilterDefinition\",\n {\n \"human_name\": _(\"Persons\"),\n \"is_autocompletable\": True,\n \"is_searchable\": True,\n \"min_doc_count\": 0,\n # Note: this is a special name that connects the filter to Person objects\n # in Richie as well was the corresponding indexer and API endpoint.\n \"name\": \"persons\",\n \"position\": 5,\n \"reverse_id\": \"persons\",\n },\n ),\n]\n", "path": "src/richie/apps/search/defaults.py"}]}
| 1,781 | 240 |
gh_patches_debug_7419
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-6344
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scale/Range incompatibility in examples/models/server/population.py
in master:
Scale/Range incompatibility in examples/models/server/population.py
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/models/server/population.py`
Content:
```
1 from __future__ import print_function
2
3 from math import pi
4
5 from bokeh.client import push_session
6 from bokeh.document import Document
7 from bokeh.models.glyphs import Line, HBar
8 from bokeh.models import (Plot, ColumnDataSource, DataRange1d, FactorRange,
9 LinearAxis, CategoricalAxis, Grid, Legend, CategoricalScale)
10 from bokeh.sampledata.population import load_population
11 from bokeh.models.widgets import Select
12 from bokeh.models.layouts import WidgetBox, Column
13
14 document = Document()
15 session = push_session(document)
16
17 df = load_population()
18 revision = 2012
19
20 year, location = 2010, "World"
21
22 years = [str(x) for x in sorted(df.Year.unique())]
23 locations = sorted(df.Location.unique())
24 groups = [str(x) for x in df.AgeGrp.unique()]
25 groups.remove('80+') # remove oddball group
26
27 source_pyramid_m = ColumnDataSource(data=dict(value=[], group=[]))
28 source_pyramid_f = ColumnDataSource(data=dict(value=[], group=[]))
29
30 def pyramid():
31 xdr = DataRange1d()
32 ydr = FactorRange(factors=groups)
33 y_scale = CategoricalScale()
34
35 plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=500, toolbar_location=None)
36
37 xaxis = LinearAxis()
38 plot.add_layout(xaxis, 'below')
39 plot.add_layout(CategoricalAxis(), 'left')
40
41 plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))
42
43 m = HBar(left="value", right=0, y="group", height=1, fill_color="#3B8686")
44 mglyph = plot.add_glyph(source_pyramid_m, m)
45
46 f = HBar(left=0, right="value", y="group", height=1, fill_color="#CFF09E")
47 fglyph = plot.add_glyph(source_pyramid_f, f)
48
49 plot.add_layout(Legend(items=[("Male" , [mglyph]), ("Female" , [fglyph])]))
50
51 return plot
52
53 source_known = ColumnDataSource(data=dict(x=[], y=[]))
54 source_predicted = ColumnDataSource(data=dict(x=[], y=[]))
55
56 def population():
57 xdr = FactorRange(factors=years)
58 ydr = DataRange1d()
59 y_scale = CategoricalScale()
60
61 plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=150, toolbar_location=None)
62
63 plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')
64
65 known = Line(x="x", y="y", line_color="violet", line_width=2)
66 known_glyph = plot.add_glyph(source_known, known)
67
68 predicted = Line(x="x", y="y", line_color="violet", line_width=2, line_dash="dashed")
69 predicted_glyph = plot.add_glyph(source_predicted, predicted)
70
71 legend = Legend(location="bottom_right",
72 items=[("known", [known_glyph]), ("predicted", [predicted_glyph])])
73 plot.add_layout(legend)
74
75 return plot
76
77 def update_pyramid():
78 pyramid = df[(df.Location == location) & (df.Year == year)]
79
80 male = pyramid[pyramid.Sex == "Male"]
81 female = pyramid[pyramid.Sex == "Female"]
82
83 total = df.Value.sum()
84 male_percent = -male.Value / total
85 female_percent = female.Value / total
86
87 source_pyramid_m.data = dict(
88 group=[str(x) for x in male.AgeGrp.unique()],
89 value=male_percent,
90 )
91 source_pyramid_f.data = dict(
92 group=[str(x) for x in female.AgeGrp.unique()],
93 value=female_percent,
94 )
95
96 def update_population():
97 population = df[df.Location == location].groupby(df.Year).Value.sum()
98 aligned_revision = revision // 10 * 10
99
100 known = population[population.index <= aligned_revision]
101 predicted = population[population.index >= aligned_revision]
102
103 source_known.data = dict(x=known.index.map(str), y=known.values)
104 source_predicted.data = dict(x=predicted.index.map(str), y=predicted.values)
105
106 def update_data():
107 update_population()
108 update_pyramid()
109
110 def on_year_change(attr, old, new):
111 global year
112 year = int(new)
113 update_data()
114
115 def on_location_change(attr, old, new):
116 global location
117 location = new
118 update_data()
119
120 def create_layout():
121 year_select = Select(title="Year:", value="2010", options=years)
122 location_select = Select(title="Location:", value="World", options=locations)
123
124 year_select.on_change('value', on_year_change)
125 location_select.on_change('value', on_location_change)
126
127 controls = WidgetBox(children=[year_select, location_select], height=150, width=600)
128 layout = Column(children=[controls, pyramid(), population()])
129
130 return layout
131
132 layout = create_layout()
133
134 update_data()
135
136 document.add_root(layout)
137 session.show(layout)
138
139 if __name__ == "__main__":
140 document.validate()
141 print("\npress ctrl-C to exit")
142 session.loop_until_closed()
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/models/server/population.py b/examples/models/server/population.py
--- a/examples/models/server/population.py
+++ b/examples/models/server/population.py
@@ -56,9 +56,9 @@
def population():
xdr = FactorRange(factors=years)
ydr = DataRange1d()
- y_scale = CategoricalScale()
+ x_scale = CategoricalScale()
- plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=150, toolbar_location=None)
+ plot = Plot(x_range=xdr, y_range=ydr, x_scale=x_scale, plot_width=600, plot_height=150, toolbar_location=None)
plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')
|
{"golden_diff": "diff --git a/examples/models/server/population.py b/examples/models/server/population.py\n--- a/examples/models/server/population.py\n+++ b/examples/models/server/population.py\n@@ -56,9 +56,9 @@\n def population():\n xdr = FactorRange(factors=years)\n ydr = DataRange1d()\n- y_scale = CategoricalScale()\n+ x_scale = CategoricalScale()\n \n- plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=150, toolbar_location=None)\n+ plot = Plot(x_range=xdr, y_range=ydr, x_scale=x_scale, plot_width=600, plot_height=150, toolbar_location=None)\n \n plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')\n", "issue": "Scale/Range incompatibility in examples/models/server/population.py\nin master:\r\n\r\nScale/Range incompatibility in examples/models/server/population.py\n", "before_files": [{"content": "from __future__ import print_function\n\nfrom math import pi\n\nfrom bokeh.client import push_session\nfrom bokeh.document import Document\nfrom bokeh.models.glyphs import Line, HBar\nfrom bokeh.models import (Plot, ColumnDataSource, DataRange1d, FactorRange,\n LinearAxis, CategoricalAxis, Grid, Legend, CategoricalScale)\nfrom bokeh.sampledata.population import load_population\nfrom bokeh.models.widgets import Select\nfrom bokeh.models.layouts import WidgetBox, Column\n\ndocument = Document()\nsession = push_session(document)\n\ndf = load_population()\nrevision = 2012\n\nyear, location = 2010, \"World\"\n\nyears = [str(x) for x in sorted(df.Year.unique())]\nlocations = sorted(df.Location.unique())\ngroups = [str(x) for x in df.AgeGrp.unique()]\ngroups.remove('80+') # remove oddball group\n\nsource_pyramid_m = ColumnDataSource(data=dict(value=[], group=[]))\nsource_pyramid_f = ColumnDataSource(data=dict(value=[], group=[]))\n\ndef pyramid():\n xdr = DataRange1d()\n ydr = FactorRange(factors=groups)\n y_scale = CategoricalScale()\n\n plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=500, toolbar_location=None)\n\n xaxis = LinearAxis()\n plot.add_layout(xaxis, 'below')\n plot.add_layout(CategoricalAxis(), 'left')\n\n plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))\n\n m = HBar(left=\"value\", right=0, y=\"group\", height=1, fill_color=\"#3B8686\")\n mglyph = plot.add_glyph(source_pyramid_m, m)\n\n f = HBar(left=0, right=\"value\", y=\"group\", height=1, fill_color=\"#CFF09E\")\n fglyph = plot.add_glyph(source_pyramid_f, f)\n\n plot.add_layout(Legend(items=[(\"Male\" , [mglyph]), (\"Female\" , [fglyph])]))\n\n return plot\n\nsource_known = ColumnDataSource(data=dict(x=[], y=[]))\nsource_predicted = ColumnDataSource(data=dict(x=[], y=[]))\n\ndef population():\n xdr = FactorRange(factors=years)\n ydr = DataRange1d()\n y_scale = CategoricalScale()\n\n plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=150, toolbar_location=None)\n\n plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')\n\n known = Line(x=\"x\", y=\"y\", line_color=\"violet\", line_width=2)\n known_glyph = plot.add_glyph(source_known, known)\n\n predicted = Line(x=\"x\", y=\"y\", line_color=\"violet\", line_width=2, line_dash=\"dashed\")\n predicted_glyph = plot.add_glyph(source_predicted, predicted)\n\n legend = Legend(location=\"bottom_right\",\n items=[(\"known\", [known_glyph]), (\"predicted\", [predicted_glyph])])\n plot.add_layout(legend)\n\n return plot\n\ndef update_pyramid():\n pyramid = df[(df.Location == location) & (df.Year == year)]\n\n male = pyramid[pyramid.Sex == 
\"Male\"]\n female = pyramid[pyramid.Sex == \"Female\"]\n\n total = df.Value.sum()\n male_percent = -male.Value / total\n female_percent = female.Value / total\n\n source_pyramid_m.data = dict(\n group=[str(x) for x in male.AgeGrp.unique()],\n value=male_percent,\n )\n source_pyramid_f.data = dict(\n group=[str(x) for x in female.AgeGrp.unique()],\n value=female_percent,\n )\n\ndef update_population():\n population = df[df.Location == location].groupby(df.Year).Value.sum()\n aligned_revision = revision // 10 * 10\n\n known = population[population.index <= aligned_revision]\n predicted = population[population.index >= aligned_revision]\n\n source_known.data = dict(x=known.index.map(str), y=known.values)\n source_predicted.data = dict(x=predicted.index.map(str), y=predicted.values)\n\ndef update_data():\n update_population()\n update_pyramid()\n\ndef on_year_change(attr, old, new):\n global year\n year = int(new)\n update_data()\n\ndef on_location_change(attr, old, new):\n global location\n location = new\n update_data()\n\ndef create_layout():\n year_select = Select(title=\"Year:\", value=\"2010\", options=years)\n location_select = Select(title=\"Location:\", value=\"World\", options=locations)\n\n year_select.on_change('value', on_year_change)\n location_select.on_change('value', on_location_change)\n\n controls = WidgetBox(children=[year_select, location_select], height=150, width=600)\n layout = Column(children=[controls, pyramid(), population()])\n\n return layout\n\nlayout = create_layout()\n\nupdate_data()\n\ndocument.add_root(layout)\nsession.show(layout)\n\nif __name__ == \"__main__\":\n document.validate()\n print(\"\\npress ctrl-C to exit\")\n session.loop_until_closed()\n", "path": "examples/models/server/population.py"}], "after_files": [{"content": "from __future__ import print_function\n\nfrom math import pi\n\nfrom bokeh.client import push_session\nfrom bokeh.document import Document\nfrom bokeh.models.glyphs import Line, HBar\nfrom bokeh.models import (Plot, ColumnDataSource, DataRange1d, FactorRange,\n LinearAxis, CategoricalAxis, Grid, Legend, CategoricalScale)\nfrom bokeh.sampledata.population import load_population\nfrom bokeh.models.widgets import Select\nfrom bokeh.models.layouts import WidgetBox, Column\n\ndocument = Document()\nsession = push_session(document)\n\ndf = load_population()\nrevision = 2012\n\nyear, location = 2010, \"World\"\n\nyears = [str(x) for x in sorted(df.Year.unique())]\nlocations = sorted(df.Location.unique())\ngroups = [str(x) for x in df.AgeGrp.unique()]\ngroups.remove('80+') # remove oddball group\n\nsource_pyramid_m = ColumnDataSource(data=dict(value=[], group=[]))\nsource_pyramid_f = ColumnDataSource(data=dict(value=[], group=[]))\n\ndef pyramid():\n xdr = DataRange1d()\n ydr = FactorRange(factors=groups)\n y_scale = CategoricalScale()\n\n plot = Plot(x_range=xdr, y_range=ydr, y_scale=y_scale, plot_width=600, plot_height=500, toolbar_location=None)\n\n xaxis = LinearAxis()\n plot.add_layout(xaxis, 'below')\n plot.add_layout(CategoricalAxis(), 'left')\n\n plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))\n\n m = HBar(left=\"value\", right=0, y=\"group\", height=1, fill_color=\"#3B8686\")\n mglyph = plot.add_glyph(source_pyramid_m, m)\n\n f = HBar(left=0, right=\"value\", y=\"group\", height=1, fill_color=\"#CFF09E\")\n fglyph = plot.add_glyph(source_pyramid_f, f)\n\n plot.add_layout(Legend(items=[(\"Male\" , [mglyph]), (\"Female\" , [fglyph])]))\n\n return plot\n\nsource_known = ColumnDataSource(data=dict(x=[], 
y=[]))\nsource_predicted = ColumnDataSource(data=dict(x=[], y=[]))\n\ndef population():\n xdr = FactorRange(factors=years)\n ydr = DataRange1d()\n x_scale = CategoricalScale()\n\n plot = Plot(x_range=xdr, y_range=ydr, x_scale=x_scale, plot_width=600, plot_height=150, toolbar_location=None)\n\n plot.add_layout(CategoricalAxis(major_label_orientation=pi / 4), 'below')\n\n known = Line(x=\"x\", y=\"y\", line_color=\"violet\", line_width=2)\n known_glyph = plot.add_glyph(source_known, known)\n\n predicted = Line(x=\"x\", y=\"y\", line_color=\"violet\", line_width=2, line_dash=\"dashed\")\n predicted_glyph = plot.add_glyph(source_predicted, predicted)\n\n legend = Legend(location=\"bottom_right\",\n items=[(\"known\", [known_glyph]), (\"predicted\", [predicted_glyph])])\n plot.add_layout(legend)\n\n return plot\n\ndef update_pyramid():\n pyramid = df[(df.Location == location) & (df.Year == year)]\n\n male = pyramid[pyramid.Sex == \"Male\"]\n female = pyramid[pyramid.Sex == \"Female\"]\n\n total = df.Value.sum()\n male_percent = -male.Value / total\n female_percent = female.Value / total\n\n source_pyramid_m.data = dict(\n group=[str(x) for x in male.AgeGrp.unique()],\n value=male_percent,\n )\n source_pyramid_f.data = dict(\n group=[str(x) for x in female.AgeGrp.unique()],\n value=female_percent,\n )\n\ndef update_population():\n population = df[df.Location == location].groupby(df.Year).Value.sum()\n aligned_revision = revision // 10 * 10\n\n known = population[population.index <= aligned_revision]\n predicted = population[population.index >= aligned_revision]\n\n source_known.data = dict(x=known.index.map(str), y=known.values)\n source_predicted.data = dict(x=predicted.index.map(str), y=predicted.values)\n\ndef update_data():\n update_population()\n update_pyramid()\n\ndef on_year_change(attr, old, new):\n global year\n year = int(new)\n update_data()\n\ndef on_location_change(attr, old, new):\n global location\n location = new\n update_data()\n\ndef create_layout():\n year_select = Select(title=\"Year:\", value=\"2010\", options=years)\n location_select = Select(title=\"Location:\", value=\"World\", options=locations)\n\n year_select.on_change('value', on_year_change)\n location_select.on_change('value', on_location_change)\n\n controls = WidgetBox(children=[year_select, location_select], height=150, width=600)\n layout = Column(children=[controls, pyramid(), population()])\n\n return layout\n\nlayout = create_layout()\n\nupdate_data()\n\ndocument.add_root(layout)\nsession.show(layout)\n\nif __name__ == \"__main__\":\n document.validate()\n print(\"\\npress ctrl-C to exit\")\n session.loop_until_closed()\n", "path": "examples/models/server/population.py"}]}
| 1,772 | 187 |
gh_patches_debug_3350
|
rasdani/github-patches
|
git_diff
|
searxng__searxng-422
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Search suggestions are lumped together if Yahoo is enabled
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
1.0.0-940-32fb2bdf, master branch, not forked
**How did you install SearXNG?**
searxng-docker, fresh install from yesterday.
**What happened?**
Search keyword suggestions are lumped together in one.
**How To Reproduce**
Enable the Yahoo engine.
You can also reproduce this issue with the Yahoo bang (!yh).
**Expected behavior**
Normally, you would have separate keyword suggestions instead of what's happening right now.
**Screenshots & Logs**

**Additional context**
I have Google, Qwant, Duckduckgo, Startpage, Brave and Yahoo engines enabled by default for all users.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/yahoo.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Yahoo Search (Web)
4
5 Languages are supported by mapping the language to a domain. If domain is not
6 found in :py:obj:`lang2domain` URL ``<lang>.search.yahoo.com`` is used.
7
8 """
9
10 from urllib.parse import (
11 unquote,
12 urlencode,
13 )
14 from lxml import html
15
16 from searx.utils import (
17 eval_xpath_getindex,
18 eval_xpath_list,
19 extract_text,
20 match_language,
21 )
22
23 # about
24 about = {
25 "website": 'https://search.yahoo.com/',
26 "wikidata_id": None,
27 "official_api_documentation": 'https://developer.yahoo.com/api/',
28 "use_official_api": False,
29 "require_api_key": False,
30 "results": 'HTML',
31 }
32
33 # engine dependent config
34 categories = ['general']
35 paging = True
36 time_range_support = True
37 supported_languages_url = 'https://search.yahoo.com/preferences/languages'
38 """Supported languages are read from Yahoo preference page."""
39
40 time_range_dict = {
41 'day': ('1d', 'd'),
42 'week': ('1w', 'w'),
43 'month': ('1m', 'm'),
44 }
45
46 language_aliases = {
47 'zh-HK': 'zh_chs',
48 'zh-CN': 'zh_chs', # dead since 2015 / routed to hk.search.yahoo.com
49 'zh-TW': 'zh_cht',
50 }
51
52 lang2domain = {
53 'zh_chs' : 'hk.search.yahoo.com',
54 'zh_cht' : 'tw.search.yahoo.com',
55 'en' : 'search.yahoo.com',
56
57 'bg' : 'search.yahoo.com',
58 'cs' : 'search.yahoo.com',
59 'da' : 'search.yahoo.com',
60 'el' : 'search.yahoo.com',
61 'et' : 'search.yahoo.com',
62 'he' : 'search.yahoo.com',
63 'hr' : 'search.yahoo.com',
64 'ja' : 'search.yahoo.com',
65 'ko' : 'search.yahoo.com',
66 'sk' : 'search.yahoo.com',
67 'sl' : 'search.yahoo.com',
68
69 }
70 """Map language to domain"""
71
72 def _get_language(params):
73
74 lang = language_aliases.get(params['language'])
75 if lang is None:
76 lang = match_language(
77 params['language'], supported_languages, language_aliases
78 )
79 lang = lang.split('-')[0]
80 logger.debug("params['language']: %s --> %s" , params['language'], lang)
81 return lang
82
83 def request(query, params):
84 """build request"""
85 offset = (params['pageno'] - 1) * 7 + 1
86 lang = _get_language(params)
87 age, btf = time_range_dict.get(
88 params['time_range'], ('', ''))
89
90 args = urlencode({
91 'p' : query,
92 'ei' : 'UTF-8',
93 'fl' : 1,
94 'vl' : 'lang_' + lang,
95 'btf' : btf,
96 'fr2' : 'time',
97 'age' : age,
98 'b' : offset,
99 'xargs' :0
100 })
101
102 domain = lang2domain.get(lang, '%s.search.yahoo.com' % lang)
103 params['url'] = 'https://%s/search?%s' % (domain, args)
104 return params
105
106 def parse_url(url_string):
107 """remove yahoo-specific tracking-url"""
108
109 endings = ['/RS', '/RK']
110 endpositions = []
111 start = url_string.find('http', url_string.find('/RU=') + 1)
112
113 for ending in endings:
114 endpos = url_string.rfind(ending)
115 if endpos > -1:
116 endpositions.append(endpos)
117
118 if start == 0 or len(endpositions) == 0:
119 return url_string
120
121 end = min(endpositions)
122 return unquote(url_string[start:end])
123
124 def response(resp):
125 """parse response"""
126
127 results = []
128 dom = html.fromstring(resp.text)
129
130 # parse results
131 for result in eval_xpath_list(dom, '//div[contains(@class,"algo-sr")]'):
132 url = eval_xpath_getindex(result, './/h3/a/@href', 0, default=None)
133 if url is None:
134 continue
135 url = parse_url(url)
136
137 title = eval_xpath_getindex(result, './/h3/a', 0, default=None)
138 if title is None:
139 continue
140 offset = len(extract_text(title.xpath('span')))
141 title = extract_text(title)[offset:]
142
143 content = eval_xpath_getindex(
144 result, './/div[contains(@class, "compText")]', 0, default=''
145 )
146 if content:
147 content = extract_text(content)
148
149 # append result
150 results.append({
151 'url': url,
152 'title': title,
153 'content': content
154 })
155
156 for suggestion in eval_xpath_list(dom, '//div[contains(@class, "AlsoTry")]'):
157 # append suggestion
158 results.append({'suggestion': extract_text(suggestion)})
159
160 return results
161
162
163 # get supported languages from their site
164 def _fetch_supported_languages(resp):
165 supported_languages = []
166 dom = html.fromstring(resp.text)
167 offset = len('lang_')
168
169 for val in eval_xpath_list(dom, '//div[contains(@class, "lang-item")]/input/@value'):
170 supported_languages.append( val[offset:] )
171
172 return supported_languages
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/engines/yahoo.py b/searx/engines/yahoo.py
--- a/searx/engines/yahoo.py
+++ b/searx/engines/yahoo.py
@@ -153,7 +153,7 @@
'content': content
})
- for suggestion in eval_xpath_list(dom, '//div[contains(@class, "AlsoTry")]'):
+ for suggestion in eval_xpath_list(dom, '//div[contains(@class, "AlsoTry")]//table//a'):
# append suggestion
results.append({'suggestion': extract_text(suggestion)})
|
{"golden_diff": "diff --git a/searx/engines/yahoo.py b/searx/engines/yahoo.py\n--- a/searx/engines/yahoo.py\n+++ b/searx/engines/yahoo.py\n@@ -153,7 +153,7 @@\n 'content': content\n })\n \n- for suggestion in eval_xpath_list(dom, '//div[contains(@class, \"AlsoTry\")]'):\n+ for suggestion in eval_xpath_list(dom, '//div[contains(@class, \"AlsoTry\")]//table//a'):\n # append suggestion\n results.append({'suggestion': extract_text(suggestion)})\n", "issue": "Search suggestions are lumped together if Yahoo is enabled\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n1.0.0-940-32fb2bdf, master branch, not forked\r\n\r\n**How did you install SearXNG?**\r\nsearxng-docker, fresh install from yesterday.\r\n\r\n**What happened?**\r\nSearch keyword suggestions are lumped together in one.\r\n\r\n**How To Reproduce**\r\nEnable the Yahoo engine.\r\nYou can also reproduce this issue with the Yahoo bang (!yh). \r\n\r\n**Expected behavior**\r\nNormally, you would have separate keyword suggestions instead of what's happening right now. \r\n\r\n**Screenshots & Logs**\r\n\r\n\r\n**Additional context**\r\nI have Google, Qwant, Duckduckgo, Startpage, Brave and Yahoo engines enabled by default for all users.\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Yahoo Search (Web)\n\nLanguages are supported by mapping the language to a domain. If domain is not\nfound in :py:obj:`lang2domain` URL ``<lang>.search.yahoo.com`` is used.\n\n\"\"\"\n\nfrom urllib.parse import (\n unquote,\n urlencode,\n)\nfrom lxml import html\n\nfrom searx.utils import (\n eval_xpath_getindex,\n eval_xpath_list,\n extract_text,\n match_language,\n)\n\n# about\nabout = {\n \"website\": 'https://search.yahoo.com/',\n \"wikidata_id\": None,\n \"official_api_documentation\": 'https://developer.yahoo.com/api/',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\n# engine dependent config\ncategories = ['general']\npaging = True\ntime_range_support = True\nsupported_languages_url = 'https://search.yahoo.com/preferences/languages'\n\"\"\"Supported languages are read from Yahoo preference page.\"\"\"\n\ntime_range_dict = {\n 'day': ('1d', 'd'),\n 'week': ('1w', 'w'),\n 'month': ('1m', 'm'),\n}\n\nlanguage_aliases = {\n 'zh-HK': 'zh_chs',\n 'zh-CN': 'zh_chs', # dead since 2015 / routed to hk.search.yahoo.com\n 'zh-TW': 'zh_cht',\n}\n\nlang2domain = {\n 'zh_chs' : 'hk.search.yahoo.com',\n 'zh_cht' : 'tw.search.yahoo.com',\n 'en' : 'search.yahoo.com',\n\n 'bg' : 'search.yahoo.com',\n 'cs' : 'search.yahoo.com',\n 'da' : 'search.yahoo.com',\n 'el' : 'search.yahoo.com',\n 'et' : 'search.yahoo.com',\n 'he' : 'search.yahoo.com',\n 'hr' : 'search.yahoo.com',\n 'ja' : 'search.yahoo.com',\n 'ko' : 'search.yahoo.com',\n 'sk' : 'search.yahoo.com',\n 'sl' : 'search.yahoo.com',\n\n}\n\"\"\"Map language to domain\"\"\"\n\ndef _get_language(params):\n\n lang = language_aliases.get(params['language'])\n if lang is None:\n lang = match_language(\n params['language'], supported_languages, language_aliases\n )\n lang = lang.split('-')[0]\n logger.debug(\"params['language']: %s --> %s\" , params['language'], lang)\n return lang\n\ndef request(query, params):\n \"\"\"build request\"\"\"\n offset = (params['pageno'] - 1) * 7 + 1\n lang = _get_language(params)\n age, btf = time_range_dict.get(\n params['time_range'], ('', ''))\n\n args = urlencode({\n 'p' : query,\n 'ei' : 'UTF-8',\n 'fl' : 1,\n 'vl' : 'lang_' + lang,\n 
'btf' : btf,\n 'fr2' : 'time',\n 'age' : age,\n 'b' : offset,\n 'xargs' :0\n })\n\n domain = lang2domain.get(lang, '%s.search.yahoo.com' % lang)\n params['url'] = 'https://%s/search?%s' % (domain, args)\n return params\n\ndef parse_url(url_string):\n \"\"\"remove yahoo-specific tracking-url\"\"\"\n\n endings = ['/RS', '/RK']\n endpositions = []\n start = url_string.find('http', url_string.find('/RU=') + 1)\n\n for ending in endings:\n endpos = url_string.rfind(ending)\n if endpos > -1:\n endpositions.append(endpos)\n\n if start == 0 or len(endpositions) == 0:\n return url_string\n\n end = min(endpositions)\n return unquote(url_string[start:end])\n\ndef response(resp):\n \"\"\"parse response\"\"\"\n\n results = []\n dom = html.fromstring(resp.text)\n\n # parse results\n for result in eval_xpath_list(dom, '//div[contains(@class,\"algo-sr\")]'):\n url = eval_xpath_getindex(result, './/h3/a/@href', 0, default=None)\n if url is None:\n continue\n url = parse_url(url)\n\n title = eval_xpath_getindex(result, './/h3/a', 0, default=None)\n if title is None:\n continue\n offset = len(extract_text(title.xpath('span')))\n title = extract_text(title)[offset:]\n\n content = eval_xpath_getindex(\n result, './/div[contains(@class, \"compText\")]', 0, default=''\n )\n if content:\n content = extract_text(content)\n\n # append result\n results.append({\n 'url': url,\n 'title': title,\n 'content': content\n })\n\n for suggestion in eval_xpath_list(dom, '//div[contains(@class, \"AlsoTry\")]'):\n # append suggestion\n results.append({'suggestion': extract_text(suggestion)})\n\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n supported_languages = []\n dom = html.fromstring(resp.text)\n offset = len('lang_')\n\n for val in eval_xpath_list(dom, '//div[contains(@class, \"lang-item\")]/input/@value'):\n supported_languages.append( val[offset:] )\n\n return supported_languages\n", "path": "searx/engines/yahoo.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Yahoo Search (Web)\n\nLanguages are supported by mapping the language to a domain. 
If domain is not\nfound in :py:obj:`lang2domain` URL ``<lang>.search.yahoo.com`` is used.\n\n\"\"\"\n\nfrom urllib.parse import (\n unquote,\n urlencode,\n)\nfrom lxml import html\n\nfrom searx.utils import (\n eval_xpath_getindex,\n eval_xpath_list,\n extract_text,\n match_language,\n)\n\n# about\nabout = {\n \"website\": 'https://search.yahoo.com/',\n \"wikidata_id\": None,\n \"official_api_documentation\": 'https://developer.yahoo.com/api/',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\n# engine dependent config\ncategories = ['general']\npaging = True\ntime_range_support = True\nsupported_languages_url = 'https://search.yahoo.com/preferences/languages'\n\"\"\"Supported languages are read from Yahoo preference page.\"\"\"\n\ntime_range_dict = {\n 'day': ('1d', 'd'),\n 'week': ('1w', 'w'),\n 'month': ('1m', 'm'),\n}\n\nlanguage_aliases = {\n 'zh-HK': 'zh_chs',\n 'zh-CN': 'zh_chs', # dead since 2015 / routed to hk.search.yahoo.com\n 'zh-TW': 'zh_cht',\n}\n\nlang2domain = {\n 'zh_chs' : 'hk.search.yahoo.com',\n 'zh_cht' : 'tw.search.yahoo.com',\n 'en' : 'search.yahoo.com',\n\n 'bg' : 'search.yahoo.com',\n 'cs' : 'search.yahoo.com',\n 'da' : 'search.yahoo.com',\n 'el' : 'search.yahoo.com',\n 'et' : 'search.yahoo.com',\n 'he' : 'search.yahoo.com',\n 'hr' : 'search.yahoo.com',\n 'ja' : 'search.yahoo.com',\n 'ko' : 'search.yahoo.com',\n 'sk' : 'search.yahoo.com',\n 'sl' : 'search.yahoo.com',\n\n}\n\"\"\"Map language to domain\"\"\"\n\ndef _get_language(params):\n\n lang = language_aliases.get(params['language'])\n if lang is None:\n lang = match_language(\n params['language'], supported_languages, language_aliases\n )\n lang = lang.split('-')[0]\n logger.debug(\"params['language']: %s --> %s\" , params['language'], lang)\n return lang\n\ndef request(query, params):\n \"\"\"build request\"\"\"\n offset = (params['pageno'] - 1) * 7 + 1\n lang = _get_language(params)\n age, btf = time_range_dict.get(\n params['time_range'], ('', ''))\n\n args = urlencode({\n 'p' : query,\n 'ei' : 'UTF-8',\n 'fl' : 1,\n 'vl' : 'lang_' + lang,\n 'btf' : btf,\n 'fr2' : 'time',\n 'age' : age,\n 'b' : offset,\n 'xargs' :0\n })\n\n domain = lang2domain.get(lang, '%s.search.yahoo.com' % lang)\n params['url'] = 'https://%s/search?%s' % (domain, args)\n return params\n\ndef parse_url(url_string):\n \"\"\"remove yahoo-specific tracking-url\"\"\"\n\n endings = ['/RS', '/RK']\n endpositions = []\n start = url_string.find('http', url_string.find('/RU=') + 1)\n\n for ending in endings:\n endpos = url_string.rfind(ending)\n if endpos > -1:\n endpositions.append(endpos)\n\n if start == 0 or len(endpositions) == 0:\n return url_string\n\n end = min(endpositions)\n return unquote(url_string[start:end])\n\ndef response(resp):\n \"\"\"parse response\"\"\"\n\n results = []\n dom = html.fromstring(resp.text)\n\n # parse results\n for result in eval_xpath_list(dom, '//div[contains(@class,\"algo-sr\")]'):\n url = eval_xpath_getindex(result, './/h3/a/@href', 0, default=None)\n if url is None:\n continue\n url = parse_url(url)\n\n title = eval_xpath_getindex(result, './/h3/a', 0, default=None)\n if title is None:\n continue\n offset = len(extract_text(title.xpath('span')))\n title = extract_text(title)[offset:]\n\n content = eval_xpath_getindex(\n result, './/div[contains(@class, \"compText\")]', 0, default=''\n )\n if content:\n content = extract_text(content)\n\n # append result\n results.append({\n 'url': url,\n 'title': title,\n 'content': content\n })\n\n for suggestion in 
eval_xpath_list(dom, '//div[contains(@class, \"AlsoTry\")]//table//a'):\n # append suggestion\n results.append({'suggestion': extract_text(suggestion)})\n\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n supported_languages = []\n dom = html.fromstring(resp.text)\n offset = len('lang_')\n\n for val in eval_xpath_list(dom, '//div[contains(@class, \"lang-item\")]/input/@value'):\n supported_languages.append( val[offset:] )\n\n return supported_languages\n", "path": "searx/engines/yahoo.py"}]}
| 2,142 | 134 |
gh_patches_debug_762
|
rasdani/github-patches
|
git_diff
|
kubeflow__pipelines-2610
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
kfp 0.1.35 tar.gz in pypi.org is missing diagnose_me directory
**What happened:**
The 0.1.35 release of kfp available on pypi.org (i.e. what is installed via `pip3 install kfp`) seems to be missing the `kfp/cli/diagnose_me` directory containing the diagnose_me modules required by the cli. The release hosted on github contains these files.
This is the tar.gz file hosted on pypi: https://files.pythonhosted.org/packages/e8/02/51dbeae211ddf1c931b2d1613db90856b7d94a53c1d9f704593dfa6253ae/kfp-0.1.35.tar.gz
If you try to install and run kfp 0.1.35 via pip it causes an error:
```
Traceback (most recent call last):
File "/Users/shenderson/venvs/kubeflow/bin/kfp", line 5, in <module>
from kfp.__main__ import main
File "/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/__main__.py", line 15, in <module>
from .cli.cli import main
File "/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/cli/cli.py", line 21, in <module>
from .diagnose_me_cli import diagnose_me
File "/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/cli/diagnose_me_cli.py", line 6, in <module>
from .diagnose_me import dev_env
ModuleNotFoundError: No module named 'kfp.cli.diagnose_me'
```
**What did you expect to happen:**
All kfp modules including the diagnose_me package to be installed.
**What steps did you take:**
* Run `pip3 install --upgrade --force --no-cache-dir kfp`
* Run `kfp`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import re
17 from setuptools import setup
18
19 NAME = 'kfp'
20 #VERSION = .... Change the version in kfp/__init__.py
21
22 REQUIRES = [
23 'urllib3>=1.15,<1.25', #Fixing the version conflict with the "requests" package
24 'six >= 1.10',
25 'certifi',
26 'python-dateutil',
27 'PyYAML',
28 'google-cloud-storage>=1.13.0',
29 'kubernetes>=8.0.0, <=9.0.0',
30 'PyJWT>=1.6.4',
31 'cryptography>=2.4.2',
32 'google-auth>=1.6.1',
33 'requests_toolbelt>=0.8.0',
34 'cloudpickle==1.1.1',
35 'kfp-server-api >= 0.1.18, <= 0.1.25', #Update the upper version whenever a new version of the kfp-server-api package is released. Update the lower version when there is a breaking change in kfp-server-api.
36 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1
37 'jsonschema >= 3.0.1',
38 'tabulate == 0.8.3',
39 'click == 7.0',
40 'Deprecated',
41 ]
42
43 def find_version(*file_path_parts):
44 here = os.path.abspath(os.path.dirname(__file__))
45 with open(os.path.join(here, *file_path_parts), 'r') as fp:
46 version_file_text = fp.read()
47
48 version_match = re.search(
49 r"^__version__ = ['\"]([^'\"]*)['\"]",
50 version_file_text,
51 re.M,
52 )
53 if version_match:
54 return version_match.group(1)
55
56 raise RuntimeError("Unable to find version string.")
57
58 setup(
59 name=NAME,
60 version=find_version("kfp", "__init__.py"),
61 description='KubeFlow Pipelines SDK',
62 author='google',
63 install_requires=REQUIRES,
64 packages=[
65 'kfp',
66 'kfp.cli',
67 'kfp.compiler',
68 'kfp.components',
69 'kfp.components.structures',
70 'kfp.components.structures.kubernetes',
71 'kfp.containers',
72 'kfp.dsl',
73 'kfp.notebook',
74 ],
75 classifiers=[
76 'Intended Audience :: Developers',
77 'Intended Audience :: Education',
78 'Intended Audience :: Science/Research',
79 'License :: OSI Approved :: Apache Software License',
80 'Programming Language :: Python :: 3',
81 'Programming Language :: Python :: 3.5',
82 'Programming Language :: Python :: 3.6',
83 'Programming Language :: Python :: 3.7',
84 'Topic :: Scientific/Engineering',
85 'Topic :: Scientific/Engineering :: Artificial Intelligence',
86 'Topic :: Software Development',
87 'Topic :: Software Development :: Libraries',
88 'Topic :: Software Development :: Libraries :: Python Modules',
89 ],
90 python_requires='>=3.5.3',
91 include_package_data=True,
92 entry_points={
93 'console_scripts': [
94 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'
95 ]
96 })
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -64,6 +64,7 @@
packages=[
'kfp',
'kfp.cli',
+ 'kfp.cli.diagnose_me',
'kfp.compiler',
'kfp.components',
'kfp.components.structures',
|
{"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -64,6 +64,7 @@\n packages=[\n 'kfp',\n 'kfp.cli',\n+ 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n", "issue": "kfp 0.1.35 tar.gz in pypi.org is missing diagnose_me directory\n**What happened:**\r\nThe 0.1.35 release of kfp available on pypi.org (i.e. what is installed via `pip3 install kfp`) seems to be missing the `kfp/cli/diagnose_me` directory containing the diagnose_me modules required by the cli. The release hosted on github contains these files.\r\n\r\nThis is the tar.gz file hosted on pypi: https://files.pythonhosted.org/packages/e8/02/51dbeae211ddf1c931b2d1613db90856b7d94a53c1d9f704593dfa6253ae/kfp-0.1.35.tar.gz\r\n\r\nIf you try to install and run kfp 0.1.35 via pip it causes an error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/shenderson/venvs/kubeflow/bin/kfp\", line 5, in <module>\r\n from kfp.__main__ import main\r\n File \"/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/__main__.py\", line 15, in <module>\r\n from .cli.cli import main\r\n File \"/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/cli/cli.py\", line 21, in <module>\r\n from .diagnose_me_cli import diagnose_me\r\n File \"/Users/shenderson/venvs/kubeflow/lib/python3.7/site-packages/kfp/cli/diagnose_me_cli.py\", line 6, in <module>\r\n from .diagnose_me import dev_env\r\nModuleNotFoundError: No module named 'kfp.cli.diagnose_me'\r\n```\r\n\r\n**What did you expect to happen:**\r\nAll kfp modules including the diagnose_me package to be installed.\r\n\r\n**What steps did you take:**\r\n* Run `pip3 install --upgrade --force --no-cache-dir kfp`\r\n* Run `kfp`\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... Change the version in kfp/__init__.py\n\nREQUIRES = [\n 'urllib3>=1.15,<1.25', #Fixing the version conflict with the \"requests\" package\n 'six >= 1.10',\n 'certifi',\n 'python-dateutil',\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <=9.0.0',\n 'PyJWT>=1.6.4',\n 'cryptography>=2.4.2',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle==1.1.1',\n 'kfp-server-api >= 0.1.18, <= 0.1.25', #Update the upper version whenever a new version of the kfp-server-api package is released. 
Update the lower version when there is a breaking change in kfp-server-api.\n 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1\n 'jsonschema >= 3.0.1',\n 'tabulate == 0.8.3',\n 'click == 7.0',\n 'Deprecated',\n]\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError(\"Unable to find version string.\")\n\nsetup(\n name=NAME,\n version=find_version(\"kfp\", \"__init__.py\"),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... Change the version in kfp/__init__.py\n\nREQUIRES = [\n 'urllib3>=1.15,<1.25', #Fixing the version conflict with the \"requests\" package\n 'six >= 1.10',\n 'certifi',\n 'python-dateutil',\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <=9.0.0',\n 'PyJWT>=1.6.4',\n 'cryptography>=2.4.2',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle==1.1.1',\n 'kfp-server-api >= 0.1.18, <= 0.1.25', #Update the upper version whenever a new version of the kfp-server-api package is released. 
Update the lower version when there is a breaking change in kfp-server-api.\n 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1\n 'jsonschema >= 3.0.1',\n 'tabulate == 0.8.3',\n 'click == 7.0',\n 'Deprecated',\n]\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError(\"Unable to find version string.\")\n\nsetup(\n name=NAME,\n version=find_version(\"kfp\", \"__init__.py\"),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main', 'kfp=kfp.__main__:main'\n ]\n })\n", "path": "sdk/python/setup.py"}]}
| 1,761 | 85 |
gh_patches_debug_25943
|
rasdani/github-patches
|
git_diff
|
opensearch-project__opensearch-build-2437
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: Updated input manifests creates old version manifests part of `legacy-manifests/` folder
### Describe the bug
With the change to move the old input manifests to the [legacy-manifests folder](url) in the build repo, the [auto-generate manifest workflow](https://github.com/opensearch-project/opensearch-build/blob/main/.github/workflows/versions.yml) creates even the manifests that are part of the legacy-manifests folder, assuming they do not exist.
Sample PR.
https://github.com/opensearch-project/opensearch-build/pull/2389/files
### To reproduce
The workflow PR
https://github.com/opensearch-project/opensearch-build/pull/2389/files
### Expected behavior
The `./manifest.sh update` logic should be modified:
1) Either it should only create manifests with a version number greater than those of the manifests inside the [legacy manifest folder](https://github.com/opensearch-project/opensearch-build/tree/main/legacy-manifests)
2) Or add logic to compare both the manifests and legacy-manifests folders.
### Screenshots
If applicable, add screenshots to help explain your problem.
### Host / Environment
_No response_
### Additional context
_No response_
### Relevant log output
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/manifests_workflow/input_manifests.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import glob
8 import logging
9 import os
10 import re
11 from abc import abstractmethod
12 from typing import Dict, List, Type, Union
13
14 from manifests.input_manifest import InputComponents, InputManifest
15 from manifests.manifests import Manifests
16 from manifests_workflow.component_opensearch import ComponentOpenSearch
17 from manifests_workflow.component_opensearch_dashboards_min import ComponentOpenSearchDashboardsMin
18 from manifests_workflow.component_opensearch_min import ComponentOpenSearchMin
19 from system.temporary_directory import TemporaryDirectory
20
21
22 class InputManifests(Manifests):
23 def __init__(self, name: str) -> None:
24 self.name = name
25 self.prefix = name.lower().replace(" ", "-")
26 super().__init__(InputManifest, InputManifests.files(self.prefix))
27
28 @classmethod
29 def manifests_path(self) -> str:
30 return os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "..", "manifests"))
31
32 @classmethod
33 def jenkins_path(self) -> str:
34 return os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "..", "jenkins"))
35
36 @classmethod
37 def cron_jenkinsfile(self) -> str:
38 return os.path.join(self.jenkins_path(), "check-for-build.jenkinsfile")
39
40 @classmethod
41 def files(self, name: str) -> List:
42 results = []
43 for filename in glob.glob(os.path.join(self.manifests_path(), f"**/{name}-*.yml")):
44 # avoids the -maven manifest
45 match = re.search(rf"^{name}-([0-9.]*).yml$", os.path.basename(filename))
46 if match:
47 results.append(filename)
48 return results
49
50 @abstractmethod
51 def update(
52 self,
53 min_klass: Union[Type[ComponentOpenSearchMin], Type[ComponentOpenSearchDashboardsMin]],
54 component_klass: Type[ComponentOpenSearch],
55 keep: bool = False,
56 ) -> None:
57 known_versions = self.versions
58 logging.info(f"Known versions: {known_versions}")
59 main_versions: Dict = {}
60 with TemporaryDirectory(keep=keep, chdir=True) as work_dir:
61 logging.info(f"Checking out components into {work_dir.name}")
62
63 # check out and build #main, 1.x, etc.
64 branches = min_klass.branches()
65
66 logging.info(f"Checking {self.name} {branches} branches")
67 for branch in branches:
68 c = min_klass.checkout(
69 path=os.path.join(work_dir.name, self.name.replace(" ", ""), branch),
70 branch=branch,
71 )
72
73 version = c.version
74 logging.info(f"{self.name}#{branch} is version {version}")
75 if version not in main_versions.keys():
76 main_versions[version] = [c]
77
78 if component_klass is not None:
79 # components can increment their own version first without incrementing min
80 manifest = self.latest
81 logging.info(f"Examining components in the latest manifest of {manifest.build.name} ({manifest.build.version})")
82 for component in manifest.components.values():
83 if component.name == self.name:
84 continue
85
86 logging.info(f"Checking out {component.name}#main")
87 component = component_klass.checkout(
88 name=component.name,
89 path=os.path.join(work_dir.name, component.name),
90 opensearch_version=manifest.build.version,
91 branch="main",
92 )
93
94 component_version = component.version
95 if component_version:
96 release_version = ".".join(component_version.split(".")[:3])
97 if release_version not in main_versions.keys():
98 main_versions[release_version] = []
99 main_versions[release_version].append(component)
100 logging.info(f"{component.name}#main is version {release_version} (from {component_version})")
101
102 # summarize
103 logging.info("Found versions on main:")
104 for main_version in main_versions.keys():
105 for component in main_versions[main_version]:
106 logging.info(f" {component.name}={main_version}")
107
108 # generate new manifests
109 for release_version in sorted(main_versions.keys() - known_versions):
110 self.write_manifest(release_version, main_versions[release_version])
111 self.add_to_cron(release_version)
112
113 def create_manifest(self, version: str, components: List = []) -> InputManifest:
114 templates_base_path = os.path.join(self.manifests_path(), "templates")
115 template_version_folder = version.split(".")[0] + ".x"
116 template_full_path = os.path.join(templates_base_path, self.prefix, template_version_folder, "manifest.yml")
117 if not os.path.exists(template_full_path):
118 template_full_path = os.path.join(templates_base_path, self.prefix, "default", "manifest.yml")
119
120 manifest = InputManifest.from_file(open(template_full_path))
121
122 manifest.build.version = version
123 manifests_components = []
124
125 for component in components:
126 logging.info(f" Adding {component.name}")
127 manifests_components.append(component.to_dict())
128
129 manifest.components = InputComponents(manifests_components) # type: ignore
130 return manifest
131
132 def write_manifest(self, version: str, components: List = []) -> None:
133 logging.info(f"Creating new version: {version}")
134 manifest = self.create_manifest(version, components)
135 manifest_dir = os.path.join(self.manifests_path(), version)
136 os.makedirs(manifest_dir, exist_ok=True)
137 manifest_path = os.path.join(manifest_dir, f"{self.prefix}-{version}.yml")
138 manifest.to_file(manifest_path)
139 logging.info(f"Wrote {manifest_path}")
140
141 def add_to_cron(self, version: str) -> None:
142 logging.info(f"Adding new version to cron: {version}")
143 jenkinsfile = self.cron_jenkinsfile()
144 with open(jenkinsfile, "r") as f:
145 data = f.read()
146
147 cron_entry = f"H 1 * * * %INPUT_MANIFEST={version}/{self.prefix}-{version}.yml;TARGET_JOB_NAME=distribution-build-{self.prefix}\n"
148
149 if cron_entry in data:
150 raise ValueError(f"{jenkinsfile} already contains an entry for {self.prefix} {version}")
151
152 data = data.replace("parameterizedCron '''\n", f"parameterizedCron '''\n{' ' * 12}{cron_entry}")
153
154 with open(jenkinsfile, "w") as f:
155 f.write(data)
156
157 logging.info(f"Wrote {jenkinsfile}")
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/manifests_workflow/input_manifests.py b/src/manifests_workflow/input_manifests.py
--- a/src/manifests_workflow/input_manifests.py
+++ b/src/manifests_workflow/input_manifests.py
@@ -29,6 +29,10 @@
def manifests_path(self) -> str:
return os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "..", "manifests"))
+ @classmethod
+ def legacy_manifests_path(self) -> str:
+ return os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "..", "legacy-manifests"))
+
@classmethod
def jenkins_path(self) -> str:
return os.path.realpath(os.path.join(os.path.dirname(__file__), "..", "..", "jenkins"))
@@ -40,11 +44,12 @@
@classmethod
def files(self, name: str) -> List:
results = []
- for filename in glob.glob(os.path.join(self.manifests_path(), f"**/{name}-*.yml")):
- # avoids the -maven manifest
- match = re.search(rf"^{name}-([0-9.]*).yml$", os.path.basename(filename))
- if match:
- results.append(filename)
+ for path in [self.manifests_path(), self.legacy_manifests_path()]:
+ for filename in glob.glob(os.path.join(path, f"**/{name}-*.yml")):
+ # avoids the -maven manifest
+ match = re.search(rf"^{name}-([0-9.]*).yml$", os.path.basename(filename))
+ if match:
+ results.append(filename)
return results
@abstractmethod
|
{"golden_diff": "diff --git a/src/manifests_workflow/input_manifests.py b/src/manifests_workflow/input_manifests.py\n--- a/src/manifests_workflow/input_manifests.py\n+++ b/src/manifests_workflow/input_manifests.py\n@@ -29,6 +29,10 @@\n def manifests_path(self) -> str:\n return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"manifests\"))\n \n+ @classmethod\n+ def legacy_manifests_path(self) -> str:\n+ return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"legacy-manifests\"))\n+\n @classmethod\n def jenkins_path(self) -> str:\n return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"jenkins\"))\n@@ -40,11 +44,12 @@\n @classmethod\n def files(self, name: str) -> List:\n results = []\n- for filename in glob.glob(os.path.join(self.manifests_path(), f\"**/{name}-*.yml\")):\n- # avoids the -maven manifest\n- match = re.search(rf\"^{name}-([0-9.]*).yml$\", os.path.basename(filename))\n- if match:\n- results.append(filename)\n+ for path in [self.manifests_path(), self.legacy_manifests_path()]:\n+ for filename in glob.glob(os.path.join(path, f\"**/{name}-*.yml\")):\n+ # avoids the -maven manifest\n+ match = re.search(rf\"^{name}-([0-9.]*).yml$\", os.path.basename(filename))\n+ if match:\n+ results.append(filename)\n return results\n \n @abstractmethod\n", "issue": "[Bug]: Updated input manifests creates old version manifests part of `legacy-manifests/` folder\n### Describe the bug\n\nWith the change to move the old input manifests to [legacy-manifests folder](url) in build repo, the [auto generate manifest workflow ](https://github.com/opensearch-project/opensearch-build/blob/main/.github/workflows/versions.yml) creates even the manifests part of the legacy-manifests folder assuming they does not exist.\r\nSample PR.\r\nhttps://github.com/opensearch-project/opensearch-build/pull/2389/files\n\n### To reproduce\n\nThe workflow PR\r\nhttps://github.com/opensearch-project/opensearch-build/pull/2389/files\n\n### Expected behavior\n\nThe `./manifest.sh update` logic should be modified:\r\n1) Either it should create manifests greater than the version number from the manifests inside the [legacy manifest folder](https://github.com/opensearch-project/opensearch-build/tree/main/legacy-manifests)\r\n2) Logic to compare both manifests and legacy-manifests folder.\n\n### Screenshots\n\nIf applicable, add screenshots to help explain your problem.\n\n### Host / Environment\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Relevant log output\n\n_No response_\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport glob\nimport logging\nimport os\nimport re\nfrom abc import abstractmethod\nfrom typing import Dict, List, Type, Union\n\nfrom manifests.input_manifest import InputComponents, InputManifest\nfrom manifests.manifests import Manifests\nfrom manifests_workflow.component_opensearch import ComponentOpenSearch\nfrom manifests_workflow.component_opensearch_dashboards_min import ComponentOpenSearchDashboardsMin\nfrom manifests_workflow.component_opensearch_min import ComponentOpenSearchMin\nfrom system.temporary_directory import TemporaryDirectory\n\n\nclass InputManifests(Manifests):\n def __init__(self, name: str) -> None:\n self.name = name\n self.prefix = name.lower().replace(\" \", \"-\")\n super().__init__(InputManifest, 
InputManifests.files(self.prefix))\n\n @classmethod\n def manifests_path(self) -> str:\n return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"manifests\"))\n\n @classmethod\n def jenkins_path(self) -> str:\n return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"jenkins\"))\n\n @classmethod\n def cron_jenkinsfile(self) -> str:\n return os.path.join(self.jenkins_path(), \"check-for-build.jenkinsfile\")\n\n @classmethod\n def files(self, name: str) -> List:\n results = []\n for filename in glob.glob(os.path.join(self.manifests_path(), f\"**/{name}-*.yml\")):\n # avoids the -maven manifest\n match = re.search(rf\"^{name}-([0-9.]*).yml$\", os.path.basename(filename))\n if match:\n results.append(filename)\n return results\n\n @abstractmethod\n def update(\n self,\n min_klass: Union[Type[ComponentOpenSearchMin], Type[ComponentOpenSearchDashboardsMin]],\n component_klass: Type[ComponentOpenSearch],\n keep: bool = False,\n ) -> None:\n known_versions = self.versions\n logging.info(f\"Known versions: {known_versions}\")\n main_versions: Dict = {}\n with TemporaryDirectory(keep=keep, chdir=True) as work_dir:\n logging.info(f\"Checking out components into {work_dir.name}\")\n\n # check out and build #main, 1.x, etc.\n branches = min_klass.branches()\n\n logging.info(f\"Checking {self.name} {branches} branches\")\n for branch in branches:\n c = min_klass.checkout(\n path=os.path.join(work_dir.name, self.name.replace(\" \", \"\"), branch),\n branch=branch,\n )\n\n version = c.version\n logging.info(f\"{self.name}#{branch} is version {version}\")\n if version not in main_versions.keys():\n main_versions[version] = [c]\n\n if component_klass is not None:\n # components can increment their own version first without incrementing min\n manifest = self.latest\n logging.info(f\"Examining components in the latest manifest of {manifest.build.name} ({manifest.build.version})\")\n for component in manifest.components.values():\n if component.name == self.name:\n continue\n\n logging.info(f\"Checking out {component.name}#main\")\n component = component_klass.checkout(\n name=component.name,\n path=os.path.join(work_dir.name, component.name),\n opensearch_version=manifest.build.version,\n branch=\"main\",\n )\n\n component_version = component.version\n if component_version:\n release_version = \".\".join(component_version.split(\".\")[:3])\n if release_version not in main_versions.keys():\n main_versions[release_version] = []\n main_versions[release_version].append(component)\n logging.info(f\"{component.name}#main is version {release_version} (from {component_version})\")\n\n # summarize\n logging.info(\"Found versions on main:\")\n for main_version in main_versions.keys():\n for component in main_versions[main_version]:\n logging.info(f\" {component.name}={main_version}\")\n\n # generate new manifests\n for release_version in sorted(main_versions.keys() - known_versions):\n self.write_manifest(release_version, main_versions[release_version])\n self.add_to_cron(release_version)\n\n def create_manifest(self, version: str, components: List = []) -> InputManifest:\n templates_base_path = os.path.join(self.manifests_path(), \"templates\")\n template_version_folder = version.split(\".\")[0] + \".x\"\n template_full_path = os.path.join(templates_base_path, self.prefix, template_version_folder, \"manifest.yml\")\n if not os.path.exists(template_full_path):\n template_full_path = os.path.join(templates_base_path, self.prefix, \"default\", \"manifest.yml\")\n\n 
manifest = InputManifest.from_file(open(template_full_path))\n\n manifest.build.version = version\n manifests_components = []\n\n for component in components:\n logging.info(f\" Adding {component.name}\")\n manifests_components.append(component.to_dict())\n\n manifest.components = InputComponents(manifests_components) # type: ignore\n return manifest\n\n def write_manifest(self, version: str, components: List = []) -> None:\n logging.info(f\"Creating new version: {version}\")\n manifest = self.create_manifest(version, components)\n manifest_dir = os.path.join(self.manifests_path(), version)\n os.makedirs(manifest_dir, exist_ok=True)\n manifest_path = os.path.join(manifest_dir, f\"{self.prefix}-{version}.yml\")\n manifest.to_file(manifest_path)\n logging.info(f\"Wrote {manifest_path}\")\n\n def add_to_cron(self, version: str) -> None:\n logging.info(f\"Adding new version to cron: {version}\")\n jenkinsfile = self.cron_jenkinsfile()\n with open(jenkinsfile, \"r\") as f:\n data = f.read()\n\n cron_entry = f\"H 1 * * * %INPUT_MANIFEST={version}/{self.prefix}-{version}.yml;TARGET_JOB_NAME=distribution-build-{self.prefix}\\n\"\n\n if cron_entry in data:\n raise ValueError(f\"{jenkinsfile} already contains an entry for {self.prefix} {version}\")\n\n data = data.replace(\"parameterizedCron '''\\n\", f\"parameterizedCron '''\\n{' ' * 12}{cron_entry}\")\n\n with open(jenkinsfile, \"w\") as f:\n f.write(data)\n\n logging.info(f\"Wrote {jenkinsfile}\")\n", "path": "src/manifests_workflow/input_manifests.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport glob\nimport logging\nimport os\nimport re\nfrom abc import abstractmethod\nfrom typing import Dict, List, Type, Union\n\nfrom manifests.input_manifest import InputComponents, InputManifest\nfrom manifests.manifests import Manifests\nfrom manifests_workflow.component_opensearch import ComponentOpenSearch\nfrom manifests_workflow.component_opensearch_dashboards_min import ComponentOpenSearchDashboardsMin\nfrom manifests_workflow.component_opensearch_min import ComponentOpenSearchMin\nfrom system.temporary_directory import TemporaryDirectory\n\n\nclass InputManifests(Manifests):\n def __init__(self, name: str) -> None:\n self.name = name\n self.prefix = name.lower().replace(\" \", \"-\")\n super().__init__(InputManifest, InputManifests.files(self.prefix))\n\n @classmethod\n def manifests_path(self) -> str:\n return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"manifests\"))\n\n @classmethod\n def legacy_manifests_path(self) -> str:\n return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"legacy-manifests\"))\n\n @classmethod\n def jenkins_path(self) -> str:\n return os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", \"..\", \"jenkins\"))\n\n @classmethod\n def cron_jenkinsfile(self) -> str:\n return os.path.join(self.jenkins_path(), \"check-for-build.jenkinsfile\")\n\n @classmethod\n def files(self, name: str) -> List:\n results = []\n for path in [self.manifests_path(), self.legacy_manifests_path()]:\n for filename in glob.glob(os.path.join(path, f\"**/{name}-*.yml\")):\n # avoids the -maven manifest\n match = re.search(rf\"^{name}-([0-9.]*).yml$\", os.path.basename(filename))\n if match:\n results.append(filename)\n return results\n\n @abstractmethod\n def update(\n self,\n min_klass: 
Union[Type[ComponentOpenSearchMin], Type[ComponentOpenSearchDashboardsMin]],\n component_klass: Type[ComponentOpenSearch],\n keep: bool = False,\n ) -> None:\n known_versions = self.versions\n logging.info(f\"Known versions: {known_versions}\")\n main_versions: Dict = {}\n with TemporaryDirectory(keep=keep, chdir=True) as work_dir:\n logging.info(f\"Checking out components into {work_dir.name}\")\n\n # check out and build #main, 1.x, etc.\n branches = min_klass.branches()\n\n logging.info(f\"Checking {self.name} {branches} branches\")\n for branch in branches:\n c = min_klass.checkout(\n path=os.path.join(work_dir.name, self.name.replace(\" \", \"\"), branch),\n branch=branch,\n )\n\n version = c.version\n logging.info(f\"{self.name}#{branch} is version {version}\")\n if version not in main_versions.keys():\n main_versions[version] = [c]\n\n if component_klass is not None:\n # components can increment their own version first without incrementing min\n manifest = self.latest\n logging.info(f\"Examining components in the latest manifest of {manifest.build.name} ({manifest.build.version})\")\n for component in manifest.components.values():\n if component.name == self.name:\n continue\n\n logging.info(f\"Checking out {component.name}#main\")\n component = component_klass.checkout(\n name=component.name,\n path=os.path.join(work_dir.name, component.name),\n opensearch_version=manifest.build.version,\n branch=\"main\",\n )\n\n component_version = component.version\n if component_version:\n release_version = \".\".join(component_version.split(\".\")[:3])\n if release_version not in main_versions.keys():\n main_versions[release_version] = []\n main_versions[release_version].append(component)\n logging.info(f\"{component.name}#main is version {release_version} (from {component_version})\")\n\n # summarize\n logging.info(\"Found versions on main:\")\n for main_version in main_versions.keys():\n for component in main_versions[main_version]:\n logging.info(f\" {component.name}={main_version}\")\n\n # generate new manifests\n for release_version in sorted(main_versions.keys() - known_versions):\n self.write_manifest(release_version, main_versions[release_version])\n self.add_to_cron(release_version)\n\n def create_manifest(self, version: str, components: List = []) -> InputManifest:\n templates_base_path = os.path.join(self.manifests_path(), \"templates\")\n template_version_folder = version.split(\".\")[0] + \".x\"\n template_full_path = os.path.join(templates_base_path, self.prefix, template_version_folder, \"manifest.yml\")\n if not os.path.exists(template_full_path):\n template_full_path = os.path.join(templates_base_path, self.prefix, \"default\", \"manifest.yml\")\n\n manifest = InputManifest.from_file(open(template_full_path))\n\n manifest.build.version = version\n manifests_components = []\n\n for component in components:\n logging.info(f\" Adding {component.name}\")\n manifests_components.append(component.to_dict())\n\n manifest.components = InputComponents(manifests_components) # type: ignore\n return manifest\n\n def write_manifest(self, version: str, components: List = []) -> None:\n logging.info(f\"Creating new version: {version}\")\n manifest = self.create_manifest(version, components)\n manifest_dir = os.path.join(self.manifests_path(), version)\n os.makedirs(manifest_dir, exist_ok=True)\n manifest_path = os.path.join(manifest_dir, f\"{self.prefix}-{version}.yml\")\n manifest.to_file(manifest_path)\n logging.info(f\"Wrote {manifest_path}\")\n\n def add_to_cron(self, version: str) -> 
None:\n logging.info(f\"Adding new version to cron: {version}\")\n jenkinsfile = self.cron_jenkinsfile()\n with open(jenkinsfile, \"r\") as f:\n data = f.read()\n\n cron_entry = f\"H 1 * * * %INPUT_MANIFEST={version}/{self.prefix}-{version}.yml;TARGET_JOB_NAME=distribution-build-{self.prefix}\\n\"\n\n if cron_entry in data:\n raise ValueError(f\"{jenkinsfile} already contains an entry for {self.prefix} {version}\")\n\n data = data.replace(\"parameterizedCron '''\\n\", f\"parameterizedCron '''\\n{' ' * 12}{cron_entry}\")\n\n with open(jenkinsfile, \"w\") as f:\n f.write(data)\n\n logging.info(f\"Wrote {jenkinsfile}\")\n", "path": "src/manifests_workflow/input_manifests.py"}]}
| 2,318 | 380 |
gh_patches_debug_41258
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-5774
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.artetv: error: Unable to validate response text: ValidationError(dict):
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
streamlink 6.5.0
### Description
I fix this issue
```
by adding '**API_HLS_NG**' in line 51 of file
`/usr/lib/python3.11/site-packages/streamlink/plugins/artetv.py`
like this :
```

link/streamlink/assets/19744191/b78f47ba-67b2-439b-b336-85bef7e4615a)
### Debug log
```text
error: Unable to validate response text: ValidationError(dict):
Unable to validate value of key 'data'
Context(dict):
Unable to validate value of key 'attributes'
Context(dict):
Unable to validate value of key 'streams'
Context(AnySchema):
ValidationError(AnySchema):
ValidationError(AnySchema):
ValidationError(dict):
Unable to validate value of key 'protocol'
Context(AnySchema):
ValidationError(equality):
'API_HLS_NG' does not equal 'HLS'
ValidationError(equality):
'API_HLS_NG' does not equal 'HLS_NG'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/artetv.py`
Content:
```
1 """
2 $description European public service channel promoting culture, including magazine shows, concerts and documentaries.
3 $url arte.tv
4 $type live, vod
5 $metadata title
6 """
7
8 import logging
9 import re
10 from operator import itemgetter
11
12 from streamlink.plugin import Plugin, pluginmatcher
13 from streamlink.plugin.api import validate
14 from streamlink.stream.hls import HLSStream
15
16
17 log = logging.getLogger(__name__)
18
19
20 @pluginmatcher(re.compile(r"""
21 https?://(?:\w+\.)?arte\.tv/(?:guide/)?
22 (?P<language>[a-z]{2})/
23 (?:
24 (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+
25 |
26 (?:direct|live)
27 )
28 """, re.VERBOSE))
29 class ArteTV(Plugin):
30 API_URL = "https://api.arte.tv/api/player/v2/config/{0}/{1}"
31 API_TOKEN = "MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ"
32
33 def _get_streams(self):
34 language = self.match.group("language")
35 video_id = self.match.group("video_id")
36
37 json_url = self.API_URL.format(language, video_id or "LIVE")
38 headers = {
39 "Authorization": f"Bearer {self.API_TOKEN}",
40 }
41 streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(
42 validate.parse_json(),
43 {"data": {"attributes": {
44 "streams": validate.any(
45 [],
46 [
47 validate.all(
48 {
49 "url": validate.url(),
50 "slot": int,
51 "protocol": validate.any("HLS", "HLS_NG"),
52 },
53 validate.union_get("slot", "protocol", "url"),
54 ),
55 ],
56 ),
57 "metadata": {
58 "title": str,
59 "subtitle": validate.any(None, str),
60 },
61 }}},
62 validate.get(("data", "attributes")),
63 validate.union_get("streams", "metadata"),
64 ))
65
66 if not streams:
67 return
68
69 self.title = f"{metadata['title']} - {metadata['subtitle']}" if metadata["subtitle"] else metadata["title"]
70
71 for _slot, _protocol, url in sorted(streams, key=itemgetter(0)):
72 return HLSStream.parse_variant_playlist(self.session, url)
73
74
75 __plugin__ = ArteTV
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/artetv.py b/src/streamlink/plugins/artetv.py
--- a/src/streamlink/plugins/artetv.py
+++ b/src/streamlink/plugins/artetv.py
@@ -2,6 +2,7 @@
$description European public service channel promoting culture, including magazine shows, concerts and documentaries.
$url arte.tv
$type live, vod
+$metadata id
$metadata title
"""
@@ -17,38 +18,41 @@
log = logging.getLogger(__name__)
-@pluginmatcher(re.compile(r"""
- https?://(?:\w+\.)?arte\.tv/(?:guide/)?
- (?P<language>[a-z]{2})/
- (?:
- (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+
- |
- (?:direct|live)
- )
-""", re.VERBOSE))
+@pluginmatcher(
+ name="live",
+ pattern=re.compile(
+ r"https?://(?:\w+\.)?arte\.tv/(?P<language>[a-z]{2})/(?:direct|live)/?",
+ ),
+)
+@pluginmatcher(
+ name="vod",
+ pattern=re.compile(
+ r"https?://(?:\w+\.)?arte\.tv/(?:guide/)?(?P<language>[a-z]{2})/(?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+",
+ ),
+)
class ArteTV(Plugin):
- API_URL = "https://api.arte.tv/api/player/v2/config/{0}/{1}"
- API_TOKEN = "MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ"
+ API_URL = "https://api.arte.tv/api/player/v2/config/{language}/{id}"
def _get_streams(self):
- language = self.match.group("language")
- video_id = self.match.group("video_id")
+ self.id = self.match["video_id"] if self.matches["vod"] else "LIVE"
- json_url = self.API_URL.format(language, video_id or "LIVE")
- headers = {
- "Authorization": f"Bearer {self.API_TOKEN}",
- }
- streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(
+ json_url = self.API_URL.format(
+ language=self.match["language"],
+ id=self.id,
+ )
+ streams, metadata = self.session.http.get(json_url, schema=validate.Schema(
validate.parse_json(),
- {"data": {"attributes": {
+ {"data": {"attributes": dict}},
+ validate.get(("data", "attributes")),
+ {
"streams": validate.any(
[],
[
validate.all(
{
- "url": validate.url(),
"slot": int,
- "protocol": validate.any("HLS", "HLS_NG"),
+ "protocol": str,
+ "url": validate.url(),
},
validate.union_get("slot", "protocol", "url"),
),
@@ -58,17 +62,15 @@
"title": str,
"subtitle": validate.any(None, str),
},
- }}},
- validate.get(("data", "attributes")),
+ },
validate.union_get("streams", "metadata"),
))
- if not streams:
- return
-
self.title = f"{metadata['title']} - {metadata['subtitle']}" if metadata["subtitle"] else metadata["title"]
- for _slot, _protocol, url in sorted(streams, key=itemgetter(0)):
+ for _slot, protocol, url in sorted(streams, key=itemgetter(0)):
+ if "HLS" not in protocol:
+ continue
return HLSStream.parse_variant_playlist(self.session, url)
|
{"golden_diff": "diff --git a/src/streamlink/plugins/artetv.py b/src/streamlink/plugins/artetv.py\n--- a/src/streamlink/plugins/artetv.py\n+++ b/src/streamlink/plugins/artetv.py\n@@ -2,6 +2,7 @@\n $description European public service channel promoting culture, including magazine shows, concerts and documentaries.\n $url arte.tv\n $type live, vod\n+$metadata id\n $metadata title\n \"\"\"\n \n@@ -17,38 +18,41 @@\n log = logging.getLogger(__name__)\n \n \n-@pluginmatcher(re.compile(r\"\"\"\n- https?://(?:\\w+\\.)?arte\\.tv/(?:guide/)?\n- (?P<language>[a-z]{2})/\n- (?:\n- (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+\n- |\n- (?:direct|live)\n- )\n-\"\"\", re.VERBOSE))\n+@pluginmatcher(\n+ name=\"live\",\n+ pattern=re.compile(\n+ r\"https?://(?:\\w+\\.)?arte\\.tv/(?P<language>[a-z]{2})/(?:direct|live)/?\",\n+ ),\n+)\n+@pluginmatcher(\n+ name=\"vod\",\n+ pattern=re.compile(\n+ r\"https?://(?:\\w+\\.)?arte\\.tv/(?:guide/)?(?P<language>[a-z]{2})/(?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+\",\n+ ),\n+)\n class ArteTV(Plugin):\n- API_URL = \"https://api.arte.tv/api/player/v2/config/{0}/{1}\"\n- API_TOKEN = \"MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ\"\n+ API_URL = \"https://api.arte.tv/api/player/v2/config/{language}/{id}\"\n \n def _get_streams(self):\n- language = self.match.group(\"language\")\n- video_id = self.match.group(\"video_id\")\n+ self.id = self.match[\"video_id\"] if self.matches[\"vod\"] else \"LIVE\"\n \n- json_url = self.API_URL.format(language, video_id or \"LIVE\")\n- headers = {\n- \"Authorization\": f\"Bearer {self.API_TOKEN}\",\n- }\n- streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(\n+ json_url = self.API_URL.format(\n+ language=self.match[\"language\"],\n+ id=self.id,\n+ )\n+ streams, metadata = self.session.http.get(json_url, schema=validate.Schema(\n validate.parse_json(),\n- {\"data\": {\"attributes\": {\n+ {\"data\": {\"attributes\": dict}},\n+ validate.get((\"data\", \"attributes\")),\n+ {\n \"streams\": validate.any(\n [],\n [\n validate.all(\n {\n- \"url\": validate.url(),\n \"slot\": int,\n- \"protocol\": validate.any(\"HLS\", \"HLS_NG\"),\n+ \"protocol\": str,\n+ \"url\": validate.url(),\n },\n validate.union_get(\"slot\", \"protocol\", \"url\"),\n ),\n@@ -58,17 +62,15 @@\n \"title\": str,\n \"subtitle\": validate.any(None, str),\n },\n- }}},\n- validate.get((\"data\", \"attributes\")),\n+ },\n validate.union_get(\"streams\", \"metadata\"),\n ))\n \n- if not streams:\n- return\n-\n self.title = f\"{metadata['title']} - {metadata['subtitle']}\" if metadata[\"subtitle\"] else metadata[\"title\"]\n \n- for _slot, _protocol, url in sorted(streams, key=itemgetter(0)):\n+ for _slot, protocol, url in sorted(streams, key=itemgetter(0)):\n+ if \"HLS\" not in protocol:\n+ continue\n return HLSStream.parse_variant_playlist(self.session, url)\n", "issue": "plugins.artetv: error: Unable to validate response text: ValidationError(dict):\n### Checklist\r\n\r\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\r\n- [X] [I have checked the commit log of the 
master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\nstreamlink 6.5.0\r\n\r\n### Description\r\n\r\nI fix this issue \r\n```\r\nby adding '**API_HLS_NG**' in line 51 of file \r\n`/usr/lib/python3.11/site-packages/streamlink/plugins/artetv.py`\r\nlike this :\r\n```\r\n\r\nlink/streamlink/assets/19744191/b78f47ba-67b2-439b-b336-85bef7e4615a)\r\n\r\n### Debug log\r\n\r\n```text\r\nerror: Unable to validate response text: ValidationError(dict):\r\n Unable to validate value of key 'data'\r\n Context(dict):\r\n Unable to validate value of key 'attributes'\r\n Context(dict):\r\n Unable to validate value of key 'streams'\r\n Context(AnySchema):\r\n ValidationError(AnySchema):\r\n ValidationError(AnySchema):\r\n ValidationError(dict):\r\n Unable to validate value of key 'protocol'\r\n Context(AnySchema):\r\n ValidationError(equality):\r\n 'API_HLS_NG' does not equal 'HLS'\r\n ValidationError(equality):\r\n 'API_HLS_NG' does not equal 'HLS_NG'\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\n$description European public service channel promoting culture, including magazine shows, concerts and documentaries.\n$url arte.tv\n$type live, vod\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom operator import itemgetter\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:\\w+\\.)?arte\\.tv/(?:guide/)?\n (?P<language>[a-z]{2})/\n (?:\n (?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+\n |\n (?:direct|live)\n )\n\"\"\", re.VERBOSE))\nclass ArteTV(Plugin):\n API_URL = \"https://api.arte.tv/api/player/v2/config/{0}/{1}\"\n API_TOKEN = \"MzYyZDYyYmM1Y2Q3ZWRlZWFjMmIyZjZjNTRiMGY4MzY4NzBhOWQ5YjE4MGQ1NGFiODJmOTFlZDQwN2FkOTZjMQ\"\n\n def _get_streams(self):\n language = self.match.group(\"language\")\n video_id = self.match.group(\"video_id\")\n\n json_url = self.API_URL.format(language, video_id or \"LIVE\")\n headers = {\n \"Authorization\": f\"Bearer {self.API_TOKEN}\",\n }\n streams, metadata = self.session.http.get(json_url, headers=headers, schema=validate.Schema(\n validate.parse_json(),\n {\"data\": {\"attributes\": {\n \"streams\": validate.any(\n [],\n [\n validate.all(\n {\n \"url\": validate.url(),\n \"slot\": int,\n \"protocol\": validate.any(\"HLS\", \"HLS_NG\"),\n },\n validate.union_get(\"slot\", \"protocol\", \"url\"),\n ),\n ],\n ),\n \"metadata\": {\n \"title\": str,\n \"subtitle\": validate.any(None, str),\n },\n }}},\n validate.get((\"data\", \"attributes\")),\n validate.union_get(\"streams\", \"metadata\"),\n ))\n\n if not streams:\n return\n\n self.title = f\"{metadata['title']} - {metadata['subtitle']}\" if metadata[\"subtitle\"] else metadata[\"title\"]\n\n for _slot, _protocol, url in sorted(streams, key=itemgetter(0)):\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = ArteTV\n", "path": "src/streamlink/plugins/artetv.py"}], "after_files": [{"content": "\"\"\"\n$description European public service channel promoting culture, including magazine shows, concerts and documentaries.\n$url arte.tv\n$type live, vod\n$metadata id\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom operator import itemgetter\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(\n 
name=\"live\",\n pattern=re.compile(\n r\"https?://(?:\\w+\\.)?arte\\.tv/(?P<language>[a-z]{2})/(?:direct|live)/?\",\n ),\n)\n@pluginmatcher(\n name=\"vod\",\n pattern=re.compile(\n r\"https?://(?:\\w+\\.)?arte\\.tv/(?:guide/)?(?P<language>[a-z]{2})/(?:videos/)?(?P<video_id>(?!RC-|videos)[^/]+?)/.+\",\n ),\n)\nclass ArteTV(Plugin):\n API_URL = \"https://api.arte.tv/api/player/v2/config/{language}/{id}\"\n\n def _get_streams(self):\n self.id = self.match[\"video_id\"] if self.matches[\"vod\"] else \"LIVE\"\n\n json_url = self.API_URL.format(\n language=self.match[\"language\"],\n id=self.id,\n )\n streams, metadata = self.session.http.get(json_url, schema=validate.Schema(\n validate.parse_json(),\n {\"data\": {\"attributes\": dict}},\n validate.get((\"data\", \"attributes\")),\n {\n \"streams\": validate.any(\n [],\n [\n validate.all(\n {\n \"slot\": int,\n \"protocol\": str,\n \"url\": validate.url(),\n },\n validate.union_get(\"slot\", \"protocol\", \"url\"),\n ),\n ],\n ),\n \"metadata\": {\n \"title\": str,\n \"subtitle\": validate.any(None, str),\n },\n },\n validate.union_get(\"streams\", \"metadata\"),\n ))\n\n self.title = f\"{metadata['title']} - {metadata['subtitle']}\" if metadata[\"subtitle\"] else metadata[\"title\"]\n\n for _slot, protocol, url in sorted(streams, key=itemgetter(0)):\n if \"HLS\" not in protocol:\n continue\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = ArteTV\n", "path": "src/streamlink/plugins/artetv.py"}]}
| 1,452 | 909 |
gh_patches_debug_44280
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-57
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for renaming fields
We should be able to specify a custom name for a field when needed, like this:
```python
@strawberry.type
class Query:
example_field: str = strawberry.field(name="example")
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/field.py`
Content:
```
1 import typing
2
3 import dataclasses
4 from graphql import GraphQLField
5
6 from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT
7 from .exceptions import MissingArgumentsAnnotationsError, MissingReturnAnnotationError
8 from .type_converter import REGISTRY, get_graphql_type_for_annotation
9 from .utils.dict_to_type import dict_to_type
10 from .utils.inspect import get_func_args
11 from .utils.lazy_property import lazy_property
12 from .utils.str_converters import to_camel_case, to_snake_case
13 from .utils.typing import (
14 get_list_annotation,
15 get_optional_annotation,
16 is_list,
17 is_optional,
18 )
19
20
21 class LazyFieldWrapper:
22 """A lazy wrapper for a strawberry field.
23 This allows to use cyclic dependencies in a strawberry fields:
24
25 >>> @strawberry.type
26 >>> class TypeA:
27 >>> @strawberry.field
28 >>> def type_b(self, info) -> "TypeB":
29 >>> from .type_b import TypeB
30 >>> return TypeB()
31 """
32
33 def __init__(self, obj, is_subscription, **kwargs):
34 self._wrapped_obj = obj
35 self.is_subscription = is_subscription
36 self.kwargs = kwargs
37
38 if callable(self._wrapped_obj):
39 self._check_has_annotations()
40
41 def _check_has_annotations(self):
42 # using annotations without passing from typing.get_type_hints
43 # as we don't the actually types for the annotations
44 annotations = self._wrapped_obj.__annotations__
45 name = self._wrapped_obj.__name__
46
47 if "return" not in annotations:
48 raise MissingReturnAnnotationError(name)
49
50 function_arguments = set(get_func_args(self._wrapped_obj)) - {"self", "info"}
51
52 arguments_annotations = {
53 key: value
54 for key, value in annotations.items()
55 if key not in ["info", "return"]
56 }
57
58 annotated_function_arguments = set(arguments_annotations.keys())
59 arguments_missing_annotations = (
60 function_arguments - annotated_function_arguments
61 )
62
63 if len(arguments_missing_annotations) > 0:
64 raise MissingArgumentsAnnotationsError(name, arguments_missing_annotations)
65
66 def __getattr__(self, attr):
67 if attr in self.__dict__:
68 return getattr(self, attr)
69
70 return getattr(self._wrapped_obj, attr)
71
72 def __call__(self, *args, **kwargs):
73 return self._wrapped_obj(self, *args, **kwargs)
74
75 @lazy_property
76 def field(self):
77 return _get_field(
78 self._wrapped_obj, is_subscription=self.is_subscription, **self.kwargs
79 )
80
81
82 class strawberry_field:
83 """A small wrapper for a field in strawberry.
84
85 You shouldn't be using this directly as this is used internally
86 when using `strawberry.field`.
87
88 This allows to use the following two syntaxes when using the type
89 decorator:
90
91 >>> class X:
92 >>> field_abc: str = strawberry.field(description="ABC")
93
94 >>> class X:
95 >>> @strawberry.field(description="ABC")
96 >>> def field_a(self, info) -> str:
97 >>> return "abc"
98
99 When calling this class as strawberry_field it creates a field
100 that stores metadata (such as field description). In addition
101 to that it also acts as decorator when called as a function,
102 allowing us to us both syntaxes.
103 """
104
105 def __init__(self, *, is_subscription=False, **kwargs):
106 self.field = dataclasses.field()
107 self.is_subscription = is_subscription
108 self.description = kwargs.get("description", None)
109 self.kwargs = kwargs
110
111 def __call__(self, wrap):
112 setattr(wrap, IS_STRAWBERRY_FIELD, True)
113
114 self.kwargs["description"] = self.description or wrap.__doc__
115
116 return LazyFieldWrapper(wrap, self.is_subscription, **self.kwargs)
117
118
119 def convert_args(args, annotations):
120 """Converts a nested dictionary to a dictionary of strawberry input types."""
121
122 converted_args = {}
123
124 for key, value in args.items():
125 key = to_snake_case(key)
126 annotation = annotations[key]
127
128 # we don't need to check about unions here since they are not
129 # yet supported for arguments.
130 # see https://github.com/graphql/graphql-spec/issues/488
131
132 is_list_of_args = False
133
134 if is_optional(annotation):
135 annotation = get_optional_annotation(annotation)
136
137 if is_list(annotation):
138 annotation = get_list_annotation(annotation)
139 is_list_of_args = True
140
141 if getattr(annotation, IS_STRAWBERRY_INPUT, False):
142 if is_list_of_args:
143 converted_args[key] = [dict_to_type(x, annotation) for x in value]
144 else:
145 converted_args[key] = dict_to_type(value, annotation)
146 else:
147 converted_args[key] = value
148
149 return converted_args
150
151
152 def _get_field(wrap, *, is_subscription=False, **kwargs):
153 annotations = typing.get_type_hints(wrap, None, REGISTRY)
154
155 name = wrap.__name__
156
157 field_type = get_graphql_type_for_annotation(annotations["return"], name)
158
159 arguments_annotations = {
160 key: value
161 for key, value in annotations.items()
162 if key not in ["info", "return"]
163 }
164
165 arguments = {
166 to_camel_case(name): get_graphql_type_for_annotation(annotation, name)
167 for name, annotation in arguments_annotations.items()
168 }
169
170 def resolver(source, info, **args):
171 args = convert_args(args, arguments_annotations)
172
173 return wrap(source, info, **args)
174
175 if is_subscription:
176
177 def _resolve(event, info):
178 return event
179
180 kwargs.update({"subscribe": resolver, "resolve": _resolve})
181 else:
182 kwargs.update({"resolve": resolver})
183
184 kwargs["description"] = kwargs.get("description", wrap.__doc__)
185
186 return GraphQLField(field_type, args=arguments, **kwargs)
187
188
189 def field(wrap=None, *, is_subscription=False, description=None):
190 """Annotates a method or property as a GraphQL field.
191
192 This is normally used inside a type declaration:
193
194 >>> @strawberry.type:
195 >>> class X:
196 >>> field_abc: str = strawberry.field(description="ABC")
197
198 >>> @strawberry.field(description="ABC")
199 >>> def field_with_resolver(self, info) -> str:
200 >>> return "abc"
201
202 it can be used both as decorator and as a normal function.
203 """
204
205 field = strawberry_field(description=description, is_subscription=is_subscription)
206
207 # when calling this with parens we are going to return a strawberry_field
208 # instance, so it can be used as both decorator and function.
209
210 if wrap is None:
211 return field
212
213 # otherwise we run the decorator directly,
214 # when called as @strawberry.field, without parens.
215
216 return field(wrap)
217
```
Path: `strawberry/type.py`
Content:
```
1 import typing
2 from functools import partial
3
4 from dataclasses import dataclass
5 from graphql import (
6 GraphQLField,
7 GraphQLInputField,
8 GraphQLInputObjectType,
9 GraphQLInterfaceType,
10 GraphQLObjectType,
11 )
12 from graphql.utilities.schema_printer import print_type
13
14 from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT, IS_STRAWBERRY_INTERFACE
15 from .type_converter import REGISTRY, get_graphql_type_for_annotation
16 from .utils.str_converters import to_camel_case
17
18
19 def _get_resolver(cls, field_name):
20 def _resolver(obj, info):
21 # TODO: can we make this nicer?
22 # does it work in all the cases?
23
24 field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)
25
26 if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):
27 return field_resolver(obj, info)
28
29 return field_resolver
30
31 return _resolver
32
33
34 def _convert_annotations_fields(cls, *, is_input=False):
35 FieldClass = GraphQLInputField if is_input else GraphQLField
36 annotations = typing.get_type_hints(cls, None, REGISTRY)
37
38 fields = {}
39
40 for key, annotation in annotations.items():
41 field_name = to_camel_case(key)
42 class_field = getattr(cls, key, None)
43
44 description = getattr(class_field, "description", None)
45
46 fields[field_name] = FieldClass(
47 get_graphql_type_for_annotation(annotation, key),
48 description=description,
49 **({} if is_input else {"resolve": _get_resolver(cls, key)})
50 )
51
52 return fields
53
54
55 def _process_type(cls, *, is_input=False, is_interface=False, description=None):
56 name = cls.__name__
57 REGISTRY[name] = cls
58
59 def repr_(self):
60 return print_type(self.field)
61
62 setattr(cls, "__repr__", repr_)
63
64 def _get_fields():
65 fields = _convert_annotations_fields(cls, is_input=is_input)
66
67 fields.update(
68 {
69 to_camel_case(key): value.field
70 for key, value in cls.__dict__.items()
71 if getattr(value, IS_STRAWBERRY_FIELD, False)
72 }
73 )
74
75 return fields
76
77 if is_input:
78 setattr(cls, IS_STRAWBERRY_INPUT, True)
79 elif is_interface:
80 setattr(cls, IS_STRAWBERRY_INTERFACE, True)
81
82 extra_kwargs = {"description": description or cls.__doc__}
83
84 if is_input:
85 TypeClass = GraphQLInputObjectType
86 elif is_interface:
87 TypeClass = GraphQLInterfaceType
88 else:
89 TypeClass = GraphQLObjectType
90
91 extra_kwargs["interfaces"] = [
92 klass.field
93 for klass in cls.__bases__
94 if hasattr(klass, IS_STRAWBERRY_INTERFACE)
95 ]
96
97 cls.field = TypeClass(name, lambda: _get_fields(), **extra_kwargs)
98
99 return dataclass(cls, repr=False)
100
101
102 def type(cls=None, *, is_input=False, is_interface=False, description=None):
103 """Annotates a class as a GraphQL type.
104
105 Example usage:
106
107 >>> @strawberry.type:
108 >>> class X:
109 >>> field_abc: str = "ABC"
110 """
111
112 def wrap(cls):
113 return _process_type(
114 cls, is_input=is_input, is_interface=is_interface, description=description
115 )
116
117 if cls is None:
118 return wrap
119
120 return wrap(cls)
121
122
123 input = partial(type, is_input=True)
124 interface = partial(type, is_interface=True)
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/field.py b/strawberry/field.py
--- a/strawberry/field.py
+++ b/strawberry/field.py
@@ -30,9 +30,10 @@
>>> return TypeB()
"""
- def __init__(self, obj, is_subscription, **kwargs):
+ def __init__(self, obj, is_subscription, name=None, **kwargs):
self._wrapped_obj = obj
self.is_subscription = is_subscription
+ self.name = name
self.kwargs = kwargs
if callable(self._wrapped_obj):
@@ -106,6 +107,7 @@
self.field = dataclasses.field()
self.is_subscription = is_subscription
self.description = kwargs.get("description", None)
+ self.name = kwargs.pop("name", None)
self.kwargs = kwargs
def __call__(self, wrap):
@@ -113,7 +115,7 @@
self.kwargs["description"] = self.description or wrap.__doc__
- return LazyFieldWrapper(wrap, self.is_subscription, **self.kwargs)
+ return LazyFieldWrapper(wrap, self.is_subscription, self.name, **self.kwargs)
def convert_args(args, annotations):
@@ -186,7 +188,7 @@
return GraphQLField(field_type, args=arguments, **kwargs)
-def field(wrap=None, *, is_subscription=False, description=None):
+def field(wrap=None, *, is_subscription=False, name=None, description=None):
"""Annotates a method or property as a GraphQL field.
This is normally used inside a type declaration:
@@ -202,7 +204,9 @@
it can be used both as decorator and as a normal function.
"""
- field = strawberry_field(description=description, is_subscription=is_subscription)
+ field = strawberry_field(
+ name=name, description=description, is_subscription=is_subscription
+ )
# when calling this with parens we are going to return a strawberry_field
# instance, so it can be used as both decorator and function.
diff --git a/strawberry/type.py b/strawberry/type.py
--- a/strawberry/type.py
+++ b/strawberry/type.py
@@ -12,6 +12,7 @@
from graphql.utilities.schema_printer import print_type
from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT, IS_STRAWBERRY_INTERFACE
+from .field import strawberry_field
from .type_converter import REGISTRY, get_graphql_type_for_annotation
from .utils.str_converters import to_camel_case
@@ -26,6 +27,10 @@
if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):
return field_resolver(obj, info)
+ elif field_resolver.__class__ is strawberry_field:
+ # TODO: support default values
+ return None
+
return field_resolver
return _resolver
@@ -38,10 +43,12 @@
fields = {}
for key, annotation in annotations.items():
- field_name = to_camel_case(key)
class_field = getattr(cls, key, None)
description = getattr(class_field, "description", None)
+ name = getattr(class_field, "name", None)
+
+ field_name = name or to_camel_case(key)
fields[field_name] = FieldClass(
get_graphql_type_for_annotation(annotation, key),
@@ -64,13 +71,16 @@
def _get_fields():
fields = _convert_annotations_fields(cls, is_input=is_input)
- fields.update(
- {
- to_camel_case(key): value.field
- for key, value in cls.__dict__.items()
- if getattr(value, IS_STRAWBERRY_FIELD, False)
- }
- )
+ strawberry_fields = {
+ key: value
+ for key, value in cls.__dict__.items()
+ if getattr(value, IS_STRAWBERRY_FIELD, False)
+ }
+
+ for key, value in strawberry_fields.items():
+ name = getattr(value, "name", None) or to_camel_case(key)
+
+ fields[name] = value.field
return fields
|
{"golden_diff": "diff --git a/strawberry/field.py b/strawberry/field.py\n--- a/strawberry/field.py\n+++ b/strawberry/field.py\n@@ -30,9 +30,10 @@\n >>> return TypeB()\n \"\"\"\n \n- def __init__(self, obj, is_subscription, **kwargs):\n+ def __init__(self, obj, is_subscription, name=None, **kwargs):\n self._wrapped_obj = obj\n self.is_subscription = is_subscription\n+ self.name = name\n self.kwargs = kwargs\n \n if callable(self._wrapped_obj):\n@@ -106,6 +107,7 @@\n self.field = dataclasses.field()\n self.is_subscription = is_subscription\n self.description = kwargs.get(\"description\", None)\n+ self.name = kwargs.pop(\"name\", None)\n self.kwargs = kwargs\n \n def __call__(self, wrap):\n@@ -113,7 +115,7 @@\n \n self.kwargs[\"description\"] = self.description or wrap.__doc__\n \n- return LazyFieldWrapper(wrap, self.is_subscription, **self.kwargs)\n+ return LazyFieldWrapper(wrap, self.is_subscription, self.name, **self.kwargs)\n \n \n def convert_args(args, annotations):\n@@ -186,7 +188,7 @@\n return GraphQLField(field_type, args=arguments, **kwargs)\n \n \n-def field(wrap=None, *, is_subscription=False, description=None):\n+def field(wrap=None, *, is_subscription=False, name=None, description=None):\n \"\"\"Annotates a method or property as a GraphQL field.\n \n This is normally used inside a type declaration:\n@@ -202,7 +204,9 @@\n it can be used both as decorator and as a normal function.\n \"\"\"\n \n- field = strawberry_field(description=description, is_subscription=is_subscription)\n+ field = strawberry_field(\n+ name=name, description=description, is_subscription=is_subscription\n+ )\n \n # when calling this with parens we are going to return a strawberry_field\n # instance, so it can be used as both decorator and function.\ndiff --git a/strawberry/type.py b/strawberry/type.py\n--- a/strawberry/type.py\n+++ b/strawberry/type.py\n@@ -12,6 +12,7 @@\n from graphql.utilities.schema_printer import print_type\n \n from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT, IS_STRAWBERRY_INTERFACE\n+from .field import strawberry_field\n from .type_converter import REGISTRY, get_graphql_type_for_annotation\n from .utils.str_converters import to_camel_case\n \n@@ -26,6 +27,10 @@\n if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):\n return field_resolver(obj, info)\n \n+ elif field_resolver.__class__ is strawberry_field:\n+ # TODO: support default values\n+ return None\n+\n return field_resolver\n \n return _resolver\n@@ -38,10 +43,12 @@\n fields = {}\n \n for key, annotation in annotations.items():\n- field_name = to_camel_case(key)\n class_field = getattr(cls, key, None)\n \n description = getattr(class_field, \"description\", None)\n+ name = getattr(class_field, \"name\", None)\n+\n+ field_name = name or to_camel_case(key)\n \n fields[field_name] = FieldClass(\n get_graphql_type_for_annotation(annotation, key),\n@@ -64,13 +71,16 @@\n def _get_fields():\n fields = _convert_annotations_fields(cls, is_input=is_input)\n \n- fields.update(\n- {\n- to_camel_case(key): value.field\n- for key, value in cls.__dict__.items()\n- if getattr(value, IS_STRAWBERRY_FIELD, False)\n- }\n- )\n+ strawberry_fields = {\n+ key: value\n+ for key, value in cls.__dict__.items()\n+ if getattr(value, IS_STRAWBERRY_FIELD, False)\n+ }\n+\n+ for key, value in strawberry_fields.items():\n+ name = getattr(value, \"name\", None) or to_camel_case(key)\n+\n+ fields[name] = value.field\n \n return fields\n", "issue": "Add support for renaming fields\nWe should be able to specify a custom name for a field when needed, 
like this:\r\n\r\n```python\r\[email protected]\r\nclass Query:\r\n example_field: str = strawberry.field(name=\"example\")\r\n```\n", "before_files": [{"content": "import typing\n\nimport dataclasses\nfrom graphql import GraphQLField\n\nfrom .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT\nfrom .exceptions import MissingArgumentsAnnotationsError, MissingReturnAnnotationError\nfrom .type_converter import REGISTRY, get_graphql_type_for_annotation\nfrom .utils.dict_to_type import dict_to_type\nfrom .utils.inspect import get_func_args\nfrom .utils.lazy_property import lazy_property\nfrom .utils.str_converters import to_camel_case, to_snake_case\nfrom .utils.typing import (\n get_list_annotation,\n get_optional_annotation,\n is_list,\n is_optional,\n)\n\n\nclass LazyFieldWrapper:\n \"\"\"A lazy wrapper for a strawberry field.\n This allows to use cyclic dependencies in a strawberry fields:\n\n >>> @strawberry.type\n >>> class TypeA:\n >>> @strawberry.field\n >>> def type_b(self, info) -> \"TypeB\":\n >>> from .type_b import TypeB\n >>> return TypeB()\n \"\"\"\n\n def __init__(self, obj, is_subscription, **kwargs):\n self._wrapped_obj = obj\n self.is_subscription = is_subscription\n self.kwargs = kwargs\n\n if callable(self._wrapped_obj):\n self._check_has_annotations()\n\n def _check_has_annotations(self):\n # using annotations without passing from typing.get_type_hints\n # as we don't the actually types for the annotations\n annotations = self._wrapped_obj.__annotations__\n name = self._wrapped_obj.__name__\n\n if \"return\" not in annotations:\n raise MissingReturnAnnotationError(name)\n\n function_arguments = set(get_func_args(self._wrapped_obj)) - {\"self\", \"info\"}\n\n arguments_annotations = {\n key: value\n for key, value in annotations.items()\n if key not in [\"info\", \"return\"]\n }\n\n annotated_function_arguments = set(arguments_annotations.keys())\n arguments_missing_annotations = (\n function_arguments - annotated_function_arguments\n )\n\n if len(arguments_missing_annotations) > 0:\n raise MissingArgumentsAnnotationsError(name, arguments_missing_annotations)\n\n def __getattr__(self, attr):\n if attr in self.__dict__:\n return getattr(self, attr)\n\n return getattr(self._wrapped_obj, attr)\n\n def __call__(self, *args, **kwargs):\n return self._wrapped_obj(self, *args, **kwargs)\n\n @lazy_property\n def field(self):\n return _get_field(\n self._wrapped_obj, is_subscription=self.is_subscription, **self.kwargs\n )\n\n\nclass strawberry_field:\n \"\"\"A small wrapper for a field in strawberry.\n\n You shouldn't be using this directly as this is used internally\n when using `strawberry.field`.\n\n This allows to use the following two syntaxes when using the type\n decorator:\n\n >>> class X:\n >>> field_abc: str = strawberry.field(description=\"ABC\")\n\n >>> class X:\n >>> @strawberry.field(description=\"ABC\")\n >>> def field_a(self, info) -> str:\n >>> return \"abc\"\n\n When calling this class as strawberry_field it creates a field\n that stores metadata (such as field description). 
In addition\n to that it also acts as decorator when called as a function,\n allowing us to us both syntaxes.\n \"\"\"\n\n def __init__(self, *, is_subscription=False, **kwargs):\n self.field = dataclasses.field()\n self.is_subscription = is_subscription\n self.description = kwargs.get(\"description\", None)\n self.kwargs = kwargs\n\n def __call__(self, wrap):\n setattr(wrap, IS_STRAWBERRY_FIELD, True)\n\n self.kwargs[\"description\"] = self.description or wrap.__doc__\n\n return LazyFieldWrapper(wrap, self.is_subscription, **self.kwargs)\n\n\ndef convert_args(args, annotations):\n \"\"\"Converts a nested dictionary to a dictionary of strawberry input types.\"\"\"\n\n converted_args = {}\n\n for key, value in args.items():\n key = to_snake_case(key)\n annotation = annotations[key]\n\n # we don't need to check about unions here since they are not\n # yet supported for arguments.\n # see https://github.com/graphql/graphql-spec/issues/488\n\n is_list_of_args = False\n\n if is_optional(annotation):\n annotation = get_optional_annotation(annotation)\n\n if is_list(annotation):\n annotation = get_list_annotation(annotation)\n is_list_of_args = True\n\n if getattr(annotation, IS_STRAWBERRY_INPUT, False):\n if is_list_of_args:\n converted_args[key] = [dict_to_type(x, annotation) for x in value]\n else:\n converted_args[key] = dict_to_type(value, annotation)\n else:\n converted_args[key] = value\n\n return converted_args\n\n\ndef _get_field(wrap, *, is_subscription=False, **kwargs):\n annotations = typing.get_type_hints(wrap, None, REGISTRY)\n\n name = wrap.__name__\n\n field_type = get_graphql_type_for_annotation(annotations[\"return\"], name)\n\n arguments_annotations = {\n key: value\n for key, value in annotations.items()\n if key not in [\"info\", \"return\"]\n }\n\n arguments = {\n to_camel_case(name): get_graphql_type_for_annotation(annotation, name)\n for name, annotation in arguments_annotations.items()\n }\n\n def resolver(source, info, **args):\n args = convert_args(args, arguments_annotations)\n\n return wrap(source, info, **args)\n\n if is_subscription:\n\n def _resolve(event, info):\n return event\n\n kwargs.update({\"subscribe\": resolver, \"resolve\": _resolve})\n else:\n kwargs.update({\"resolve\": resolver})\n\n kwargs[\"description\"] = kwargs.get(\"description\", wrap.__doc__)\n\n return GraphQLField(field_type, args=arguments, **kwargs)\n\n\ndef field(wrap=None, *, is_subscription=False, description=None):\n \"\"\"Annotates a method or property as a GraphQL field.\n\n This is normally used inside a type declaration:\n\n >>> @strawberry.type:\n >>> class X:\n >>> field_abc: str = strawberry.field(description=\"ABC\")\n\n >>> @strawberry.field(description=\"ABC\")\n >>> def field_with_resolver(self, info) -> str:\n >>> return \"abc\"\n\n it can be used both as decorator and as a normal function.\n \"\"\"\n\n field = strawberry_field(description=description, is_subscription=is_subscription)\n\n # when calling this with parens we are going to return a strawberry_field\n # instance, so it can be used as both decorator and function.\n\n if wrap is None:\n return field\n\n # otherwise we run the decorator directly,\n # when called as @strawberry.field, without parens.\n\n return field(wrap)\n", "path": "strawberry/field.py"}, {"content": "import typing\nfrom functools import partial\n\nfrom dataclasses import dataclass\nfrom graphql import (\n GraphQLField,\n GraphQLInputField,\n GraphQLInputObjectType,\n GraphQLInterfaceType,\n GraphQLObjectType,\n)\nfrom 
graphql.utilities.schema_printer import print_type\n\nfrom .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT, IS_STRAWBERRY_INTERFACE\nfrom .type_converter import REGISTRY, get_graphql_type_for_annotation\nfrom .utils.str_converters import to_camel_case\n\n\ndef _get_resolver(cls, field_name):\n def _resolver(obj, info):\n # TODO: can we make this nicer?\n # does it work in all the cases?\n\n field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)\n\n if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):\n return field_resolver(obj, info)\n\n return field_resolver\n\n return _resolver\n\n\ndef _convert_annotations_fields(cls, *, is_input=False):\n FieldClass = GraphQLInputField if is_input else GraphQLField\n annotations = typing.get_type_hints(cls, None, REGISTRY)\n\n fields = {}\n\n for key, annotation in annotations.items():\n field_name = to_camel_case(key)\n class_field = getattr(cls, key, None)\n\n description = getattr(class_field, \"description\", None)\n\n fields[field_name] = FieldClass(\n get_graphql_type_for_annotation(annotation, key),\n description=description,\n **({} if is_input else {\"resolve\": _get_resolver(cls, key)})\n )\n\n return fields\n\n\ndef _process_type(cls, *, is_input=False, is_interface=False, description=None):\n name = cls.__name__\n REGISTRY[name] = cls\n\n def repr_(self):\n return print_type(self.field)\n\n setattr(cls, \"__repr__\", repr_)\n\n def _get_fields():\n fields = _convert_annotations_fields(cls, is_input=is_input)\n\n fields.update(\n {\n to_camel_case(key): value.field\n for key, value in cls.__dict__.items()\n if getattr(value, IS_STRAWBERRY_FIELD, False)\n }\n )\n\n return fields\n\n if is_input:\n setattr(cls, IS_STRAWBERRY_INPUT, True)\n elif is_interface:\n setattr(cls, IS_STRAWBERRY_INTERFACE, True)\n\n extra_kwargs = {\"description\": description or cls.__doc__}\n\n if is_input:\n TypeClass = GraphQLInputObjectType\n elif is_interface:\n TypeClass = GraphQLInterfaceType\n else:\n TypeClass = GraphQLObjectType\n\n extra_kwargs[\"interfaces\"] = [\n klass.field\n for klass in cls.__bases__\n if hasattr(klass, IS_STRAWBERRY_INTERFACE)\n ]\n\n cls.field = TypeClass(name, lambda: _get_fields(), **extra_kwargs)\n\n return dataclass(cls, repr=False)\n\n\ndef type(cls=None, *, is_input=False, is_interface=False, description=None):\n \"\"\"Annotates a class as a GraphQL type.\n\n Example usage:\n\n >>> @strawberry.type:\n >>> class X:\n >>> field_abc: str = \"ABC\"\n \"\"\"\n\n def wrap(cls):\n return _process_type(\n cls, is_input=is_input, is_interface=is_interface, description=description\n )\n\n if cls is None:\n return wrap\n\n return wrap(cls)\n\n\ninput = partial(type, is_input=True)\ninterface = partial(type, is_interface=True)\n", "path": "strawberry/type.py"}], "after_files": [{"content": "import typing\n\nimport dataclasses\nfrom graphql import GraphQLField\n\nfrom .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT\nfrom .exceptions import MissingArgumentsAnnotationsError, MissingReturnAnnotationError\nfrom .type_converter import REGISTRY, get_graphql_type_for_annotation\nfrom .utils.dict_to_type import dict_to_type\nfrom .utils.inspect import get_func_args\nfrom .utils.lazy_property import lazy_property\nfrom .utils.str_converters import to_camel_case, to_snake_case\nfrom .utils.typing import (\n get_list_annotation,\n get_optional_annotation,\n is_list,\n is_optional,\n)\n\n\nclass LazyFieldWrapper:\n \"\"\"A lazy wrapper for a strawberry field.\n This allows to use cyclic dependencies in a 
strawberry fields:\n\n >>> @strawberry.type\n >>> class TypeA:\n >>> @strawberry.field\n >>> def type_b(self, info) -> \"TypeB\":\n >>> from .type_b import TypeB\n >>> return TypeB()\n \"\"\"\n\n def __init__(self, obj, is_subscription, name=None, **kwargs):\n self._wrapped_obj = obj\n self.is_subscription = is_subscription\n self.name = name\n self.kwargs = kwargs\n\n if callable(self._wrapped_obj):\n self._check_has_annotations()\n\n def _check_has_annotations(self):\n # using annotations without passing from typing.get_type_hints\n # as we don't the actually types for the annotations\n annotations = self._wrapped_obj.__annotations__\n name = self._wrapped_obj.__name__\n\n if \"return\" not in annotations:\n raise MissingReturnAnnotationError(name)\n\n function_arguments = set(get_func_args(self._wrapped_obj)) - {\"self\", \"info\"}\n\n arguments_annotations = {\n key: value\n for key, value in annotations.items()\n if key not in [\"info\", \"return\"]\n }\n\n annotated_function_arguments = set(arguments_annotations.keys())\n arguments_missing_annotations = (\n function_arguments - annotated_function_arguments\n )\n\n if len(arguments_missing_annotations) > 0:\n raise MissingArgumentsAnnotationsError(name, arguments_missing_annotations)\n\n def __getattr__(self, attr):\n if attr in self.__dict__:\n return getattr(self, attr)\n\n return getattr(self._wrapped_obj, attr)\n\n def __call__(self, *args, **kwargs):\n return self._wrapped_obj(self, *args, **kwargs)\n\n @lazy_property\n def field(self):\n return _get_field(\n self._wrapped_obj, is_subscription=self.is_subscription, **self.kwargs\n )\n\n\nclass strawberry_field:\n \"\"\"A small wrapper for a field in strawberry.\n\n You shouldn't be using this directly as this is used internally\n when using `strawberry.field`.\n\n This allows to use the following two syntaxes when using the type\n decorator:\n\n >>> class X:\n >>> field_abc: str = strawberry.field(description=\"ABC\")\n\n >>> class X:\n >>> @strawberry.field(description=\"ABC\")\n >>> def field_a(self, info) -> str:\n >>> return \"abc\"\n\n When calling this class as strawberry_field it creates a field\n that stores metadata (such as field description). 
In addition\n to that it also acts as decorator when called as a function,\n allowing us to us both syntaxes.\n \"\"\"\n\n def __init__(self, *, is_subscription=False, **kwargs):\n self.field = dataclasses.field()\n self.is_subscription = is_subscription\n self.description = kwargs.get(\"description\", None)\n self.name = kwargs.pop(\"name\", None)\n self.kwargs = kwargs\n\n def __call__(self, wrap):\n setattr(wrap, IS_STRAWBERRY_FIELD, True)\n\n self.kwargs[\"description\"] = self.description or wrap.__doc__\n\n return LazyFieldWrapper(wrap, self.is_subscription, self.name, **self.kwargs)\n\n\ndef convert_args(args, annotations):\n \"\"\"Converts a nested dictionary to a dictionary of strawberry input types.\"\"\"\n\n converted_args = {}\n\n for key, value in args.items():\n key = to_snake_case(key)\n annotation = annotations[key]\n\n # we don't need to check about unions here since they are not\n # yet supported for arguments.\n # see https://github.com/graphql/graphql-spec/issues/488\n\n is_list_of_args = False\n\n if is_optional(annotation):\n annotation = get_optional_annotation(annotation)\n\n if is_list(annotation):\n annotation = get_list_annotation(annotation)\n is_list_of_args = True\n\n if getattr(annotation, IS_STRAWBERRY_INPUT, False):\n if is_list_of_args:\n converted_args[key] = [dict_to_type(x, annotation) for x in value]\n else:\n converted_args[key] = dict_to_type(value, annotation)\n else:\n converted_args[key] = value\n\n return converted_args\n\n\ndef _get_field(wrap, *, is_subscription=False, **kwargs):\n annotations = typing.get_type_hints(wrap, None, REGISTRY)\n\n name = wrap.__name__\n\n field_type = get_graphql_type_for_annotation(annotations[\"return\"], name)\n\n arguments_annotations = {\n key: value\n for key, value in annotations.items()\n if key not in [\"info\", \"return\"]\n }\n\n arguments = {\n to_camel_case(name): get_graphql_type_for_annotation(annotation, name)\n for name, annotation in arguments_annotations.items()\n }\n\n def resolver(source, info, **args):\n args = convert_args(args, arguments_annotations)\n\n return wrap(source, info, **args)\n\n if is_subscription:\n\n def _resolve(event, info):\n return event\n\n kwargs.update({\"subscribe\": resolver, \"resolve\": _resolve})\n else:\n kwargs.update({\"resolve\": resolver})\n\n kwargs[\"description\"] = kwargs.get(\"description\", wrap.__doc__)\n\n return GraphQLField(field_type, args=arguments, **kwargs)\n\n\ndef field(wrap=None, *, is_subscription=False, name=None, description=None):\n \"\"\"Annotates a method or property as a GraphQL field.\n\n This is normally used inside a type declaration:\n\n >>> @strawberry.type:\n >>> class X:\n >>> field_abc: str = strawberry.field(description=\"ABC\")\n\n >>> @strawberry.field(description=\"ABC\")\n >>> def field_with_resolver(self, info) -> str:\n >>> return \"abc\"\n\n it can be used both as decorator and as a normal function.\n \"\"\"\n\n field = strawberry_field(\n name=name, description=description, is_subscription=is_subscription\n )\n\n # when calling this with parens we are going to return a strawberry_field\n # instance, so it can be used as both decorator and function.\n\n if wrap is None:\n return field\n\n # otherwise we run the decorator directly,\n # when called as @strawberry.field, without parens.\n\n return field(wrap)\n", "path": "strawberry/field.py"}, {"content": "import typing\nfrom functools import partial\n\nfrom dataclasses import dataclass\nfrom graphql import (\n GraphQLField,\n GraphQLInputField,\n GraphQLInputObjectType,\n 
GraphQLInterfaceType,\n GraphQLObjectType,\n)\nfrom graphql.utilities.schema_printer import print_type\n\nfrom .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT, IS_STRAWBERRY_INTERFACE\nfrom .field import strawberry_field\nfrom .type_converter import REGISTRY, get_graphql_type_for_annotation\nfrom .utils.str_converters import to_camel_case\n\n\ndef _get_resolver(cls, field_name):\n def _resolver(obj, info):\n # TODO: can we make this nicer?\n # does it work in all the cases?\n\n field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)\n\n if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):\n return field_resolver(obj, info)\n\n elif field_resolver.__class__ is strawberry_field:\n # TODO: support default values\n return None\n\n return field_resolver\n\n return _resolver\n\n\ndef _convert_annotations_fields(cls, *, is_input=False):\n FieldClass = GraphQLInputField if is_input else GraphQLField\n annotations = typing.get_type_hints(cls, None, REGISTRY)\n\n fields = {}\n\n for key, annotation in annotations.items():\n class_field = getattr(cls, key, None)\n\n description = getattr(class_field, \"description\", None)\n name = getattr(class_field, \"name\", None)\n\n field_name = name or to_camel_case(key)\n\n fields[field_name] = FieldClass(\n get_graphql_type_for_annotation(annotation, key),\n description=description,\n **({} if is_input else {\"resolve\": _get_resolver(cls, key)})\n )\n\n return fields\n\n\ndef _process_type(cls, *, is_input=False, is_interface=False, description=None):\n name = cls.__name__\n REGISTRY[name] = cls\n\n def repr_(self):\n return print_type(self.field)\n\n setattr(cls, \"__repr__\", repr_)\n\n def _get_fields():\n fields = _convert_annotations_fields(cls, is_input=is_input)\n\n strawberry_fields = {\n key: value\n for key, value in cls.__dict__.items()\n if getattr(value, IS_STRAWBERRY_FIELD, False)\n }\n\n for key, value in strawberry_fields.items():\n name = getattr(value, \"name\", None) or to_camel_case(key)\n\n fields[name] = value.field\n\n return fields\n\n if is_input:\n setattr(cls, IS_STRAWBERRY_INPUT, True)\n elif is_interface:\n setattr(cls, IS_STRAWBERRY_INTERFACE, True)\n\n extra_kwargs = {\"description\": description or cls.__doc__}\n\n if is_input:\n TypeClass = GraphQLInputObjectType\n elif is_interface:\n TypeClass = GraphQLInterfaceType\n else:\n TypeClass = GraphQLObjectType\n\n extra_kwargs[\"interfaces\"] = [\n klass.field\n for klass in cls.__bases__\n if hasattr(klass, IS_STRAWBERRY_INTERFACE)\n ]\n\n cls.field = TypeClass(name, lambda: _get_fields(), **extra_kwargs)\n\n return dataclass(cls, repr=False)\n\n\ndef type(cls=None, *, is_input=False, is_interface=False, description=None):\n \"\"\"Annotates a class as a GraphQL type.\n\n Example usage:\n\n >>> @strawberry.type:\n >>> class X:\n >>> field_abc: str = \"ABC\"\n \"\"\"\n\n def wrap(cls):\n return _process_type(\n cls, is_input=is_input, is_interface=is_interface, description=description\n )\n\n if cls is None:\n return wrap\n\n return wrap(cls)\n\n\ninput = partial(type, is_input=True)\ninterface = partial(type, is_interface=True)\n", "path": "strawberry/type.py"}]}
| 3,391 | 945 |
gh_patches_debug_20703
|
rasdani/github-patches
|
git_diff
|
OCHA-DAP__hdx-ckan-1817
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Search: search doesn't appear to use the organization
Example: the MapAction org has two public datasets, but searching "mapaction" or MapAction returns 0 results.
Other org searches will return results, but this is probably because the name of the org is mentioned in other metadata.
To do:
1. confirm that search queries from the homepage or main search bar are not using organizations
2. if that is the source of the problem, add org to the search queries
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_search/ckanext/hdx_search/plugin.py`
Content:
```
1 import logging
2 import ckan.plugins as plugins
3 import ckan.plugins.toolkit as tk
4 import ckan.lib.plugins as lib_plugins
5
6
7 class HDXSearchPlugin(plugins.SingletonPlugin):
8 plugins.implements(plugins.IConfigurer, inherit=False)
9 plugins.implements(plugins.IRoutes, inherit=True)
10 plugins.implements(plugins.ITemplateHelpers, inherit=False)
11 plugins.implements(plugins.IPackageController, inherit=True)
12
13 def update_config(self, config):
14 tk.add_template_directory(config, 'templates')
15
16 def get_helpers(self):
17 return {}
18
19 def before_map(self, map):
20 map.connect('search', '/search',
21 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
22 map.connect('simple_search',
23 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
24 return map
25
26 def after_map(self, map):
27 map.connect('search', '/search',
28 controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')
29 map.connect('simple_search',
30 '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')
31 return map
32
33 def before_search(self, search_params):
34 if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
35 search_params['facet.field'].append('vocab_Topics')
36
37 # If indicator flag is set, search only that type
38 if 'ext_indicator' in search_params['extras']:
39 if int(search_params['extras']['ext_indicator']) == 1:
40 search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'
41 elif int(search_params['extras']['ext_indicator']) == 0:
42 search_params['fq'] = search_params[
43 'fq'] + ' -extras_indicator:1'
44 return search_params
45
46 def after_search(self, search_results, search_params):
47 return search_results
48
49 def before_view(self, pkg_dict):
50 return pkg_dict
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py
+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py
@@ -1,8 +1,13 @@
-import logging
+import logging, re
import ckan.plugins as plugins
import ckan.plugins.toolkit as tk
import ckan.lib.plugins as lib_plugins
+def convert_country(q):
+ for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):
+ if re.findall(c['display_name'].lower(),q.lower()):
+ q += ' '+c['name']
+ return q
class HDXSearchPlugin(plugins.SingletonPlugin):
plugins.implements(plugins.IConfigurer, inherit=False)
@@ -31,6 +36,7 @@
return map
def before_search(self, search_params):
+ search_params['q'] = convert_country(search_params['q'])
if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:
search_params['facet.field'].append('vocab_Topics')
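A standalone sketch of the query-expansion idea in the patch above, runnable outside CKAN. The hardcoded org list is an assumption; the real plugin pulls it from CKAN's `group_list` action through `tk.get_action`.

```python
import re

# Hypothetical org metadata; the real plugin fetches this via tk.get_action('group_list').
ORGS = [{"display_name": "MapAction", "name": "mapaction"}]

def convert_country(q):
    # Append the org's internal name whenever its display name appears in the
    # query, so the search also hits datasets owned by that organization.
    for org in ORGS:
        if re.findall(org["display_name"].lower(), q.lower()):
            q += " " + org["name"]
    return q

print(convert_country("MapAction"))  # -> "MapAction mapaction"
```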
|
{"golden_diff": "diff --git a/ckanext-hdx_search/ckanext/hdx_search/plugin.py b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n--- a/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n+++ b/ckanext-hdx_search/ckanext/hdx_search/plugin.py\n@@ -1,8 +1,13 @@\n-import logging\n+import logging, re\n import ckan.plugins as plugins\n import ckan.plugins.toolkit as tk\n import ckan.lib.plugins as lib_plugins\n \n+def convert_country(q):\n+ for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):\n+ if re.findall(c['display_name'].lower(),q.lower()):\n+ q += ' '+c['name']\n+ return q\n \n class HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n@@ -31,6 +36,7 @@\n return map\n \n def before_search(self, search_params):\n+ search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n", "issue": "Search: search doesn't appear to use the organization\nExample: the MapAction org has two public datasets, but searching \"mapaction\" or MapAction returns 0 results. \n\nOther org searches will return results, but this is probably because the name of the org is mentioned in other metadata. \n\nTo do: \n1. confirm that search queries from the homepage or main search bar are not using organizations\n2. if that is the source of the problem, add org to the search queries\n\n", "before_files": [{"content": "import logging\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def after_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n\n # If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['fq'] = search_params[\n 'fq'] + ' -extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}], "after_files": [{"content": 
"import logging, re\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nimport ckan.lib.plugins as lib_plugins\n\ndef convert_country(q):\n for c in tk.get_action('group_list')({'user':'127.0.0.1'},{'all_fields': True}):\n if re.findall(c['display_name'].lower(),q.lower()):\n q += ' '+c['name']\n return q\n\nclass HDXSearchPlugin(plugins.SingletonPlugin):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.ITemplateHelpers, inherit=False)\n plugins.implements(plugins.IPackageController, inherit=True)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def get_helpers(self):\n return {}\n\n def before_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def after_map(self, map):\n map.connect('search', '/search',\n controller='ckanext.hdx_search.controllers.search_controller:HDXSearchController', action='search')\n map.connect('simple_search',\n '/dataset', controller='ckanext.hdx_search.controllers.simple_search_controller:HDXSimpleSearchController', action='package_search')\n return map\n\n def before_search(self, search_params):\n search_params['q'] = convert_country(search_params['q'])\n if 'facet.field' in search_params and 'vocab_Topics' not in search_params['facet.field']:\n search_params['facet.field'].append('vocab_Topics')\n\n # If indicator flag is set, search only that type\n if 'ext_indicator' in search_params['extras']:\n if int(search_params['extras']['ext_indicator']) == 1:\n search_params['fq'] = search_params['fq'] + ' +extras_indicator:1'\n elif int(search_params['extras']['ext_indicator']) == 0:\n search_params['fq'] = search_params[\n 'fq'] + ' -extras_indicator:1'\n return search_params\n\n def after_search(self, search_results, search_params):\n return search_results\n\n def before_view(self, pkg_dict):\n return pkg_dict\n", "path": "ckanext-hdx_search/ckanext/hdx_search/plugin.py"}]}
| 918 | 287 |
gh_patches_debug_11786
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-2877
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BigQuery] Allow more recent versions of google-api-core?
### Describe the feature
Currently dbt-bigquery has [an upper limit of 1.16 on `google-api-core`](https://github.com/fishtown-analytics/dbt/blob/v0.18.1b3/plugins/bigquery/setup.py#L53). That release is from Jan of this year.
Would it be possible to loosen that?
While it's clearly not dbt's responsibility for us to be able to install arbitrary packages, here's an example where we can't install `google-cloud-bigquery-datatransfer` because of this restriction:
```
[SolverProblemError]
Because no versions of google-cloud-bigquery-datatransfer match >2.0.0,<3.0.0
and google-cloud-bigquery-datatransfer (2.0.0) depends on google-api-core (>=1.22.2,<2.0.0dev), google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) requires google-api-core (>=1.22.2,<2.0.0dev).
And because dbt-bigquery (0.18.0) depends on google-api-core (>=1.16.0,<1.17.0), google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) is incompatible with dbt-bigquery (0.18.0).
And because dbt (0.18.0) depends on dbt-bigquery (0.18.0)
and no versions of dbt match >0.18.0,<0.19.0, google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) is incompatible with dbt (>=0.18.0,<0.19.0).
So, because {repo} depends on both dbt (^0.18.0) and google-cloud-bigquery-datatransfer (^2.0.0), version solving failed.
```
Thanks as ever for the awesome product!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/bigquery/setup.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 6):
6 print('Error: dbt does not support this version of Python.')
7 print('Please upgrade to Python 3.6 or higher.')
8 sys.exit(1)
9
10
11 from setuptools import setup
12 try:
13 from setuptools import find_namespace_packages
14 except ImportError:
15 # the user has a downlevel version of setuptools.
16 print('Error: dbt requires setuptools v40.1.0 or higher.')
17 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
18 'and try again')
19 sys.exit(1)
20
21
22 package_name = "dbt-bigquery"
23 package_version = "0.19.0b1"
24 description = """The bigquery adapter plugin for dbt (data build tool)"""
25
26 this_directory = os.path.abspath(os.path.dirname(__file__))
27 with open(os.path.join(this_directory, 'README.md')) as f:
28 long_description = f.read()
29
30 setup(
31 name=package_name,
32 version=package_version,
33 description=description,
34 long_description=long_description,
35 long_description_content_type='text/markdown',
36 author="Fishtown Analytics",
37 author_email="[email protected]",
38 url="https://github.com/fishtown-analytics/dbt",
39 packages=find_namespace_packages(include=['dbt', 'dbt.*']),
40 package_data={
41 'dbt': [
42 'include/bigquery/dbt_project.yml',
43 'include/bigquery/sample_profiles.yml',
44 'include/bigquery/macros/*.sql',
45 'include/bigquery/macros/**/*.sql',
46 ]
47 },
48 install_requires=[
49 'dbt-core=={}'.format(package_version),
50 'protobuf>=3.6.0,<3.12',
51 'google-cloud-core>=1.3.0,<1.4',
52 'google-cloud-bigquery>=1.25.0,<1.26.0',
53 'google-api-core>=1.16.0,<1.17.0',
54 'googleapis-common-protos>=1.6.0,<1.7.0',
55 'six>=1.14.0',
56 ],
57 zip_safe=False,
58 classifiers=[
59 'Development Status :: 5 - Production/Stable',
60
61 'License :: OSI Approved :: Apache Software License',
62
63 'Operating System :: Microsoft :: Windows',
64 'Operating System :: MacOS :: MacOS X',
65 'Operating System :: POSIX :: Linux',
66
67 'Programming Language :: Python :: 3.6',
68 'Programming Language :: Python :: 3.7',
69 'Programming Language :: Python :: 3.8',
70 ],
71 python_requires=">=3.6.2",
72 )
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/bigquery/setup.py b/plugins/bigquery/setup.py
--- a/plugins/bigquery/setup.py
+++ b/plugins/bigquery/setup.py
@@ -47,11 +47,13 @@
},
install_requires=[
'dbt-core=={}'.format(package_version),
- 'protobuf>=3.6.0,<3.12',
- 'google-cloud-core>=1.3.0,<1.4',
- 'google-cloud-bigquery>=1.25.0,<1.26.0',
- 'google-api-core>=1.16.0,<1.17.0',
- 'googleapis-common-protos>=1.6.0,<1.7.0',
+ 'protobuf>=3.13.0,<4',
+ # These are more tightly pinned, as they have a track record of
+ # breaking changes in minor releases.
+ 'google-cloud-core>=1.3.0,<1.5',
+ 'google-cloud-bigquery>=1.25.0,<2.4',
+ 'google-api-core>=1.16.0,<1.24',
+ 'googleapis-common-protos>=1.6.0,<1.53',
'six>=1.14.0',
],
zip_safe=False,
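As a quick sanity check on the loosened pins above, the `packaging` library can confirm that the two `google-api-core` ranges now overlap. The candidate versions below are illustrative only; the second specifier restates the `>=1.22.2,<2.0.0dev` pin quoted in the issue.

```python
from packaging.specifiers import SpecifierSet

# Loosened dbt-bigquery pin from the patch vs. the pin declared by
# google-cloud-bigquery-datatransfer 2.0.0 in the issue's solver output.
dbt_pin = SpecifierSet(">=1.16.0,<1.24")
datatransfer_pin = SpecifierSet(">=1.22.2,<2.0.0.dev0")

candidates = ["1.16.0", "1.22.2", "1.23.0", "1.24.0"]
print([v for v in candidates if v in dbt_pin and v in datatransfer_pin])
# ['1.22.2', '1.23.0'] -> at least one shared release exists, so dependency resolution can succeed.
```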
|
{"golden_diff": "diff --git a/plugins/bigquery/setup.py b/plugins/bigquery/setup.py\n--- a/plugins/bigquery/setup.py\n+++ b/plugins/bigquery/setup.py\n@@ -47,11 +47,13 @@\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n- 'protobuf>=3.6.0,<3.12',\n- 'google-cloud-core>=1.3.0,<1.4',\n- 'google-cloud-bigquery>=1.25.0,<1.26.0',\n- 'google-api-core>=1.16.0,<1.17.0',\n- 'googleapis-common-protos>=1.6.0,<1.7.0',\n+ 'protobuf>=3.13.0,<4',\n+ # These are more tightly pinned, as they have a track record of\n+ # breaking changes in minor releases.\n+ 'google-cloud-core>=1.3.0,<1.5',\n+ 'google-cloud-bigquery>=1.25.0,<2.4',\n+ 'google-api-core>=1.16.0,<1.24',\n+ 'googleapis-common-protos>=1.6.0,<1.53',\n 'six>=1.14.0',\n ],\n zip_safe=False,\n", "issue": "[BigQuery] Allow more recent versions of google-api-core?\n### Describe the feature\r\n\r\nCurrently dbt-bigquery has [an upper limit of 1.16 on `google-api-core`](https://github.com/fishtown-analytics/dbt/blob/v0.18.1b3/plugins/bigquery/setup.py#L53). That release is from Jan of this year.\r\n\r\nWould it be possible to loosen that?\r\n\r\nWhile it's clearly not dbt's responsibility for us to be able to install arbitrary packages, here's an example where we can't instally `google-cloud-bigquery-datatransfer` because of this restriction:\r\n\r\n```\r\n[SolverProblemError]\r\nBecause no versions of google-cloud-bigquery-datatransfer match >2.0.0,<3.0.0\r\n and google-cloud-bigquery-datatransfer (2.0.0) depends on google-api-core (>=1.22.2,<2.0.0dev), google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) requires google-api-core (>=1.22.2,<2.0.0dev).\r\nAnd because dbt-bigquery (0.18.0) depends on google-api-core (>=1.16.0,<1.17.0), google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) is incompatible with dbt-bigquery (0.18.0).\r\nAnd because dbt (0.18.0) depends on dbt-bigquery (0.18.0)\r\n and no versions of dbt match >0.18.0,<0.19.0, google-cloud-bigquery-datatransfer (>=2.0.0,<3.0.0) is incompatible with dbt (>=0.18.0,<0.19.0).\r\nSo, because {repo} depends on both dbt (^0.18.0) and google-cloud-bigquery-datatransfer (^2.0.0), version solving failed.\r\n```\r\n\r\nThanks as ever for the awesome product!\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\npackage_name = \"dbt-bigquery\"\npackage_version = \"0.19.0b1\"\ndescription = \"\"\"The bigquery adapter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/bigquery/dbt_project.yml',\n 'include/bigquery/sample_profiles.yml',\n 
'include/bigquery/macros/*.sql',\n 'include/bigquery/macros/**/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n 'protobuf>=3.6.0,<3.12',\n 'google-cloud-core>=1.3.0,<1.4',\n 'google-cloud-bigquery>=1.25.0,<1.26.0',\n 'google-api-core>=1.16.0,<1.17.0',\n 'googleapis-common-protos>=1.6.0,<1.7.0',\n 'six>=1.14.0',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "plugins/bigquery/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\npackage_name = \"dbt-bigquery\"\npackage_version = \"0.19.0b1\"\ndescription = \"\"\"The bigquery adapter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/bigquery/dbt_project.yml',\n 'include/bigquery/sample_profiles.yml',\n 'include/bigquery/macros/*.sql',\n 'include/bigquery/macros/**/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n 'protobuf>=3.13.0,<4',\n # These are more tightly pinned, as they have a track record of\n # breaking changes in minor releases.\n 'google-cloud-core>=1.3.0,<1.5',\n 'google-cloud-bigquery>=1.25.0,<2.4',\n 'google-api-core>=1.16.0,<1.24',\n 'googleapis-common-protos>=1.6.0,<1.53',\n 'six>=1.14.0',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "plugins/bigquery/setup.py"}]}
| 1,442 | 295 |
gh_patches_debug_20310
|
rasdani/github-patches
|
git_diff
|
scverse__scanpy-2928
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Linkage 'Z' contains negative distances.
### Please make sure these conditions are met
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of scanpy.
- [ ] (optional) I have confirmed this bug exists on the master branch of scanpy.
### What happened?
I'm encountering an error when running the sc.pl.rank_genes_groups_heatmap function in the scanpy package. The error message is "Linkage 'Z' contains negative distances." What could be causing this error and how can I fix it?
### Minimal code sample
```python
sc.pl.rank_genes_groups_heatmap(adata, n_genes=10, groupby='clusters',show_gene_labels=True,save='cluster.markers.heatmap.svg')
```
### Error output
```pytb
sc.pl.rank_genes_groups_heatmap(adata, n_genes=10, groupby=cluster,show_gene_labels=True,save=(id+'_processed.top10.cluster.markers.heatmap.svg'))
File "/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_tools/__init__.py", line 673, in rank_genes_groups_heatmap
return _rank_genes_groups_plot(
File "/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_tools/__init__.py", line 592, in _rank_genes_groups_plot
return heatmap(
File "/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_anndata.py", line 1087, in heatmap
dendro_data = _reorder_categories_after_dendrogram(
File "/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_anndata.py", line 2134, in _reorder_categories_after_dendrogram
key = _get_dendrogram_key(adata, dendrogram, groupby)
File "/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_anndata.py", line 2236, in _get_dendrogram_key
dendrogram(adata, groupby, key_added=dendrogram_key)
File "/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/tools/_dendrogram.py", line 143, in dendrogram
dendro_info = sch.dendrogram(z_var, labels=list(categories), no_plot=True)
File "/opt/conda/envs/st/lib/python3.8/site-packages/scipy/cluster/hierarchy.py", line 3301, in dendrogram
is_valid_linkage(Z, throw=True, name='Z')
File "/opt/conda/envs/st/lib/python3.8/site-packages/scipy/cluster/hierarchy.py", line 2280, in is_valid_linkage
raise ValueError('Linkage %scontains negative distances.' %
ValueError: Linkage 'Z' contains negative distances.
```
### Versions
<details>
```
-----
anndata 0.8.0
scanpy 1.9.3
-----
PIL 9.4.0
asciitree NA
beta_ufunc NA
binom_ufunc NA
cairocffi 1.6.1
cffi 1.15.1
cloudpickle 2.2.1
colorama 0.4.6
cycler 0.10.0
cython_runtime NA
cytoolz 0.12.0
dask 2022.11.1
dateutil 2.8.2
defusedxml 0.7.1
entrypoints 0.4
fasteners 0.17.3
fsspec 2023.6.0
google NA
h5py 3.7.0
igraph 0.9.11
jinja2 3.0.3
joblib 1.2.0
kiwisolver 1.4.4
leidenalg 0.8.10
llvmlite 0.39.1
louvain 0.7.1
lz4 4.3.2
markupsafe 2.1.3
matplotlib 3.5.2
mpl_toolkits NA
msgpack 1.0.5
natsort 8.2.0
nbinom_ufunc NA
numba 0.56.4
numcodecs 0.11.0
numexpr 2.8.4
numpy 1.21.6
packaging 23.1
pandas 1.5.3
pkg_resources NA
psutil 5.9.5
pyarrow 8.0.0
pycparser 2.21
pyparsing 3.1.0
pytz 2023.3
scipy 1.7.3
session_info 1.0.0
setuptools 68.0.0
setuptools_scm NA
six 1.16.0
sklearn 1.0.1
snappy NA
sphinxcontrib NA
tblib 1.7.0
texttable 1.6.7
threadpoolctl 3.2.0
tlz 0.12.0
toolz 0.12.0
typing_extensions NA
wcwidth 0.2.6
yaml 6.0
zarr 2.15.0
zipp NA
-----
Python 3.8.15 | packaged by conda-forge | (default, Nov 22 2022, 08:46:39) [GCC 10.4.0]
Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.10
-----
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scanpy/tools/_dendrogram.py`
Content:
```
1 """
2 Computes a dendrogram based on a given categorical observation.
3 """
4
5 from __future__ import annotations
6
7 from typing import TYPE_CHECKING, Any
8
9 import pandas as pd
10 from pandas.api.types import CategoricalDtype
11
12 from .. import logging as logg
13 from .._compat import old_positionals
14 from .._utils import _doc_params
15 from ..neighbors._doc import doc_n_pcs, doc_use_rep
16 from ._utils import _choose_representation
17
18 if TYPE_CHECKING:
19 from collections.abc import Sequence
20
21 from anndata import AnnData
22
23
24 @old_positionals(
25 "n_pcs",
26 "use_rep",
27 "var_names",
28 "use_raw",
29 "cor_method",
30 "linkage_method",
31 "optimal_ordering",
32 "key_added",
33 "inplace",
34 )
35 @_doc_params(n_pcs=doc_n_pcs, use_rep=doc_use_rep)
36 def dendrogram(
37 adata: AnnData,
38 groupby: str | Sequence[str],
39 *,
40 n_pcs: int | None = None,
41 use_rep: str | None = None,
42 var_names: Sequence[str] | None = None,
43 use_raw: bool | None = None,
44 cor_method: str = "pearson",
45 linkage_method: str = "complete",
46 optimal_ordering: bool = False,
47 key_added: str | None = None,
48 inplace: bool = True,
49 ) -> dict[str, Any] | None:
50 """\
51 Computes a hierarchical clustering for the given `groupby` categories.
52
53 By default, the PCA representation is used unless `.X`
54 has less than 50 variables.
55
56 Alternatively, a list of `var_names` (e.g. genes) can be given.
57
58 Average values of either `var_names` or components are used
59 to compute a correlation matrix.
60
61 The hierarchical clustering can be visualized using
62 :func:`scanpy.pl.dendrogram` or multiple other visualizations that can
63 include a dendrogram: :func:`~scanpy.pl.matrixplot`,
64 :func:`~scanpy.pl.heatmap`, :func:`~scanpy.pl.dotplot`,
65 and :func:`~scanpy.pl.stacked_violin`.
66
67 .. note::
68 The computation of the hierarchical clustering is based on predefined
69 groups and not per cell. The correlation matrix is computed using by
70 default pearson but other methods are available.
71
72 Parameters
73 ----------
74 adata
75 Annotated data matrix
76 {n_pcs}
77 {use_rep}
78 var_names
79 List of var_names to use for computing the hierarchical clustering.
80 If `var_names` is given, then `use_rep` and `n_pcs` is ignored.
81 use_raw
82 Only when `var_names` is not None.
83 Use `raw` attribute of `adata` if present.
84 cor_method
85 correlation method to use.
86 Options are 'pearson', 'kendall', and 'spearman'
87 linkage_method
88 linkage method to use. See :func:`scipy.cluster.hierarchy.linkage`
89 for more information.
90 optimal_ordering
91 Same as the optimal_ordering argument of :func:`scipy.cluster.hierarchy.linkage`
92 which reorders the linkage matrix so that the distance between successive
93 leaves is minimal.
94 key_added
95 By default, the dendrogram information is added to
96 `.uns[f'dendrogram_{{groupby}}']`.
97 Notice that the `groupby` information is added to the dendrogram.
98 inplace
99 If `True`, adds dendrogram information to `adata.uns[key_added]`,
100 else this function returns the information.
101
102 Returns
103 -------
104 Returns `None` if `inplace=True`, else returns a `dict` with dendrogram information. Sets the following field if `inplace=True`:
105
106 `adata.uns[f'dendrogram_{{group_by}}' | key_added]` : :class:`dict`
107 Dendrogram information.
108
109 Examples
110 --------
111 >>> import scanpy as sc
112 >>> adata = sc.datasets.pbmc68k_reduced()
113 >>> sc.tl.dendrogram(adata, groupby='bulk_labels')
114 >>> sc.pl.dendrogram(adata, groupby='bulk_labels') # doctest: +SKIP
115 <Axes: >
116 >>> markers = ['C1QA', 'PSAP', 'CD79A', 'CD79B', 'CST3', 'LYZ']
117 >>> sc.pl.dotplot(adata, markers, groupby='bulk_labels', dendrogram=True)
118 """
119 if isinstance(groupby, str):
120 # if not a list, turn into a list
121 groupby = [groupby]
122 for group in groupby:
123 if group not in adata.obs_keys():
124 raise ValueError(
125 "groupby has to be a valid observation. "
126 f"Given value: {group}, valid observations: {adata.obs_keys()}"
127 )
128 if not isinstance(adata.obs[group].dtype, CategoricalDtype):
129 raise ValueError(
130 "groupby has to be a categorical observation. "
131 f"Given value: {group}, Column type: {adata.obs[group].dtype}"
132 )
133
134 if var_names is None:
135 rep_df = pd.DataFrame(
136 _choose_representation(adata, use_rep=use_rep, n_pcs=n_pcs)
137 )
138 categorical = adata.obs[groupby[0]]
139 if len(groupby) > 1:
140 for group in groupby[1:]:
141 # create new category by merging the given groupby categories
142 categorical = (
143 categorical.astype(str) + "_" + adata.obs[group].astype(str)
144 ).astype("category")
145 categorical.name = "_".join(groupby)
146
147 rep_df.set_index(categorical, inplace=True)
148 categories = rep_df.index.categories
149 else:
150 gene_names = adata.raw.var_names if use_raw else adata.var_names
151 from ..plotting._anndata import _prepare_dataframe
152
153 categories, rep_df = _prepare_dataframe(
154 adata, gene_names, groupby, use_raw=use_raw
155 )
156
157 # aggregate values within categories using 'mean'
158 mean_df = (
159 rep_df.groupby(level=0, observed=True)
160 .mean()
161 .loc[categories] # Fixed ordering for pandas < 2
162 )
163
164 import scipy.cluster.hierarchy as sch
165 from scipy.spatial import distance
166
167 corr_matrix = mean_df.T.corr(method=cor_method)
168 corr_condensed = distance.squareform(1 - corr_matrix)
169 z_var = sch.linkage(
170 corr_condensed, method=linkage_method, optimal_ordering=optimal_ordering
171 )
172 dendro_info = sch.dendrogram(z_var, labels=list(categories), no_plot=True)
173
174 dat = dict(
175 linkage=z_var,
176 groupby=groupby,
177 use_rep=use_rep,
178 cor_method=cor_method,
179 linkage_method=linkage_method,
180 categories_ordered=dendro_info["ivl"],
181 categories_idx_ordered=dendro_info["leaves"],
182 dendrogram_info=dendro_info,
183 correlation_matrix=corr_matrix.values,
184 )
185
186 if inplace:
187 if key_added is None:
188 key_added = f'dendrogram_{"_".join(groupby)}'
189 logg.info(f"Storing dendrogram info using `.uns[{key_added!r}]`")
190 adata.uns[key_added] = dat
191 else:
192 return dat
193
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scanpy/tools/_dendrogram.py b/scanpy/tools/_dendrogram.py
--- a/scanpy/tools/_dendrogram.py
+++ b/scanpy/tools/_dendrogram.py
@@ -145,7 +145,7 @@
categorical.name = "_".join(groupby)
rep_df.set_index(categorical, inplace=True)
- categories = rep_df.index.categories
+ categories: pd.Index = rep_df.index.categories
else:
gene_names = adata.raw.var_names if use_raw else adata.var_names
from ..plotting._anndata import _prepare_dataframe
@@ -164,7 +164,7 @@
import scipy.cluster.hierarchy as sch
from scipy.spatial import distance
- corr_matrix = mean_df.T.corr(method=cor_method)
+ corr_matrix = mean_df.T.corr(method=cor_method).clip(-1, 1)
corr_condensed = distance.squareform(1 - corr_matrix)
z_var = sch.linkage(
corr_condensed, method=linkage_method, optimal_ordering=optimal_ordering
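The clip matters because floating-point round-off can leave a Pearson correlation marginally above 1, so `1 - corr` turns slightly negative and scipy's linkage validation raises the error from the traceback. A minimal illustration of that failure mode:

```python
import numpy as np

# A correlation that should be exactly 1.0 but carries one ulp of round-off error.
corr = np.float64(1.0000000000000002)

print(1 - corr)                  # -2.220446049250313e-16 -> the "negative distance" scipy rejects
print(1 - np.clip(corr, -1, 1))  # 0.0 once the correlation matrix is clipped as in the patch
```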
|
{"golden_diff": "diff --git a/scanpy/tools/_dendrogram.py b/scanpy/tools/_dendrogram.py\n--- a/scanpy/tools/_dendrogram.py\n+++ b/scanpy/tools/_dendrogram.py\n@@ -145,7 +145,7 @@\n categorical.name = \"_\".join(groupby)\n \n rep_df.set_index(categorical, inplace=True)\n- categories = rep_df.index.categories\n+ categories: pd.Index = rep_df.index.categories\n else:\n gene_names = adata.raw.var_names if use_raw else adata.var_names\n from ..plotting._anndata import _prepare_dataframe\n@@ -164,7 +164,7 @@\n import scipy.cluster.hierarchy as sch\n from scipy.spatial import distance\n \n- corr_matrix = mean_df.T.corr(method=cor_method)\n+ corr_matrix = mean_df.T.corr(method=cor_method).clip(-1, 1)\n corr_condensed = distance.squareform(1 - corr_matrix)\n z_var = sch.linkage(\n corr_condensed, method=linkage_method, optimal_ordering=optimal_ordering\n", "issue": " Linkage 'Z' contains negative distances.\n### Please make sure these conditions are met\n\n- [X] I have checked that this issue has not already been reported.\n- [X] I have confirmed this bug exists on the latest version of scanpy.\n- [ ] (optional) I have confirmed this bug exists on the master branch of scanpy.\n\n### What happened?\n\nI'm encountering an error when running the sc.pl.rank_genes_groups_heatmap function in the scanpy package. The error message is \"Linkage 'Z' contains negative distances.\" What could be causing this error and how can I fix it?\n\n### Minimal code sample\n\n```python\nsc.pl.rank_genes_groups_heatmap(adata, n_genes=10, groupby='clusters',show_gene_labels=True,save='cluster.markers.heatmap.svg')\n```\n\n\n### Error output\n\n```pytb\nsc.pl.rank_genes_groups_heatmap(adata, n_genes=10, groupby=cluster,show_gene_labels=True,save=(id+'_processed.top10.cluster.markers.heatmap.svg'))\r\n File \"/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_tools/__init__.py\", line 673, in rank_genes_groups_heatmap\r\n return _rank_genes_groups_plot(\r\n File \"/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_tools/__init__.py\", line 592, in _rank_genes_groups_plot\r\n return heatmap(\r\n File \"/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_anndata.py\", line 1087, in heatmap\r\n dendro_data = _reorder_categories_after_dendrogram(\r\n File \"/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_anndata.py\", line 2134, in _reorder_categories_after_dendrogram\r\n key = _get_dendrogram_key(adata, dendrogram, groupby)\r\n File \"/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/plotting/_anndata.py\", line 2236, in _get_dendrogram_key\r\n dendrogram(adata, groupby, key_added=dendrogram_key)\r\n File \"/opt/conda/envs/st/lib/python3.8/site-packages/scanpy/tools/_dendrogram.py\", line 143, in dendrogram\r\n dendro_info = sch.dendrogram(z_var, labels=list(categories), no_plot=True)\r\n File \"/opt/conda/envs/st/lib/python3.8/site-packages/scipy/cluster/hierarchy.py\", line 3301, in dendrogram\r\n is_valid_linkage(Z, throw=True, name='Z')\r\n File \"/opt/conda/envs/st/lib/python3.8/site-packages/scipy/cluster/hierarchy.py\", line 2280, in is_valid_linkage\r\n raise ValueError('Linkage %scontains negative distances.' 
%\r\nValueError: Linkage 'Z' contains negative distances.\n```\n\n\n### Versions\n\n<details>\r\n\r\n```\r\n-----\r\nanndata 0.8.0\r\nscanpy 1.9.3\r\n-----\r\nPIL 9.4.0\r\nasciitree NA\r\nbeta_ufunc NA\r\nbinom_ufunc NA\r\ncairocffi 1.6.1\r\ncffi 1.15.1\r\ncloudpickle 2.2.1\r\ncolorama 0.4.6\r\ncycler 0.10.0\r\ncython_runtime NA\r\ncytoolz 0.12.0\r\ndask 2022.11.1\r\ndateutil 2.8.2\r\ndefusedxml 0.7.1\r\nentrypoints 0.4\r\nfasteners 0.17.3\r\nfsspec 2023.6.0\r\ngoogle NA\r\nh5py 3.7.0\r\nigraph 0.9.11\r\njinja2 3.0.3\r\njoblib 1.2.0\r\nkiwisolver 1.4.4\r\nleidenalg 0.8.10\r\nllvmlite 0.39.1\r\nlouvain 0.7.1\r\nlz4 4.3.2\r\nmarkupsafe 2.1.3\r\nmatplotlib 3.5.2\r\nmpl_toolkits NA\r\nmsgpack 1.0.5\r\nnatsort 8.2.0\r\nnbinom_ufunc NA\r\nnumba 0.56.4\r\nnumcodecs 0.11.0\r\nnumexpr 2.8.4\r\nnumpy 1.21.6\r\npackaging 23.1\r\npandas 1.5.3\r\npkg_resources NA\r\npsutil 5.9.5\r\npyarrow 8.0.0\r\npycparser 2.21\r\npyparsing 3.1.0\r\npytz 2023.3\r\nscipy 1.7.3\r\nsession_info 1.0.0\r\nsetuptools 68.0.0\r\nsetuptools_scm NA\r\nsix 1.16.0\r\nsklearn 1.0.1\r\nsnappy NA\r\nsphinxcontrib NA\r\ntblib 1.7.0\r\ntexttable 1.6.7\r\nthreadpoolctl 3.2.0\r\ntlz 0.12.0\r\ntoolz 0.12.0\r\ntyping_extensions NA\r\nwcwidth 0.2.6\r\nyaml 6.0\r\nzarr 2.15.0\r\nzipp NA\r\n-----\r\nPython 3.8.15 | packaged by conda-forge | (default, Nov 22 2022, 08:46:39) [GCC 10.4.0]\r\nLinux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.10\r\n-----\r\n\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "\"\"\"\nComputes a dendrogram based on a given categorical observation.\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any\n\nimport pandas as pd\nfrom pandas.api.types import CategoricalDtype\n\nfrom .. import logging as logg\nfrom .._compat import old_positionals\nfrom .._utils import _doc_params\nfrom ..neighbors._doc import doc_n_pcs, doc_use_rep\nfrom ._utils import _choose_representation\n\nif TYPE_CHECKING:\n from collections.abc import Sequence\n\n from anndata import AnnData\n\n\n@old_positionals(\n \"n_pcs\",\n \"use_rep\",\n \"var_names\",\n \"use_raw\",\n \"cor_method\",\n \"linkage_method\",\n \"optimal_ordering\",\n \"key_added\",\n \"inplace\",\n)\n@_doc_params(n_pcs=doc_n_pcs, use_rep=doc_use_rep)\ndef dendrogram(\n adata: AnnData,\n groupby: str | Sequence[str],\n *,\n n_pcs: int | None = None,\n use_rep: str | None = None,\n var_names: Sequence[str] | None = None,\n use_raw: bool | None = None,\n cor_method: str = \"pearson\",\n linkage_method: str = \"complete\",\n optimal_ordering: bool = False,\n key_added: str | None = None,\n inplace: bool = True,\n) -> dict[str, Any] | None:\n \"\"\"\\\n Computes a hierarchical clustering for the given `groupby` categories.\n\n By default, the PCA representation is used unless `.X`\n has less than 50 variables.\n\n Alternatively, a list of `var_names` (e.g. genes) can be given.\n\n Average values of either `var_names` or components are used\n to compute a correlation matrix.\n\n The hierarchical clustering can be visualized using\n :func:`scanpy.pl.dendrogram` or multiple other visualizations that can\n include a dendrogram: :func:`~scanpy.pl.matrixplot`,\n :func:`~scanpy.pl.heatmap`, :func:`~scanpy.pl.dotplot`,\n and :func:`~scanpy.pl.stacked_violin`.\n\n .. note::\n The computation of the hierarchical clustering is based on predefined\n groups and not per cell. 
The correlation matrix is computed using by\n default pearson but other methods are available.\n\n Parameters\n ----------\n adata\n Annotated data matrix\n {n_pcs}\n {use_rep}\n var_names\n List of var_names to use for computing the hierarchical clustering.\n If `var_names` is given, then `use_rep` and `n_pcs` is ignored.\n use_raw\n Only when `var_names` is not None.\n Use `raw` attribute of `adata` if present.\n cor_method\n correlation method to use.\n Options are 'pearson', 'kendall', and 'spearman'\n linkage_method\n linkage method to use. See :func:`scipy.cluster.hierarchy.linkage`\n for more information.\n optimal_ordering\n Same as the optimal_ordering argument of :func:`scipy.cluster.hierarchy.linkage`\n which reorders the linkage matrix so that the distance between successive\n leaves is minimal.\n key_added\n By default, the dendrogram information is added to\n `.uns[f'dendrogram_{{groupby}}']`.\n Notice that the `groupby` information is added to the dendrogram.\n inplace\n If `True`, adds dendrogram information to `adata.uns[key_added]`,\n else this function returns the information.\n\n Returns\n -------\n Returns `None` if `inplace=True`, else returns a `dict` with dendrogram information. Sets the following field if `inplace=True`:\n\n `adata.uns[f'dendrogram_{{group_by}}' | key_added]` : :class:`dict`\n Dendrogram information.\n\n Examples\n --------\n >>> import scanpy as sc\n >>> adata = sc.datasets.pbmc68k_reduced()\n >>> sc.tl.dendrogram(adata, groupby='bulk_labels')\n >>> sc.pl.dendrogram(adata, groupby='bulk_labels') # doctest: +SKIP\n <Axes: >\n >>> markers = ['C1QA', 'PSAP', 'CD79A', 'CD79B', 'CST3', 'LYZ']\n >>> sc.pl.dotplot(adata, markers, groupby='bulk_labels', dendrogram=True)\n \"\"\"\n if isinstance(groupby, str):\n # if not a list, turn into a list\n groupby = [groupby]\n for group in groupby:\n if group not in adata.obs_keys():\n raise ValueError(\n \"groupby has to be a valid observation. \"\n f\"Given value: {group}, valid observations: {adata.obs_keys()}\"\n )\n if not isinstance(adata.obs[group].dtype, CategoricalDtype):\n raise ValueError(\n \"groupby has to be a categorical observation. 
\"\n f\"Given value: {group}, Column type: {adata.obs[group].dtype}\"\n )\n\n if var_names is None:\n rep_df = pd.DataFrame(\n _choose_representation(adata, use_rep=use_rep, n_pcs=n_pcs)\n )\n categorical = adata.obs[groupby[0]]\n if len(groupby) > 1:\n for group in groupby[1:]:\n # create new category by merging the given groupby categories\n categorical = (\n categorical.astype(str) + \"_\" + adata.obs[group].astype(str)\n ).astype(\"category\")\n categorical.name = \"_\".join(groupby)\n\n rep_df.set_index(categorical, inplace=True)\n categories = rep_df.index.categories\n else:\n gene_names = adata.raw.var_names if use_raw else adata.var_names\n from ..plotting._anndata import _prepare_dataframe\n\n categories, rep_df = _prepare_dataframe(\n adata, gene_names, groupby, use_raw=use_raw\n )\n\n # aggregate values within categories using 'mean'\n mean_df = (\n rep_df.groupby(level=0, observed=True)\n .mean()\n .loc[categories] # Fixed ordering for pandas < 2\n )\n\n import scipy.cluster.hierarchy as sch\n from scipy.spatial import distance\n\n corr_matrix = mean_df.T.corr(method=cor_method)\n corr_condensed = distance.squareform(1 - corr_matrix)\n z_var = sch.linkage(\n corr_condensed, method=linkage_method, optimal_ordering=optimal_ordering\n )\n dendro_info = sch.dendrogram(z_var, labels=list(categories), no_plot=True)\n\n dat = dict(\n linkage=z_var,\n groupby=groupby,\n use_rep=use_rep,\n cor_method=cor_method,\n linkage_method=linkage_method,\n categories_ordered=dendro_info[\"ivl\"],\n categories_idx_ordered=dendro_info[\"leaves\"],\n dendrogram_info=dendro_info,\n correlation_matrix=corr_matrix.values,\n )\n\n if inplace:\n if key_added is None:\n key_added = f'dendrogram_{\"_\".join(groupby)}'\n logg.info(f\"Storing dendrogram info using `.uns[{key_added!r}]`\")\n adata.uns[key_added] = dat\n else:\n return dat\n", "path": "scanpy/tools/_dendrogram.py"}], "after_files": [{"content": "\"\"\"\nComputes a dendrogram based on a given categorical observation.\n\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any\n\nimport pandas as pd\nfrom pandas.api.types import CategoricalDtype\n\nfrom .. import logging as logg\nfrom .._compat import old_positionals\nfrom .._utils import _doc_params\nfrom ..neighbors._doc import doc_n_pcs, doc_use_rep\nfrom ._utils import _choose_representation\n\nif TYPE_CHECKING:\n from collections.abc import Sequence\n\n from anndata import AnnData\n\n\n@old_positionals(\n \"n_pcs\",\n \"use_rep\",\n \"var_names\",\n \"use_raw\",\n \"cor_method\",\n \"linkage_method\",\n \"optimal_ordering\",\n \"key_added\",\n \"inplace\",\n)\n@_doc_params(n_pcs=doc_n_pcs, use_rep=doc_use_rep)\ndef dendrogram(\n adata: AnnData,\n groupby: str | Sequence[str],\n *,\n n_pcs: int | None = None,\n use_rep: str | None = None,\n var_names: Sequence[str] | None = None,\n use_raw: bool | None = None,\n cor_method: str = \"pearson\",\n linkage_method: str = \"complete\",\n optimal_ordering: bool = False,\n key_added: str | None = None,\n inplace: bool = True,\n) -> dict[str, Any] | None:\n \"\"\"\\\n Computes a hierarchical clustering for the given `groupby` categories.\n\n By default, the PCA representation is used unless `.X`\n has less than 50 variables.\n\n Alternatively, a list of `var_names` (e.g. 
genes) can be given.\n\n Average values of either `var_names` or components are used\n to compute a correlation matrix.\n\n The hierarchical clustering can be visualized using\n :func:`scanpy.pl.dendrogram` or multiple other visualizations that can\n include a dendrogram: :func:`~scanpy.pl.matrixplot`,\n :func:`~scanpy.pl.heatmap`, :func:`~scanpy.pl.dotplot`,\n and :func:`~scanpy.pl.stacked_violin`.\n\n .. note::\n The computation of the hierarchical clustering is based on predefined\n groups and not per cell. The correlation matrix is computed using by\n default pearson but other methods are available.\n\n Parameters\n ----------\n adata\n Annotated data matrix\n {n_pcs}\n {use_rep}\n var_names\n List of var_names to use for computing the hierarchical clustering.\n If `var_names` is given, then `use_rep` and `n_pcs` is ignored.\n use_raw\n Only when `var_names` is not None.\n Use `raw` attribute of `adata` if present.\n cor_method\n correlation method to use.\n Options are 'pearson', 'kendall', and 'spearman'\n linkage_method\n linkage method to use. See :func:`scipy.cluster.hierarchy.linkage`\n for more information.\n optimal_ordering\n Same as the optimal_ordering argument of :func:`scipy.cluster.hierarchy.linkage`\n which reorders the linkage matrix so that the distance between successive\n leaves is minimal.\n key_added\n By default, the dendrogram information is added to\n `.uns[f'dendrogram_{{groupby}}']`.\n Notice that the `groupby` information is added to the dendrogram.\n inplace\n If `True`, adds dendrogram information to `adata.uns[key_added]`,\n else this function returns the information.\n\n Returns\n -------\n Returns `None` if `inplace=True`, else returns a `dict` with dendrogram information. Sets the following field if `inplace=True`:\n\n `adata.uns[f'dendrogram_{{group_by}}' | key_added]` : :class:`dict`\n Dendrogram information.\n\n Examples\n --------\n >>> import scanpy as sc\n >>> adata = sc.datasets.pbmc68k_reduced()\n >>> sc.tl.dendrogram(adata, groupby='bulk_labels')\n >>> sc.pl.dendrogram(adata, groupby='bulk_labels') # doctest: +SKIP\n <Axes: >\n >>> markers = ['C1QA', 'PSAP', 'CD79A', 'CD79B', 'CST3', 'LYZ']\n >>> sc.pl.dotplot(adata, markers, groupby='bulk_labels', dendrogram=True)\n \"\"\"\n if isinstance(groupby, str):\n # if not a list, turn into a list\n groupby = [groupby]\n for group in groupby:\n if group not in adata.obs_keys():\n raise ValueError(\n \"groupby has to be a valid observation. \"\n f\"Given value: {group}, valid observations: {adata.obs_keys()}\"\n )\n if not isinstance(adata.obs[group].dtype, CategoricalDtype):\n raise ValueError(\n \"groupby has to be a categorical observation. 
\"\n f\"Given value: {group}, Column type: {adata.obs[group].dtype}\"\n )\n\n if var_names is None:\n rep_df = pd.DataFrame(\n _choose_representation(adata, use_rep=use_rep, n_pcs=n_pcs)\n )\n categorical = adata.obs[groupby[0]]\n if len(groupby) > 1:\n for group in groupby[1:]:\n # create new category by merging the given groupby categories\n categorical = (\n categorical.astype(str) + \"_\" + adata.obs[group].astype(str)\n ).astype(\"category\")\n categorical.name = \"_\".join(groupby)\n\n rep_df.set_index(categorical, inplace=True)\n categories: pd.Index = rep_df.index.categories\n else:\n gene_names = adata.raw.var_names if use_raw else adata.var_names\n from ..plotting._anndata import _prepare_dataframe\n\n categories, rep_df = _prepare_dataframe(\n adata, gene_names, groupby, use_raw=use_raw\n )\n\n # aggregate values within categories using 'mean'\n mean_df = (\n rep_df.groupby(level=0, observed=True)\n .mean()\n .loc[categories] # Fixed ordering for pandas < 2\n )\n\n import scipy.cluster.hierarchy as sch\n from scipy.spatial import distance\n\n corr_matrix = mean_df.T.corr(method=cor_method).clip(-1, 1)\n corr_condensed = distance.squareform(1 - corr_matrix)\n z_var = sch.linkage(\n corr_condensed, method=linkage_method, optimal_ordering=optimal_ordering\n )\n dendro_info = sch.dendrogram(z_var, labels=list(categories), no_plot=True)\n\n dat = dict(\n linkage=z_var,\n groupby=groupby,\n use_rep=use_rep,\n cor_method=cor_method,\n linkage_method=linkage_method,\n categories_ordered=dendro_info[\"ivl\"],\n categories_idx_ordered=dendro_info[\"leaves\"],\n dendrogram_info=dendro_info,\n correlation_matrix=corr_matrix.values,\n )\n\n if inplace:\n if key_added is None:\n key_added = f'dendrogram_{\"_\".join(groupby)}'\n logg.info(f\"Storing dendrogram info using `.uns[{key_added!r}]`\")\n adata.uns[key_added] = dat\n else:\n return dat\n", "path": "scanpy/tools/_dendrogram.py"}]}
| 3,748 | 248 |
gh_patches_debug_10622
|
rasdani/github-patches
|
git_diff
|
mdn__kuma-6143
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Require minimum length for "explanation" field in BCD signals
**Summary**
_What should be changed?_
A minimum length of 10 characters should be required for the "explanation" field in BCD signals
**Rationale**
_What problems would this solve?_
Less spam submissions
**Audience**
_Who would use this changed feature?_
BCD maintainers
**Proposal**
_What would users see and do? What would happen as a result?_
Users would be required to enter a meaningful explanation and hopefully refrain from submitting "fehfs", "test", and other garbage.
**Additional context**
_Is there anything else we should know?_
Was discussed in https://github.com/mdn/sprints/issues/2289
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kuma/api/v1/serializers.py`
Content:
```
1 from rest_framework import exceptions
2 from rest_framework import serializers
3
4 from kuma.wiki.models import BCSignal, Document
5
6
7 class BCSignalSerializer(serializers.Serializer):
8 feature = serializers.CharField(max_length=255)
9 browsers = serializers.CharField(max_length=255)
10 slug = serializers.CharField(max_length=255)
11 locale = serializers.CharField(max_length=7)
12 explanation = serializers.CharField(allow_blank=True, max_length=1000)
13 supporting_material = serializers.CharField(
14 allow_blank=True, required=False, max_length=1000
15 )
16
17 def create(self, validated_data):
18 slug = validated_data.pop("slug")
19 locale = validated_data.pop("locale")
20 document = Document.objects.filter(slug=slug, locale=locale).first()
21
22 if document:
23 return BCSignal.objects.create(document=document, **validated_data)
24 raise exceptions.ValidationError("Document not found")
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kuma/api/v1/serializers.py b/kuma/api/v1/serializers.py
--- a/kuma/api/v1/serializers.py
+++ b/kuma/api/v1/serializers.py
@@ -9,7 +9,11 @@
browsers = serializers.CharField(max_length=255)
slug = serializers.CharField(max_length=255)
locale = serializers.CharField(max_length=7)
- explanation = serializers.CharField(allow_blank=True, max_length=1000)
+ explanation = serializers.CharField(
+ # Make sure these match the constants in bcd-signal.jsx
+ max_length=1000,
+ min_length=10,
+ )
supporting_material = serializers.CharField(
allow_blank=True, required=False, max_length=1000
)
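A minimal sketch of how the new `min_length` behaves, using a throwaway serializer rather than the real `BCSignalSerializer`; the `settings.configure()` call is only there so DRF fields can be exercised outside a Django project.

```python
import django
from django.conf import settings

settings.configure()  # bare-bones Django config, enough for standalone DRF field validation
django.setup()

from rest_framework import serializers

class ExplanationSerializer(serializers.Serializer):
    explanation = serializers.CharField(max_length=1000, min_length=10)

short = ExplanationSerializer(data={"explanation": "test"})
print(short.is_valid())  # False: fewer than 10 characters, so one-word spam is rejected
print(short.errors)      # explanation: "Ensure this field has at least 10 characters."

ok = ExplanationSerializer(data={"explanation": "Chrome 79 does support this API."})
print(ok.is_valid())     # True
```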
|
{"golden_diff": "diff --git a/kuma/api/v1/serializers.py b/kuma/api/v1/serializers.py\n--- a/kuma/api/v1/serializers.py\n+++ b/kuma/api/v1/serializers.py\n@@ -9,7 +9,11 @@\n browsers = serializers.CharField(max_length=255)\n slug = serializers.CharField(max_length=255)\n locale = serializers.CharField(max_length=7)\n- explanation = serializers.CharField(allow_blank=True, max_length=1000)\n+ explanation = serializers.CharField(\n+ # Make sure these match the constants in bcd-signal.jsx\n+ max_length=1000,\n+ min_length=10,\n+ )\n supporting_material = serializers.CharField(\n allow_blank=True, required=False, max_length=1000\n )\n", "issue": "Require minium length for \"explanation\" field in BCD signals\n**Summary**\r\n_What should be changed?_\r\nA minimum length of 10 characters should be required for the \"explanation\" field in BCD signals\r\n\r\n**Rationale**\r\n_What problems would this solve?_\r\nLess spam submissions\r\n\r\n**Audience**\r\n_Who would use this changed feature?_\r\nBCD maintainers\r\n\r\n**Proposal**\r\n_What would users see and do? What would happen as a result?_\r\nUsers would be required to enter a meaningful explanation and hopefully refrain from submitting \"fehfs\", \"test\", and other garbage.\r\n\r\n**Additional context**\r\n_Is there anything else we should know?_\r\nWas discussed in https://github.com/mdn/sprints/issues/2289\n", "before_files": [{"content": "from rest_framework import exceptions\nfrom rest_framework import serializers\n\nfrom kuma.wiki.models import BCSignal, Document\n\n\nclass BCSignalSerializer(serializers.Serializer):\n feature = serializers.CharField(max_length=255)\n browsers = serializers.CharField(max_length=255)\n slug = serializers.CharField(max_length=255)\n locale = serializers.CharField(max_length=7)\n explanation = serializers.CharField(allow_blank=True, max_length=1000)\n supporting_material = serializers.CharField(\n allow_blank=True, required=False, max_length=1000\n )\n\n def create(self, validated_data):\n slug = validated_data.pop(\"slug\")\n locale = validated_data.pop(\"locale\")\n document = Document.objects.filter(slug=slug, locale=locale).first()\n\n if document:\n return BCSignal.objects.create(document=document, **validated_data)\n raise exceptions.ValidationError(\"Document not found\")\n", "path": "kuma/api/v1/serializers.py"}], "after_files": [{"content": "from rest_framework import exceptions\nfrom rest_framework import serializers\n\nfrom kuma.wiki.models import BCSignal, Document\n\n\nclass BCSignalSerializer(serializers.Serializer):\n feature = serializers.CharField(max_length=255)\n browsers = serializers.CharField(max_length=255)\n slug = serializers.CharField(max_length=255)\n locale = serializers.CharField(max_length=7)\n explanation = serializers.CharField(\n # Make sure these match the constants in bcd-signal.jsx\n max_length=1000,\n min_length=10,\n )\n supporting_material = serializers.CharField(\n allow_blank=True, required=False, max_length=1000\n )\n\n def create(self, validated_data):\n slug = validated_data.pop(\"slug\")\n locale = validated_data.pop(\"locale\")\n document = Document.objects.filter(slug=slug, locale=locale).first()\n\n if document:\n return BCSignal.objects.create(document=document, **validated_data)\n raise exceptions.ValidationError(\"Document not found\")\n", "path": "kuma/api/v1/serializers.py"}]}
| 658 | 181 |
gh_patches_debug_9125
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-10039
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[request] mimalloc/1.7.5
### Package Details
* Package Name/Version: **mimalloc/1.7.5**
* Changelog: **https://github.com/microsoft/mimalloc/releases/tag/v1.7.5**
The above-mentioned version is newly released by the upstream project and not yet available as a recipe. Please add this version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/mimalloc/all/conanfile.py`
Content:
```
1 from conans import ConanFile, CMake, tools
2 from conans.errors import ConanInvalidConfiguration
3 import os
4 import shutil
5 import textwrap
6
7 required_conan_version = ">=1.43.0"
8
9
10 class MimallocConan(ConanFile):
11 name = "mimalloc"
12 license = "MIT"
13 url = "https://github.com/conan-io/conan-center-index"
14 homepage = "https://github.com/microsoft/mimalloc"
15 description = "mimalloc is a compact general purpose allocator with excellent performance."
16 topics = ("mimalloc", "allocator", "performance", "microsoft")
17
18 settings = "os", "arch", "compiler", "build_type"
19 options = {
20 "shared": [True, False],
21 "fPIC": [True, False],
22 "secure": [True, False],
23 "override": [True, False],
24 "inject": [True, False],
25 "single_object": [True, False],
26 }
27 default_options = {
28 "shared": False,
29 "fPIC": True,
30 "secure": False,
31 "override": False,
32 "inject": False,
33 "single_object": False,
34 }
35
36 generators = "cmake"
37 _cmake = None
38
39 @property
40 def _source_subfolder(self):
41 return "source_subfolder"
42
43 @property
44 def _build_subfolder(self):
45 return "build_subfolder"
46
47 @property
48 def _compilers_minimum_version(self):
49 return {
50 "gcc": "7",
51 "Visual Studio": "15",
52 "clang": "5",
53 "apple-clang": "10",
54 }
55
56 def export_sources(self):
57 self.copy("CMakeLists.txt")
58 for patch in self.conan_data.get("patches", {}).get(self.version, []):
59 self.copy(patch["patch_file"])
60
61 def config_options(self):
62 if self.settings.os == "Windows":
63 del self.options.fPIC
64
65 # single_object and inject are options
66 # only when overriding on Unix-like platforms:
67 if self.settings.compiler == "Visual Studio":
68 del self.options.single_object
69 del self.options.inject
70
71 def configure(self):
72 if self.options.shared:
73 del self.options.fPIC
74
75 # single_object is valid only for static
76 # override:
77 if self.options.get_safe("single_object"):
78 del self.options.single_object
79
80 # inject is valid only for Unix-like dynamic override:
81 if not self.options.shared and self.options.get_safe("inject"):
82 del self.options.inject
83
84 # single_object and inject are valid only when
85 # overriding on Unix-like platforms:
86 if not self.options.override:
87 if self.options.get_safe("single_object"):
88 del self.options.single_object
89 if self.options.get_safe("inject"):
90 del self.options.inject
91
92 def validate(self):
93 # Shared overriding requires dynamic runtime for MSVC:
94 if self.options.override and \
95 self.options.shared and \
96 self.settings.compiler == "Visual Studio" and \
97 "MT" in str(self.settings.compiler.runtime):
98 raise ConanInvalidConfiguration(
99 "Dynamic runtime (MD/MDd) is required when using mimalloc as a shared library for override")
100
101 if self.options.override and \
102 self.options.get_safe("single_object") and \
103 self.options.get_safe("inject"):
104 raise ConanInvalidConfiguration("Single object is incompatible with library injection");
105
106 if self.settings.compiler.get_safe("cppstd"):
107 tools.check_min_cppstd(self, "17")
108
109 minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)
110
111 if not minimum_version:
112 self.output.warn("mimalloc requires C++17. Your compiler is unknown. Assuming it supports C++17.")
113 elif tools.Version(self.settings.compiler.version) < minimum_version:
114 raise ConanInvalidConfiguration("mimalloc requires a compiler that supports at least C++17")
115
116 def source(self):
117 tools.get(**self.conan_data["sources"][self.version],
118 destination=self._source_subfolder, strip_root=True)
119
120 def _configure_cmake(self):
121 if self._cmake:
122 return self._cmake
123 self._cmake = CMake(self)
124 if self._cmake.is_multi_configuration:
125 self._cmake.definitions["CMAKE_BUILD_TYPE"] = self.settings.build_type
126 self._cmake.definitions["MI_BUILD_TESTS"] = "OFF"
127 self._cmake.definitions["MI_BUILD_SHARED"] = self.options.shared
128 self._cmake.definitions["MI_BUILD_STATIC"] = not self.options.shared
129 self._cmake.definitions["MI_BUILD_OBJECT"] = self.options.get_safe("single_object", False)
130 self._cmake.definitions["MI_OVERRIDE"] = "ON" if self.options.override else "OFF"
131 self._cmake.definitions["MI_SECURE"] = "ON" if self.options.secure else "OFF"
132 if tools.Version(self.version) >= "1.7.0":
133 self._cmake.definitions["MI_INSTALL_TOPLEVEL"] = "ON"
134 self._cmake.configure(build_folder=self._build_subfolder)
135 return self._cmake
136
137 def build(self):
138 for patch in self.conan_data.get("patches", {}).get(self.version, []):
139 tools.patch(**patch)
140 if self.settings.compiler == "Visual Studio" and self.settings.arch == "x86":
141 tools.replace_path_in_file(os.path.join(self._source_subfolder, "CMakeLists.txt"),
142 "mimalloc-redirect.lib", "mimalloc-redirect32.lib")
143 with tools.vcvars(self.settings) if self.settings.compiler == "Visual Studio" else tools.no_op():
144 cmake = self._configure_cmake()
145 cmake.build()
146
147 def package(self):
148 self.copy("LICENSE", dst="licenses", src=self._source_subfolder)
149 with tools.vcvars(self.settings) if self.settings.compiler == "Visual Studio" else tools.no_op():
150 cmake = self._configure_cmake()
151 cmake.install()
152
153 tools.rmdir(os.path.join(self.package_folder, "cmake"))
154 tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
155
156 if self.options.get_safe("single_object"):
157 tools.remove_files_by_mask(os.path.join(self.package_folder, "lib"),
158 "*.a")
159 shutil.move(os.path.join(self.package_folder, self._obj_name + ".o"),
160 os.path.join(self.package_folder, "lib"))
161 shutil.copy(os.path.join(self.package_folder, "lib", self._obj_name + ".o"),
162 os.path.join(self.package_folder, "lib", self._obj_name))
163
164 if self.settings.os == "Windows" and self.options.shared:
165 if self.settings.arch == "x86_64":
166 self.copy("mimalloc-redirect.dll", src=os.path.join(self._source_subfolder, "bin"),
167 dst="bin")
168 elif self.settings.arch == "x86":
169 self.copy("mimalloc-redirect32.dll", src=os.path.join(self._source_subfolder, "bin"),
170 dst="bin")
171
172 tools.rmdir(os.path.join(self.package_folder, "share"))
173
174 cmake_target = "mimalloc" if self.options.shared else "mimalloc-static"
175 self._create_cmake_module_alias_targets(
176 os.path.join(self.package_folder, self._module_file_rel_path),
177 {cmake_target: "mimalloc::mimalloc"}
178 )
179
180 @staticmethod
181 def _create_cmake_module_alias_targets(module_file, targets):
182 content = ""
183 for alias, aliased in targets.items():
184 content += textwrap.dedent("""\
185 if(TARGET {aliased} AND NOT TARGET {alias})
186 add_library({alias} INTERFACE IMPORTED)
187 set_property(TARGET {alias} PROPERTY INTERFACE_LINK_LIBRARIES {aliased})
188 endif()
189 """.format(alias=alias, aliased=aliased))
190 tools.save(module_file, content)
191
192 @property
193 def _module_subfolder(self):
194 return os.path.join("lib", "cmake")
195
196 @property
197 def _module_file_rel_path(self):
198 return os.path.join(self._module_subfolder,
199 "conan-official-{}-targets.cmake".format(self.name))
200
201 @property
202 def _obj_name(self):
203 name = "mimalloc"
204 if self.options.secure:
205 name += "-secure"
206 if self.settings.build_type not in ("Release", "RelWithDebInfo", "MinSizeRel"):
207 name += "-{}".format(str(self.settings.build_type).lower())
208 return name
209
210 @property
211 def _lib_name(self):
212 name = "mimalloc" if self.settings.os == "Windows" else "libmimalloc"
213
214 if self.settings.os == "Windows" and not self.options.shared:
215 name += "-static"
216 if self.options.secure:
217 name += "-secure"
218 if self.settings.build_type not in ("Release", "RelWithDebInfo", "MinSizeRel"):
219 name += "-{}".format(str(self.settings.build_type).lower())
220 return name
221
222 def package_info(self):
223 self.cpp_info.set_property("cmake_file_name", "mimalloc")
224 self.cpp_info.set_property("cmake_target_name", "mimalloc" if self.options.shared else "mimalloc-static")
225
226 self.cpp_info.names["cmake_find_package"] = "mimalloc"
227 self.cpp_info.names["cmake_find_package_multi"] = "mimalloc"
228 self.cpp_info.builddirs.append(self._module_subfolder)
229 self.cpp_info.build_modules["cmake_find_package"] = [self._module_file_rel_path]
230 self.cpp_info.build_modules["cmake_find_package_multi"] = [self._module_file_rel_path]
231
232 if self.options.get_safe("inject"):
233 self.cpp_info.includedirs = []
234 self.cpp_info.libdirs = []
235 self.cpp_info.resdirs = []
236 return
237
238 if self.options.get_safe("single_object"):
239 obj_ext = "o"
240 obj_file = "{}.{}".format(self._obj_name, obj_ext)
241 obj_path = os.path.join(self.package_folder, "lib", obj_file)
242 self.cpp_info.exelinkflags = [obj_path]
243 self.cpp_info.sharedlinkflags = [obj_path]
244 self.cpp_info.libdirs = []
245 self.cpp_info.bindirs = []
246 else:
247 self.cpp_info.libs = tools.collect_libs(self)
248
249 if self.settings.os == "Linux":
250 self.cpp_info.system_libs.append("pthread")
251 if not self.options.shared:
252 if self.settings.os == "Windows":
253 self.cpp_info.system_libs.extend(["psapi", "shell32", "user32", "bcrypt"])
254 elif self.settings.os == "Linux":
255 self.cpp_info.system_libs.append("rt")
256
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/recipes/mimalloc/all/conanfile.py b/recipes/mimalloc/all/conanfile.py
--- a/recipes/mimalloc/all/conanfile.py
+++ b/recipes/mimalloc/all/conanfile.py
@@ -101,7 +101,7 @@
if self.options.override and \
self.options.get_safe("single_object") and \
self.options.get_safe("inject"):
- raise ConanInvalidConfiguration("Single object is incompatible with library injection");
+ raise ConanInvalidConfiguration("Single object is incompatible with library injection")
if self.settings.compiler.get_safe("cppstd"):
tools.check_min_cppstd(self, "17")
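A note on the fix above: the accepted diff only removes a stray trailing semicolon in `validate()`. The version bump requested in the issue is normally made by registering 1.7.5 in the recipe's `conandata.yml` (and `config.yml`), which `source()` then consumes through `self.conan_data["sources"][self.version]`. The sketch below only illustrates the shape of that mapping; the URL follows the usual GitHub archive pattern and the checksum is a placeholder, neither is taken from the real recipe.

```python
# Illustrative only: the shape of self.conan_data that source() reads.
conan_data = {
    "sources": {
        "1.7.5": {
            "url": "https://github.com/microsoft/mimalloc/archive/v1.7.5.tar.gz",  # assumed pattern
            "sha256": "<sha256 of the v1.7.5 tarball>",  # placeholder, not the real checksum
        }
    }
}

version = "1.7.5"
download_args = conan_data["sources"][version]
print(download_args["url"])  # what tools.get(**download_args, ...) would fetch
```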
|
{"golden_diff": "diff --git a/recipes/mimalloc/all/conanfile.py b/recipes/mimalloc/all/conanfile.py\n--- a/recipes/mimalloc/all/conanfile.py\n+++ b/recipes/mimalloc/all/conanfile.py\n@@ -101,7 +101,7 @@\n if self.options.override and \\\n self.options.get_safe(\"single_object\") and \\\n self.options.get_safe(\"inject\"):\n- raise ConanInvalidConfiguration(\"Single object is incompatible with library injection\");\n+ raise ConanInvalidConfiguration(\"Single object is incompatible with library injection\")\n \n if self.settings.compiler.get_safe(\"cppstd\"):\n tools.check_min_cppstd(self, \"17\")\n", "issue": "[request] mimalloc/1.7.5\n### Package Details\r\n * Package Name/Version: **mimalloc/1.7.5**\r\n * Changelog: **https://github.com/microsoft/mimalloc/releases/tag/v1.7.5**\r\n\r\n\r\nThe above mentioned version is newly released by the upstream project and not yet available as a recipe. Please add this version.\r\n\n", "before_files": [{"content": "from conans import ConanFile, CMake, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport shutil\nimport textwrap\n\nrequired_conan_version = \">=1.43.0\"\n\n\nclass MimallocConan(ConanFile):\n name = \"mimalloc\"\n license = \"MIT\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/microsoft/mimalloc\"\n description = \"mimalloc is a compact general purpose allocator with excellent performance.\"\n topics = (\"mimalloc\", \"allocator\", \"performance\", \"microsoft\")\n\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"secure\": [True, False],\n \"override\": [True, False],\n \"inject\": [True, False],\n \"single_object\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"secure\": False,\n \"override\": False,\n \"inject\": False,\n \"single_object\": False,\n }\n\n generators = \"cmake\"\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _build_subfolder(self):\n return \"build_subfolder\"\n\n @property\n def _compilers_minimum_version(self):\n return {\n \"gcc\": \"7\",\n \"Visual Studio\": \"15\",\n \"clang\": \"5\",\n \"apple-clang\": \"10\",\n }\n\n def export_sources(self):\n self.copy(\"CMakeLists.txt\")\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n self.copy(patch[\"patch_file\"])\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n # single_object and inject are options\n # only when overriding on Unix-like platforms:\n if self.settings.compiler == \"Visual Studio\":\n del self.options.single_object\n del self.options.inject\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n # single_object is valid only for static\n # override:\n if self.options.get_safe(\"single_object\"):\n del self.options.single_object\n\n # inject is valid only for Unix-like dynamic override:\n if not self.options.shared and self.options.get_safe(\"inject\"):\n del self.options.inject\n\n # single_object and inject are valid only when\n # overriding on Unix-like platforms:\n if not self.options.override:\n if self.options.get_safe(\"single_object\"):\n del self.options.single_object\n if self.options.get_safe(\"inject\"):\n del self.options.inject\n\n def validate(self):\n # Shared overriding requires dynamic runtime for MSVC:\n if self.options.override and \\\n self.options.shared and \\\n self.settings.compiler == 
\"Visual Studio\" and \\\n \"MT\" in str(self.settings.compiler.runtime):\n raise ConanInvalidConfiguration(\n \"Dynamic runtime (MD/MDd) is required when using mimalloc as a shared library for override\")\n\n if self.options.override and \\\n self.options.get_safe(\"single_object\") and \\\n self.options.get_safe(\"inject\"):\n raise ConanInvalidConfiguration(\"Single object is incompatible with library injection\");\n\n if self.settings.compiler.get_safe(\"cppstd\"):\n tools.check_min_cppstd(self, \"17\")\n\n minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)\n\n if not minimum_version:\n self.output.warn(\"mimalloc requires C++17. Your compiler is unknown. Assuming it supports C++17.\")\n elif tools.Version(self.settings.compiler.version) < minimum_version:\n raise ConanInvalidConfiguration(\"mimalloc requires a compiler that supports at least C++17\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n if self._cmake.is_multi_configuration:\n self._cmake.definitions[\"CMAKE_BUILD_TYPE\"] = self.settings.build_type\n self._cmake.definitions[\"MI_BUILD_TESTS\"] = \"OFF\"\n self._cmake.definitions[\"MI_BUILD_SHARED\"] = self.options.shared\n self._cmake.definitions[\"MI_BUILD_STATIC\"] = not self.options.shared\n self._cmake.definitions[\"MI_BUILD_OBJECT\"] = self.options.get_safe(\"single_object\", False)\n self._cmake.definitions[\"MI_OVERRIDE\"] = \"ON\" if self.options.override else \"OFF\"\n self._cmake.definitions[\"MI_SECURE\"] = \"ON\" if self.options.secure else \"OFF\"\n if tools.Version(self.version) >= \"1.7.0\":\n self._cmake.definitions[\"MI_INSTALL_TOPLEVEL\"] = \"ON\"\n self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n\n def build(self):\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n tools.patch(**patch)\n if self.settings.compiler == \"Visual Studio\" and self.settings.arch == \"x86\":\n tools.replace_path_in_file(os.path.join(self._source_subfolder, \"CMakeLists.txt\"),\n \"mimalloc-redirect.lib\", \"mimalloc-redirect32.lib\")\n with tools.vcvars(self.settings) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n with tools.vcvars(self.settings) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n cmake = self._configure_cmake()\n cmake.install()\n\n tools.rmdir(os.path.join(self.package_folder, \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n\n if self.options.get_safe(\"single_object\"):\n tools.remove_files_by_mask(os.path.join(self.package_folder, \"lib\"),\n \"*.a\")\n shutil.move(os.path.join(self.package_folder, self._obj_name + \".o\"),\n os.path.join(self.package_folder, \"lib\"))\n shutil.copy(os.path.join(self.package_folder, \"lib\", self._obj_name + \".o\"),\n os.path.join(self.package_folder, \"lib\", self._obj_name))\n\n if self.settings.os == \"Windows\" and self.options.shared:\n if self.settings.arch == \"x86_64\":\n self.copy(\"mimalloc-redirect.dll\", src=os.path.join(self._source_subfolder, \"bin\"),\n dst=\"bin\")\n elif self.settings.arch == \"x86\":\n self.copy(\"mimalloc-redirect32.dll\", src=os.path.join(self._source_subfolder, \"bin\"),\n dst=\"bin\")\n\n 
tools.rmdir(os.path.join(self.package_folder, \"share\"))\n\n cmake_target = \"mimalloc\" if self.options.shared else \"mimalloc-static\"\n self._create_cmake_module_alias_targets(\n os.path.join(self.package_folder, self._module_file_rel_path),\n {cmake_target: \"mimalloc::mimalloc\"}\n )\n\n @staticmethod\n def _create_cmake_module_alias_targets(module_file, targets):\n content = \"\"\n for alias, aliased in targets.items():\n content += textwrap.dedent(\"\"\"\\\n if(TARGET {aliased} AND NOT TARGET {alias})\n add_library({alias} INTERFACE IMPORTED)\n set_property(TARGET {alias} PROPERTY INTERFACE_LINK_LIBRARIES {aliased})\n endif()\n \"\"\".format(alias=alias, aliased=aliased))\n tools.save(module_file, content)\n\n @property\n def _module_subfolder(self):\n return os.path.join(\"lib\", \"cmake\")\n\n @property\n def _module_file_rel_path(self):\n return os.path.join(self._module_subfolder,\n \"conan-official-{}-targets.cmake\".format(self.name))\n\n @property\n def _obj_name(self):\n name = \"mimalloc\"\n if self.options.secure:\n name += \"-secure\"\n if self.settings.build_type not in (\"Release\", \"RelWithDebInfo\", \"MinSizeRel\"):\n name += \"-{}\".format(str(self.settings.build_type).lower())\n return name\n\n @property\n def _lib_name(self):\n name = \"mimalloc\" if self.settings.os == \"Windows\" else \"libmimalloc\"\n\n if self.settings.os == \"Windows\" and not self.options.shared:\n name += \"-static\"\n if self.options.secure:\n name += \"-secure\"\n if self.settings.build_type not in (\"Release\", \"RelWithDebInfo\", \"MinSizeRel\"):\n name += \"-{}\".format(str(self.settings.build_type).lower())\n return name\n\n def package_info(self):\n self.cpp_info.set_property(\"cmake_file_name\", \"mimalloc\")\n self.cpp_info.set_property(\"cmake_target_name\", \"mimalloc\" if self.options.shared else \"mimalloc-static\")\n\n self.cpp_info.names[\"cmake_find_package\"] = \"mimalloc\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"mimalloc\"\n self.cpp_info.builddirs.append(self._module_subfolder)\n self.cpp_info.build_modules[\"cmake_find_package\"] = [self._module_file_rel_path]\n self.cpp_info.build_modules[\"cmake_find_package_multi\"] = [self._module_file_rel_path]\n\n if self.options.get_safe(\"inject\"):\n self.cpp_info.includedirs = []\n self.cpp_info.libdirs = []\n self.cpp_info.resdirs = []\n return\n\n if self.options.get_safe(\"single_object\"):\n obj_ext = \"o\"\n obj_file = \"{}.{}\".format(self._obj_name, obj_ext)\n obj_path = os.path.join(self.package_folder, \"lib\", obj_file)\n self.cpp_info.exelinkflags = [obj_path]\n self.cpp_info.sharedlinkflags = [obj_path]\n self.cpp_info.libdirs = []\n self.cpp_info.bindirs = []\n else:\n self.cpp_info.libs = tools.collect_libs(self)\n\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n if not self.options.shared:\n if self.settings.os == \"Windows\":\n self.cpp_info.system_libs.extend([\"psapi\", \"shell32\", \"user32\", \"bcrypt\"])\n elif self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"rt\")\n", "path": "recipes/mimalloc/all/conanfile.py"}], "after_files": [{"content": "from conans import ConanFile, CMake, tools\nfrom conans.errors import ConanInvalidConfiguration\nimport os\nimport shutil\nimport textwrap\n\nrequired_conan_version = \">=1.43.0\"\n\n\nclass MimallocConan(ConanFile):\n name = \"mimalloc\"\n license = \"MIT\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/microsoft/mimalloc\"\n description = \"mimalloc 
is a compact general purpose allocator with excellent performance.\"\n topics = (\"mimalloc\", \"allocator\", \"performance\", \"microsoft\")\n\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"secure\": [True, False],\n \"override\": [True, False],\n \"inject\": [True, False],\n \"single_object\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"secure\": False,\n \"override\": False,\n \"inject\": False,\n \"single_object\": False,\n }\n\n generators = \"cmake\"\n _cmake = None\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _build_subfolder(self):\n return \"build_subfolder\"\n\n @property\n def _compilers_minimum_version(self):\n return {\n \"gcc\": \"7\",\n \"Visual Studio\": \"15\",\n \"clang\": \"5\",\n \"apple-clang\": \"10\",\n }\n\n def export_sources(self):\n self.copy(\"CMakeLists.txt\")\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n self.copy(patch[\"patch_file\"])\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n # single_object and inject are options\n # only when overriding on Unix-like platforms:\n if self.settings.compiler == \"Visual Studio\":\n del self.options.single_object\n del self.options.inject\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n\n # single_object is valid only for static\n # override:\n if self.options.get_safe(\"single_object\"):\n del self.options.single_object\n\n # inject is valid only for Unix-like dynamic override:\n if not self.options.shared and self.options.get_safe(\"inject\"):\n del self.options.inject\n\n # single_object and inject are valid only when\n # overriding on Unix-like platforms:\n if not self.options.override:\n if self.options.get_safe(\"single_object\"):\n del self.options.single_object\n if self.options.get_safe(\"inject\"):\n del self.options.inject\n\n def validate(self):\n # Shared overriding requires dynamic runtime for MSVC:\n if self.options.override and \\\n self.options.shared and \\\n self.settings.compiler == \"Visual Studio\" and \\\n \"MT\" in str(self.settings.compiler.runtime):\n raise ConanInvalidConfiguration(\n \"Dynamic runtime (MD/MDd) is required when using mimalloc as a shared library for override\")\n\n if self.options.override and \\\n self.options.get_safe(\"single_object\") and \\\n self.options.get_safe(\"inject\"):\n raise ConanInvalidConfiguration(\"Single object is incompatible with library injection\")\n\n if self.settings.compiler.get_safe(\"cppstd\"):\n tools.check_min_cppstd(self, \"17\")\n\n minimum_version = self._compilers_minimum_version.get(str(self.settings.compiler), False)\n\n if not minimum_version:\n self.output.warn(\"mimalloc requires C++17. Your compiler is unknown. 
Assuming it supports C++17.\")\n elif tools.Version(self.settings.compiler.version) < minimum_version:\n raise ConanInvalidConfiguration(\"mimalloc requires a compiler that supports at least C++17\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n if self._cmake.is_multi_configuration:\n self._cmake.definitions[\"CMAKE_BUILD_TYPE\"] = self.settings.build_type\n self._cmake.definitions[\"MI_BUILD_TESTS\"] = \"OFF\"\n self._cmake.definitions[\"MI_BUILD_SHARED\"] = self.options.shared\n self._cmake.definitions[\"MI_BUILD_STATIC\"] = not self.options.shared\n self._cmake.definitions[\"MI_BUILD_OBJECT\"] = self.options.get_safe(\"single_object\", False)\n self._cmake.definitions[\"MI_OVERRIDE\"] = \"ON\" if self.options.override else \"OFF\"\n self._cmake.definitions[\"MI_SECURE\"] = \"ON\" if self.options.secure else \"OFF\"\n if tools.Version(self.version) >= \"1.7.0\":\n self._cmake.definitions[\"MI_INSTALL_TOPLEVEL\"] = \"ON\"\n self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n\n def build(self):\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n tools.patch(**patch)\n if self.settings.compiler == \"Visual Studio\" and self.settings.arch == \"x86\":\n tools.replace_path_in_file(os.path.join(self._source_subfolder, \"CMakeLists.txt\"),\n \"mimalloc-redirect.lib\", \"mimalloc-redirect32.lib\")\n with tools.vcvars(self.settings) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n with tools.vcvars(self.settings) if self.settings.compiler == \"Visual Studio\" else tools.no_op():\n cmake = self._configure_cmake()\n cmake.install()\n\n tools.rmdir(os.path.join(self.package_folder, \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n\n if self.options.get_safe(\"single_object\"):\n tools.remove_files_by_mask(os.path.join(self.package_folder, \"lib\"),\n \"*.a\")\n shutil.move(os.path.join(self.package_folder, self._obj_name + \".o\"),\n os.path.join(self.package_folder, \"lib\"))\n shutil.copy(os.path.join(self.package_folder, \"lib\", self._obj_name + \".o\"),\n os.path.join(self.package_folder, \"lib\", self._obj_name))\n\n if self.settings.os == \"Windows\" and self.options.shared:\n if self.settings.arch == \"x86_64\":\n self.copy(\"mimalloc-redirect.dll\", src=os.path.join(self._source_subfolder, \"bin\"),\n dst=\"bin\")\n elif self.settings.arch == \"x86\":\n self.copy(\"mimalloc-redirect32.dll\", src=os.path.join(self._source_subfolder, \"bin\"),\n dst=\"bin\")\n\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n\n cmake_target = \"mimalloc\" if self.options.shared else \"mimalloc-static\"\n self._create_cmake_module_alias_targets(\n os.path.join(self.package_folder, self._module_file_rel_path),\n {cmake_target: \"mimalloc::mimalloc\"}\n )\n\n @staticmethod\n def _create_cmake_module_alias_targets(module_file, targets):\n content = \"\"\n for alias, aliased in targets.items():\n content += textwrap.dedent(\"\"\"\\\n if(TARGET {aliased} AND NOT TARGET {alias})\n add_library({alias} INTERFACE IMPORTED)\n set_property(TARGET {alias} PROPERTY INTERFACE_LINK_LIBRARIES {aliased})\n endif()\n \"\"\".format(alias=alias, aliased=aliased))\n 
tools.save(module_file, content)\n\n @property\n def _module_subfolder(self):\n return os.path.join(\"lib\", \"cmake\")\n\n @property\n def _module_file_rel_path(self):\n return os.path.join(self._module_subfolder,\n \"conan-official-{}-targets.cmake\".format(self.name))\n\n @property\n def _obj_name(self):\n name = \"mimalloc\"\n if self.options.secure:\n name += \"-secure\"\n if self.settings.build_type not in (\"Release\", \"RelWithDebInfo\", \"MinSizeRel\"):\n name += \"-{}\".format(str(self.settings.build_type).lower())\n return name\n\n @property\n def _lib_name(self):\n name = \"mimalloc\" if self.settings.os == \"Windows\" else \"libmimalloc\"\n\n if self.settings.os == \"Windows\" and not self.options.shared:\n name += \"-static\"\n if self.options.secure:\n name += \"-secure\"\n if self.settings.build_type not in (\"Release\", \"RelWithDebInfo\", \"MinSizeRel\"):\n name += \"-{}\".format(str(self.settings.build_type).lower())\n return name\n\n def package_info(self):\n self.cpp_info.set_property(\"cmake_file_name\", \"mimalloc\")\n self.cpp_info.set_property(\"cmake_target_name\", \"mimalloc\" if self.options.shared else \"mimalloc-static\")\n\n self.cpp_info.names[\"cmake_find_package\"] = \"mimalloc\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"mimalloc\"\n self.cpp_info.builddirs.append(self._module_subfolder)\n self.cpp_info.build_modules[\"cmake_find_package\"] = [self._module_file_rel_path]\n self.cpp_info.build_modules[\"cmake_find_package_multi\"] = [self._module_file_rel_path]\n\n if self.options.get_safe(\"inject\"):\n self.cpp_info.includedirs = []\n self.cpp_info.libdirs = []\n self.cpp_info.resdirs = []\n return\n\n if self.options.get_safe(\"single_object\"):\n obj_ext = \"o\"\n obj_file = \"{}.{}\".format(self._obj_name, obj_ext)\n obj_path = os.path.join(self.package_folder, \"lib\", obj_file)\n self.cpp_info.exelinkflags = [obj_path]\n self.cpp_info.sharedlinkflags = [obj_path]\n self.cpp_info.libdirs = []\n self.cpp_info.bindirs = []\n else:\n self.cpp_info.libs = tools.collect_libs(self)\n\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n if not self.options.shared:\n if self.settings.os == \"Windows\":\n self.cpp_info.system_libs.extend([\"psapi\", \"shell32\", \"user32\", \"bcrypt\"])\n elif self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"rt\")\n", "path": "recipes/mimalloc/all/conanfile.py"}]}
| 3,361 | 146 |
gh_patches_debug_12198
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-2549
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nistats: dependence between function calls – strange bug may affect reproducibility
Hi,
I've encountered very strange behavior of two functions – `map_threshold` from the `nistats.thresholding` module and `get_clusters_table` from the `nistats.reporting` module. It seems like the threshold value returned by `map_threshold` depends on the last output value(s) of the `get_clusters_table` function or some of its internal variables. The impact seems to be severe (the higher the `cluster_threshold` value passed to `get_clusters_table`, the higher the threshold returned by a subsequent call of `map_threshold`).
Here is a simple demonstration: https://github.com/kbonna/decidenet/blob/master/activation_analysis/nistats_map_threshold_bug.ipynb
I've tried to find the source of the problem by studying the code of both functions, but everything seems to be OK at first glance.
Am I missing something simple?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nilearn/reporting/_get_clusters_table.py`
Content:
```
1 """
2 This module implements plotting functions useful to report analysis results.
3
4 Author: Martin Perez-Guevara, Elvis Dohmatob, 2017
5 """
6
7 import warnings
8 from string import ascii_lowercase
9
10 import numpy as np
11 import pandas as pd
12 import nibabel as nib
13 from scipy import ndimage
14
15 from nilearn.image import get_data
16 from nilearn.image.resampling import coord_transform
17
18
19 def _local_max(data, affine, min_distance):
20 """Find all local maxima of the array, separated by at least min_distance.
21 Adapted from https://stackoverflow.com/a/22631583/2589328
22
23 Parameters
24 ----------
25 data : array_like
26 3D array of with masked values for cluster.
27
28 affine: np.ndarray
29 Square matrix specifying the position of the image array data
30 in a reference space.
31
32 min_distance : `int`
33 Minimum distance between local maxima in ``data``, in terms of mm.
34
35 Returns
36 -------
37 ijk : `numpy.ndarray`
38 (n_foci, 3) array of local maxima indices for cluster.
39
40 vals : `numpy.ndarray`
41 (n_foci,) array of values from data at ijk.
42 """
43 ijk, vals = _identify_subpeaks(data)
44 xyz, ijk, vals = _sort_subpeaks(ijk, vals, affine)
45 ijk, vals = _pare_subpeaks(xyz, ijk, vals, min_distance)
46 return ijk, vals
47
48
49 def _identify_subpeaks(data):
50 # Initial identification of subpeaks with minimal minimum distance
51 data_max = ndimage.filters.maximum_filter(data, 3)
52 maxima = (data == data_max)
53 data_min = ndimage.filters.minimum_filter(data, 3)
54 diff = ((data_max - data_min) > 0)
55 maxima[diff == 0] = 0
56
57 labeled, n_subpeaks = ndimage.label(maxima)
58 labels_index = range(1, n_subpeaks + 1)
59 ijk = np.array(ndimage.center_of_mass(data, labeled, labels_index))
60 ijk = np.round(ijk).astype(int)
61 vals = np.apply_along_axis(arr=ijk, axis=1, func1d=_get_val,
62 input_arr=data)
63 return ijk, vals
64
65
66 def _sort_subpeaks(ijk, vals, affine):
67 # Sort subpeaks in cluster in descending order of stat value
68 order = (-vals).argsort()
69 vals = vals[order]
70 ijk = ijk[order, :]
71 xyz = nib.affines.apply_affine(affine, ijk) # Convert to xyz in mm
72 return xyz, ijk, vals
73
74
75 def _pare_subpeaks(xyz, ijk, vals, min_distance):
76 # Reduce list of subpeaks based on distance
77 keep_idx = np.ones(xyz.shape[0]).astype(bool)
78 for i in range(xyz.shape[0]):
79 for j in range(i + 1, xyz.shape[0]):
80 if keep_idx[i] == 1:
81 dist = np.linalg.norm(xyz[i, :] - xyz[j, :])
82 keep_idx[j] = dist > min_distance
83 ijk = ijk[keep_idx, :]
84 vals = vals[keep_idx]
85 return ijk, vals
86
87
88 def _get_val(row, input_arr):
89 """Small function for extracting values from array based on index.
90 """
91 i, j, k = row
92 return input_arr[i, j, k]
93
94
95 def get_clusters_table(stat_img, stat_threshold, cluster_threshold=None,
96 min_distance=8.):
97 """Creates pandas dataframe with img cluster statistics.
98
99 Parameters
100 ----------
101 stat_img : Niimg-like object,
102 Statistical image (presumably in z- or p-scale).
103
104 stat_threshold: `float`
105 Cluster forming threshold in same scale as `stat_img` (either a
106 p-value or z-scale value).
107
108 cluster_threshold : `int` or `None`, optional
109 Cluster size threshold, in voxels.
110
111 min_distance: `float`, optional
112 Minimum distance between subpeaks in mm. Default is 8 mm.
113
114 Returns
115 -------
116 df : `pandas.DataFrame`
117 Table with peaks and subpeaks from thresholded `stat_img`. For binary
118 clusters (clusters with >1 voxel containing only one value), the table
119 reports the center of mass of the cluster,
120 rather than any peaks/subpeaks.
121 """
122 cols = ['Cluster ID', 'X', 'Y', 'Z', 'Peak Stat', 'Cluster Size (mm3)']
123 stat_map = get_data(stat_img)
124 conn_mat = np.zeros((3, 3, 3), int) # 6-connectivity, aka NN1 or "faces"
125 conn_mat[1, 1, :] = 1
126 conn_mat[1, :, 1] = 1
127 conn_mat[:, 1, 1] = 1
128 voxel_size = np.prod(stat_img.header.get_zooms())
129
130 # Binarize using CDT
131 binarized = stat_map > stat_threshold
132 binarized = binarized.astype(int)
133
134 # If the stat threshold is too high simply return an empty dataframe
135 if np.sum(binarized) == 0:
136 warnings.warn('Attention: No clusters with stat higher than %f' %
137 stat_threshold)
138 return pd.DataFrame(columns=cols)
139
140 # Extract connected components above cluster size threshold
141 label_map = ndimage.measurements.label(binarized, conn_mat)[0]
142 clust_ids = sorted(list(np.unique(label_map)[1:]))
143 for c_val in clust_ids:
144 if cluster_threshold is not None and np.sum(
145 label_map == c_val) < cluster_threshold:
146 stat_map[label_map == c_val] = 0
147 binarized[label_map == c_val] = 0
148
149 # If the cluster threshold is too high simply return an empty dataframe
150 # this checks for stats higher than threshold after small clusters
151 # were removed from stat_map
152 if np.sum(stat_map > stat_threshold) == 0:
153 warnings.warn('Attention: No clusters with more than %d voxels' %
154 cluster_threshold)
155 return pd.DataFrame(columns=cols)
156
157 # Now re-label and create table
158 label_map = ndimage.measurements.label(binarized, conn_mat)[0]
159 clust_ids = sorted(list(np.unique(label_map)[1:]))
160 peak_vals = np.array(
161 [np.max(stat_map * (label_map == c)) for c in clust_ids])
162 clust_ids = [clust_ids[c] for c in
163 (-peak_vals).argsort()] # Sort by descending max value
164
165 rows = []
166 for c_id, c_val in enumerate(clust_ids):
167 cluster_mask = label_map == c_val
168 masked_data = stat_map * cluster_mask
169
170 cluster_size_mm = int(np.sum(cluster_mask) * voxel_size)
171
172 # Get peaks, subpeaks and associated statistics
173 subpeak_ijk, subpeak_vals = _local_max(masked_data, stat_img.affine,
174 min_distance=min_distance)
175 subpeak_xyz = np.asarray(coord_transform(subpeak_ijk[:, 0],
176 subpeak_ijk[:, 1],
177 subpeak_ijk[:, 2],
178 stat_img.affine)).tolist()
179 subpeak_xyz = np.array(subpeak_xyz).T
180
181 # Only report peak and, at most, top 3 subpeaks.
182 n_subpeaks = np.min((len(subpeak_vals), 4))
183 for subpeak in range(n_subpeaks):
184 if subpeak == 0:
185 row = [c_id + 1, subpeak_xyz[subpeak, 0],
186 subpeak_xyz[subpeak, 1], subpeak_xyz[subpeak, 2],
187 subpeak_vals[subpeak], cluster_size_mm]
188 else:
189 # Subpeak naming convention is cluster num+letter: 1a, 1b, etc
190 sp_id = '{0}{1}'.format(c_id + 1, ascii_lowercase[subpeak - 1])
191 row = [sp_id, subpeak_xyz[subpeak, 0], subpeak_xyz[subpeak, 1],
192 subpeak_xyz[subpeak, 2], subpeak_vals[subpeak], '']
193 rows += [row]
194 df = pd.DataFrame(columns=cols, data=rows)
195 return df
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nilearn/reporting/_get_clusters_table.py b/nilearn/reporting/_get_clusters_table.py
--- a/nilearn/reporting/_get_clusters_table.py
+++ b/nilearn/reporting/_get_clusters_table.py
@@ -120,7 +120,14 @@
rather than any peaks/subpeaks.
"""
cols = ['Cluster ID', 'X', 'Y', 'Z', 'Peak Stat', 'Cluster Size (mm3)']
- stat_map = get_data(stat_img)
+
+ # If cluster threshold is used, there is chance that stat_map will be
+ # modified, therefore copy is needed
+ if cluster_threshold is None:
+ stat_map = get_data(stat_img)
+ else:
+ stat_map = get_data(stat_img).copy()
+
conn_mat = np.zeros((3, 3, 3), int) # 6-connectivity, aka NN1 or "faces"
conn_mat[1, 1, :] = 1
conn_mat[1, :, 1] = 1
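Why the copy fixes the reported cross-call dependence: the array returned by `get_data(stat_img)` is (or can be) the image's cached data rather than an independent copy, and the loop that zeroes out sub-threshold clusters modifies it in place, so a later `map_threshold` call on the same `stat_img` sees the already-altered values. The sketch below is plain NumPy, not the actual nilearn internals, and only illustrates the aliasing:

```python
import numpy as np

def cluster_table_like(stat_map, threshold):
    """Mimics the pre-patch behaviour: zeroes sub-threshold values *in place*."""
    stat_map[stat_map < threshold] = 0
    return stat_map.sum()

data = np.array([0.5, 1.5, 3.0])
shared = data                      # analogous to working directly on the image's array
cluster_table_like(shared, 1.0)
print(data)                        # [0.  1.5 3. ]  -> the caller's data has changed

data = np.array([0.5, 1.5, 3.0])
safe = data.copy()                 # what the patch does when cluster_threshold is set
cluster_table_like(safe, 1.0)
print(data)                        # [0.5 1.5 3. ]  -> original left untouched
```

Copying only when `cluster_threshold` is set keeps memory use unchanged on the common, non-mutating code path.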
|
{"golden_diff": "diff --git a/nilearn/reporting/_get_clusters_table.py b/nilearn/reporting/_get_clusters_table.py\n--- a/nilearn/reporting/_get_clusters_table.py\n+++ b/nilearn/reporting/_get_clusters_table.py\n@@ -120,7 +120,14 @@\n rather than any peaks/subpeaks.\n \"\"\"\n cols = ['Cluster ID', 'X', 'Y', 'Z', 'Peak Stat', 'Cluster Size (mm3)']\n- stat_map = get_data(stat_img)\n+\n+ # If cluster threshold is used, there is chance that stat_map will be\n+ # modified, therefore copy is needed\n+ if cluster_threshold is None:\n+ stat_map = get_data(stat_img)\n+ else:\n+ stat_map = get_data(stat_img).copy()\n+\n conn_mat = np.zeros((3, 3, 3), int) # 6-connectivity, aka NN1 or \"faces\"\n conn_mat[1, 1, :] = 1\n conn_mat[1, :, 1] = 1\n", "issue": "nistats: dependence between function calls \u2013 strange bug may affect reproducibility\nHi,\r\n\r\nI've encountered very strange behavior of two functions \u2013 `map_threshold` from `nistats.thresholding` module and `get_clusters_table` from `nistats.reporting` module. It seems like the threshold value returned by `map_threshold` depends on the last output value(s) of `get_clusters_table` function or some internal function variables. Impact seems to be severe (the higher the `cluster_threshold` value passed to `get_cluster_table`, the higher the threshold returned by subsequent call of `map_threshold`). \r\n\r\nHere is a simple demonstration: https://github.com/kbonna/decidenet/blob/master/activation_analysis/nistats_map_threshold_bug.ipynb\r\n\r\nI've tried to find a source of the problem studying code of both functions, but I everything seems to be ok at first glance. \r\n\r\nAm I missing something simple?\n", "before_files": [{"content": "\"\"\"\nThis module implements plotting functions useful to report analysis results.\n\nAuthor: Martin Perez-Guevara, Elvis Dohmatob, 2017\n\"\"\"\n\nimport warnings\nfrom string import ascii_lowercase\n\nimport numpy as np\nimport pandas as pd\nimport nibabel as nib\nfrom scipy import ndimage\n\nfrom nilearn.image import get_data\nfrom nilearn.image.resampling import coord_transform\n\n\ndef _local_max(data, affine, min_distance):\n \"\"\"Find all local maxima of the array, separated by at least min_distance.\n Adapted from https://stackoverflow.com/a/22631583/2589328\n\n Parameters\n ----------\n data : array_like\n 3D array of with masked values for cluster.\n\n affine: np.ndarray\n Square matrix specifying the position of the image array data\n in a reference space.\n\n min_distance : `int`\n Minimum distance between local maxima in ``data``, in terms of mm.\n\n Returns\n -------\n ijk : `numpy.ndarray`\n (n_foci, 3) array of local maxima indices for cluster.\n\n vals : `numpy.ndarray`\n (n_foci,) array of values from data at ijk.\n \"\"\"\n ijk, vals = _identify_subpeaks(data)\n xyz, ijk, vals = _sort_subpeaks(ijk, vals, affine)\n ijk, vals = _pare_subpeaks(xyz, ijk, vals, min_distance)\n return ijk, vals\n\n\ndef _identify_subpeaks(data):\n # Initial identification of subpeaks with minimal minimum distance\n data_max = ndimage.filters.maximum_filter(data, 3)\n maxima = (data == data_max)\n data_min = ndimage.filters.minimum_filter(data, 3)\n diff = ((data_max - data_min) > 0)\n maxima[diff == 0] = 0\n\n labeled, n_subpeaks = ndimage.label(maxima)\n labels_index = range(1, n_subpeaks + 1)\n ijk = np.array(ndimage.center_of_mass(data, labeled, labels_index))\n ijk = np.round(ijk).astype(int)\n vals = np.apply_along_axis(arr=ijk, axis=1, func1d=_get_val,\n input_arr=data)\n return ijk, vals\n\n\ndef 
_sort_subpeaks(ijk, vals, affine):\n # Sort subpeaks in cluster in descending order of stat value\n order = (-vals).argsort()\n vals = vals[order]\n ijk = ijk[order, :]\n xyz = nib.affines.apply_affine(affine, ijk) # Convert to xyz in mm\n return xyz, ijk, vals\n\n\ndef _pare_subpeaks(xyz, ijk, vals, min_distance):\n # Reduce list of subpeaks based on distance\n keep_idx = np.ones(xyz.shape[0]).astype(bool)\n for i in range(xyz.shape[0]):\n for j in range(i + 1, xyz.shape[0]):\n if keep_idx[i] == 1:\n dist = np.linalg.norm(xyz[i, :] - xyz[j, :])\n keep_idx[j] = dist > min_distance\n ijk = ijk[keep_idx, :]\n vals = vals[keep_idx]\n return ijk, vals\n\n\ndef _get_val(row, input_arr):\n \"\"\"Small function for extracting values from array based on index.\n \"\"\"\n i, j, k = row\n return input_arr[i, j, k]\n\n\ndef get_clusters_table(stat_img, stat_threshold, cluster_threshold=None,\n min_distance=8.):\n \"\"\"Creates pandas dataframe with img cluster statistics.\n\n Parameters\n ----------\n stat_img : Niimg-like object,\n Statistical image (presumably in z- or p-scale).\n\n stat_threshold: `float`\n Cluster forming threshold in same scale as `stat_img` (either a\n p-value or z-scale value).\n\n cluster_threshold : `int` or `None`, optional\n Cluster size threshold, in voxels.\n\n min_distance: `float`, optional\n Minimum distance between subpeaks in mm. Default is 8 mm.\n\n Returns\n -------\n df : `pandas.DataFrame`\n Table with peaks and subpeaks from thresholded `stat_img`. For binary\n clusters (clusters with >1 voxel containing only one value), the table\n reports the center of mass of the cluster,\n rather than any peaks/subpeaks.\n \"\"\"\n cols = ['Cluster ID', 'X', 'Y', 'Z', 'Peak Stat', 'Cluster Size (mm3)']\n stat_map = get_data(stat_img)\n conn_mat = np.zeros((3, 3, 3), int) # 6-connectivity, aka NN1 or \"faces\"\n conn_mat[1, 1, :] = 1\n conn_mat[1, :, 1] = 1\n conn_mat[:, 1, 1] = 1\n voxel_size = np.prod(stat_img.header.get_zooms())\n\n # Binarize using CDT\n binarized = stat_map > stat_threshold\n binarized = binarized.astype(int)\n\n # If the stat threshold is too high simply return an empty dataframe\n if np.sum(binarized) == 0:\n warnings.warn('Attention: No clusters with stat higher than %f' %\n stat_threshold)\n return pd.DataFrame(columns=cols)\n\n # Extract connected components above cluster size threshold\n label_map = ndimage.measurements.label(binarized, conn_mat)[0]\n clust_ids = sorted(list(np.unique(label_map)[1:]))\n for c_val in clust_ids:\n if cluster_threshold is not None and np.sum(\n label_map == c_val) < cluster_threshold:\n stat_map[label_map == c_val] = 0\n binarized[label_map == c_val] = 0\n\n # If the cluster threshold is too high simply return an empty dataframe\n # this checks for stats higher than threshold after small clusters\n # were removed from stat_map\n if np.sum(stat_map > stat_threshold) == 0:\n warnings.warn('Attention: No clusters with more than %d voxels' %\n cluster_threshold)\n return pd.DataFrame(columns=cols)\n\n # Now re-label and create table\n label_map = ndimage.measurements.label(binarized, conn_mat)[0]\n clust_ids = sorted(list(np.unique(label_map)[1:]))\n peak_vals = np.array(\n [np.max(stat_map * (label_map == c)) for c in clust_ids])\n clust_ids = [clust_ids[c] for c in\n (-peak_vals).argsort()] # Sort by descending max value\n\n rows = []\n for c_id, c_val in enumerate(clust_ids):\n cluster_mask = label_map == c_val\n masked_data = stat_map * cluster_mask\n\n cluster_size_mm = int(np.sum(cluster_mask) * voxel_size)\n\n # 
Get peaks, subpeaks and associated statistics\n subpeak_ijk, subpeak_vals = _local_max(masked_data, stat_img.affine,\n min_distance=min_distance)\n subpeak_xyz = np.asarray(coord_transform(subpeak_ijk[:, 0],\n subpeak_ijk[:, 1],\n subpeak_ijk[:, 2],\n stat_img.affine)).tolist()\n subpeak_xyz = np.array(subpeak_xyz).T\n\n # Only report peak and, at most, top 3 subpeaks.\n n_subpeaks = np.min((len(subpeak_vals), 4))\n for subpeak in range(n_subpeaks):\n if subpeak == 0:\n row = [c_id + 1, subpeak_xyz[subpeak, 0],\n subpeak_xyz[subpeak, 1], subpeak_xyz[subpeak, 2],\n subpeak_vals[subpeak], cluster_size_mm]\n else:\n # Subpeak naming convention is cluster num+letter: 1a, 1b, etc\n sp_id = '{0}{1}'.format(c_id + 1, ascii_lowercase[subpeak - 1])\n row = [sp_id, subpeak_xyz[subpeak, 0], subpeak_xyz[subpeak, 1],\n subpeak_xyz[subpeak, 2], subpeak_vals[subpeak], '']\n rows += [row]\n df = pd.DataFrame(columns=cols, data=rows)\n return df\n", "path": "nilearn/reporting/_get_clusters_table.py"}], "after_files": [{"content": "\"\"\"\nThis module implements plotting functions useful to report analysis results.\n\nAuthor: Martin Perez-Guevara, Elvis Dohmatob, 2017\n\"\"\"\n\nimport warnings\nfrom string import ascii_lowercase\n\nimport numpy as np\nimport pandas as pd\nimport nibabel as nib\nfrom scipy import ndimage\n\nfrom nilearn.image import get_data\nfrom nilearn.image.resampling import coord_transform\n\n\ndef _local_max(data, affine, min_distance):\n \"\"\"Find all local maxima of the array, separated by at least min_distance.\n Adapted from https://stackoverflow.com/a/22631583/2589328\n\n Parameters\n ----------\n data : array_like\n 3D array of with masked values for cluster.\n\n affine: np.ndarray\n Square matrix specifying the position of the image array data\n in a reference space.\n\n min_distance : `int`\n Minimum distance between local maxima in ``data``, in terms of mm.\n\n Returns\n -------\n ijk : `numpy.ndarray`\n (n_foci, 3) array of local maxima indices for cluster.\n\n vals : `numpy.ndarray`\n (n_foci,) array of values from data at ijk.\n \"\"\"\n ijk, vals = _identify_subpeaks(data)\n xyz, ijk, vals = _sort_subpeaks(ijk, vals, affine)\n ijk, vals = _pare_subpeaks(xyz, ijk, vals, min_distance)\n return ijk, vals\n\n\ndef _identify_subpeaks(data):\n # Initial identification of subpeaks with minimal minimum distance\n data_max = ndimage.filters.maximum_filter(data, 3)\n maxima = (data == data_max)\n data_min = ndimage.filters.minimum_filter(data, 3)\n diff = ((data_max - data_min) > 0)\n maxima[diff == 0] = 0\n\n labeled, n_subpeaks = ndimage.label(maxima)\n labels_index = range(1, n_subpeaks + 1)\n ijk = np.array(ndimage.center_of_mass(data, labeled, labels_index))\n ijk = np.round(ijk).astype(int)\n vals = np.apply_along_axis(arr=ijk, axis=1, func1d=_get_val,\n input_arr=data)\n return ijk, vals\n\n\ndef _sort_subpeaks(ijk, vals, affine):\n # Sort subpeaks in cluster in descending order of stat value\n order = (-vals).argsort()\n vals = vals[order]\n ijk = ijk[order, :]\n xyz = nib.affines.apply_affine(affine, ijk) # Convert to xyz in mm\n return xyz, ijk, vals\n\n\ndef _pare_subpeaks(xyz, ijk, vals, min_distance):\n # Reduce list of subpeaks based on distance\n keep_idx = np.ones(xyz.shape[0]).astype(bool)\n for i in range(xyz.shape[0]):\n for j in range(i + 1, xyz.shape[0]):\n if keep_idx[i] == 1:\n dist = np.linalg.norm(xyz[i, :] - xyz[j, :])\n keep_idx[j] = dist > min_distance\n ijk = ijk[keep_idx, :]\n vals = vals[keep_idx]\n return ijk, vals\n\n\ndef _get_val(row, 
input_arr):\n \"\"\"Small function for extracting values from array based on index.\n \"\"\"\n i, j, k = row\n return input_arr[i, j, k]\n\n\ndef get_clusters_table(stat_img, stat_threshold, cluster_threshold=None,\n min_distance=8.):\n \"\"\"Creates pandas dataframe with img cluster statistics.\n\n Parameters\n ----------\n stat_img : Niimg-like object,\n Statistical image (presumably in z- or p-scale).\n\n stat_threshold: `float`\n Cluster forming threshold in same scale as `stat_img` (either a\n p-value or z-scale value).\n\n cluster_threshold : `int` or `None`, optional\n Cluster size threshold, in voxels.\n\n min_distance: `float`, optional\n Minimum distance between subpeaks in mm. Default is 8 mm.\n\n Returns\n -------\n df : `pandas.DataFrame`\n Table with peaks and subpeaks from thresholded `stat_img`. For binary\n clusters (clusters with >1 voxel containing only one value), the table\n reports the center of mass of the cluster,\n rather than any peaks/subpeaks.\n \"\"\"\n cols = ['Cluster ID', 'X', 'Y', 'Z', 'Peak Stat', 'Cluster Size (mm3)']\n\n # If cluster threshold is used, there is chance that stat_map will be\n # modified, therefore copy is needed\n if cluster_threshold is None:\n stat_map = get_data(stat_img)\n else:\n stat_map = get_data(stat_img).copy()\n\n conn_mat = np.zeros((3, 3, 3), int) # 6-connectivity, aka NN1 or \"faces\"\n conn_mat[1, 1, :] = 1\n conn_mat[1, :, 1] = 1\n conn_mat[:, 1, 1] = 1\n voxel_size = np.prod(stat_img.header.get_zooms())\n\n # Binarize using CDT\n binarized = stat_map > stat_threshold\n binarized = binarized.astype(int)\n\n # If the stat threshold is too high simply return an empty dataframe\n if np.sum(binarized) == 0:\n warnings.warn('Attention: No clusters with stat higher than %f' %\n stat_threshold)\n return pd.DataFrame(columns=cols)\n\n # Extract connected components above cluster size threshold\n label_map = ndimage.measurements.label(binarized, conn_mat)[0]\n clust_ids = sorted(list(np.unique(label_map)[1:]))\n for c_val in clust_ids:\n if cluster_threshold is not None and np.sum(\n label_map == c_val) < cluster_threshold:\n stat_map[label_map == c_val] = 0\n binarized[label_map == c_val] = 0\n\n # If the cluster threshold is too high simply return an empty dataframe\n # this checks for stats higher than threshold after small clusters\n # were removed from stat_map\n if np.sum(stat_map > stat_threshold) == 0:\n warnings.warn('Attention: No clusters with more than %d voxels' %\n cluster_threshold)\n return pd.DataFrame(columns=cols)\n\n # Now re-label and create table\n label_map = ndimage.measurements.label(binarized, conn_mat)[0]\n clust_ids = sorted(list(np.unique(label_map)[1:]))\n peak_vals = np.array(\n [np.max(stat_map * (label_map == c)) for c in clust_ids])\n clust_ids = [clust_ids[c] for c in\n (-peak_vals).argsort()] # Sort by descending max value\n\n rows = []\n for c_id, c_val in enumerate(clust_ids):\n cluster_mask = label_map == c_val\n masked_data = stat_map * cluster_mask\n\n cluster_size_mm = int(np.sum(cluster_mask) * voxel_size)\n\n # Get peaks, subpeaks and associated statistics\n subpeak_ijk, subpeak_vals = _local_max(masked_data, stat_img.affine,\n min_distance=min_distance)\n subpeak_xyz = np.asarray(coord_transform(subpeak_ijk[:, 0],\n subpeak_ijk[:, 1],\n subpeak_ijk[:, 2],\n stat_img.affine)).tolist()\n subpeak_xyz = np.array(subpeak_xyz).T\n\n # Only report peak and, at most, top 3 subpeaks.\n n_subpeaks = np.min((len(subpeak_vals), 4))\n for subpeak in range(n_subpeaks):\n if subpeak == 0:\n row = [c_id 
+ 1, subpeak_xyz[subpeak, 0],\n subpeak_xyz[subpeak, 1], subpeak_xyz[subpeak, 2],\n subpeak_vals[subpeak], cluster_size_mm]\n else:\n # Subpeak naming convention is cluster num+letter: 1a, 1b, etc\n sp_id = '{0}{1}'.format(c_id + 1, ascii_lowercase[subpeak - 1])\n row = [sp_id, subpeak_xyz[subpeak, 0], subpeak_xyz[subpeak, 1],\n subpeak_xyz[subpeak, 2], subpeak_vals[subpeak], '']\n rows += [row]\n df = pd.DataFrame(columns=cols, data=rows)\n return df\n", "path": "nilearn/reporting/_get_clusters_table.py"}]}
| 2,801 | 239 |
gh_patches_debug_7894
|
rasdani/github-patches
|
git_diff
|
vega__altair-390
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pin vega version in requirements
To make sure things still work when ipyvega is updated (as it already has been)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 LONG_DESCRIPTION = """
2 Altair: A declarative statistical visualization library for Python.
3
4 http://altair-viz.github.io/
5
6 This package provides a Python API for building statistical visualizations
7 in a declarative manner. This API contains no actual visualization rendering
8 code, but instead emits JSON data structures following the `Vega-Lite`_
9 specification. For convenience, Altair can optionally use `ipyvega`_ to
10 seamlessly display client-side renderings in the Jupyter notebook.
11
12 .. image:: https://raw.githubusercontent.com/altair-viz/altair/master/images/cars.png
13
14 Please note that if you wish to use altair in the Jupyter Notebook, the
15 `ipyvega`_ notebook extension must be enabled as follows::
16
17 $ pip install altair
18 $ pip install --upgrade notebook
19 $ jupyter nbextension install --sys-prefix --py vega
20
21 See the `Altair Documentation`_ for tutorials, detailed installation
22 instructions, and examples.
23 See the `Altair Github Repository`_ for issues, bug reports, and contributions.
24
25 .. _Altair Github Repository: http://github.com/altair-viz/altair/
26 .. _Altair Documentation: http://altair-viz.github.io/
27 .. _Vega-Lite: https://github.com/vega/vega-lite
28 .. _ipyvega: https://github.com/vega/ipyvega
29 """
30
31 DESCRIPTION = "Altair: A declarative statistical visualization library for Python."
32 NAME = "altair"
33 PACKAGES = ['altair',
34 'altair.v1',
35 'altair.v1.tests',
36 'altair.v1.schema',
37 'altair.v1.schema._interface',
38 'altair.v1.schema._interface.tests',
39 'altair.v1.examples',
40 'altair.v1.examples.tests',
41 'altair.datasets',
42 'altair.datasets.tests',
43 'altair.expr',
44 'altair.expr.tests',
45 'altair.tests',
46 'altair.utils',
47 'altair.utils.tests',
48 ]
49 PACKAGE_DATA = {'altair': ['notebooks/*.ipynb',
50 'notebooks/*.html',
51 'notebooks/auto_examples/*.ipynb',
52 'v1/schema/*.json',
53 'v1/examples/*.json',
54 'v1/examples/json/*.json',
55 'datasets/*.json',
56 'expr/*.json']}
57 AUTHOR = "Brian E. Granger / Jake VanderPlas"
58 AUTHOR_EMAIL = "[email protected] / [email protected]"
59 URL = 'http://altair-viz.github.io'
60 DOWNLOAD_URL = 'http://github.com/altair-viz/altair/'
61 LICENSE = 'BSD 3-clause'
62 INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega>=0.4.4']
63
64
65 import io
66 import os
67 import re
68
69 try:
70 from setuptools import setup
71 except ImportError:
72 from distutils.core import setup
73
74
75 def read(path, encoding='utf-8'):
76 path = os.path.join(os.path.dirname(__file__), path)
77 with io.open(path, encoding=encoding) as fp:
78 return fp.read()
79
80
81 def version(path):
82 """Obtain the packge version from a python file e.g. pkg/__init__.py
83
84 See <https://packaging.python.org/en/latest/single_source_version.html>.
85 """
86 version_file = read(path)
87 version_match = re.search(r"""^__version__ = ['"]([^'"]*)['"]""",
88 version_file, re.M)
89 if version_match:
90 return version_match.group(1)
91 raise RuntimeError("Unable to find version string.")
92
93
94 VERSION = version('altair/__init__.py')
95
96
97 setup(name=NAME,
98 version=VERSION,
99 description=DESCRIPTION,
100 long_description=LONG_DESCRIPTION,
101 author=AUTHOR,
102 author_email=AUTHOR_EMAIL,
103 url=URL,
104 download_url=DOWNLOAD_URL,
105 license=LICENSE,
106 packages=PACKAGES,
107 package_data=PACKAGE_DATA,
108 install_requires=INSTALL_REQUIRES,
109 classifiers=[
110 'Development Status :: 4 - Beta',
111 'Environment :: Console',
112 'Intended Audience :: Science/Research',
113 'License :: OSI Approved :: BSD License',
114 'Natural Language :: English',
115 'Programming Language :: Python :: 2.7',
116 'Programming Language :: Python :: 3.4',
117 'Programming Language :: Python :: 3.5'],
118 )
119
```
Path: `altair/__init__.py`
Content:
```
1 __version__ = '1.3.0.dev0'
2
3 from .v1 import *
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/altair/__init__.py b/altair/__init__.py
--- a/altair/__init__.py
+++ b/altair/__init__.py
@@ -1,3 +1,3 @@
-__version__ = '1.3.0.dev0'
+__version__ = '1.2.1.dev0'
from .v1 import *
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -59,7 +59,7 @@
URL = 'http://altair-viz.github.io'
DOWNLOAD_URL = 'http://github.com/altair-viz/altair/'
LICENSE = 'BSD 3-clause'
-INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega>=0.4.4']
+INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega==0.4.4']
import io
|
{"golden_diff": "diff --git a/altair/__init__.py b/altair/__init__.py\n--- a/altair/__init__.py\n+++ b/altair/__init__.py\n@@ -1,3 +1,3 @@\n-__version__ = '1.3.0.dev0'\n+__version__ = '1.2.1.dev0'\n \n from .v1 import *\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -59,7 +59,7 @@\n URL = 'http://altair-viz.github.io'\n DOWNLOAD_URL = 'http://github.com/altair-viz/altair/'\n LICENSE = 'BSD 3-clause'\n-INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega>=0.4.4']\n+INSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega==0.4.4']\n \n \n import io\n", "issue": "Pin vega version in requirements\nTo make sure things still work when ipyvega is updated (as it already has been)\n", "before_files": [{"content": "LONG_DESCRIPTION = \"\"\"\nAltair: A declarative statistical visualization library for Python.\n\nhttp://altair-viz.github.io/\n\nThis package provides a Python API for building statistical visualizations\nin a declarative manner. This API contains no actual visualization rendering\ncode, but instead emits JSON data structures following the `Vega-Lite`_\nspecification. For convenience, Altair can optionally use `ipyvega`_ to\nseamlessly display client-side renderings in the Jupyter notebook.\n\n.. image:: https://raw.githubusercontent.com/altair-viz/altair/master/images/cars.png\n\nPlease note that if you wish to use altair in the Jupyter Notebook, the\n`ipyvega`_ notebook extension must be enabled as follows::\n\n $ pip install altair\n $ pip install --upgrade notebook\n $ jupyter nbextension install --sys-prefix --py vega\n\nSee the `Altair Documentation`_ for tutorials, detailed installation\ninstructions, and examples.\nSee the `Altair Github Repository`_ for issues, bug reports, and contributions.\n\n.. _Altair Github Repository: http://github.com/altair-viz/altair/\n.. _Altair Documentation: http://altair-viz.github.io/\n.. _Vega-Lite: https://github.com/vega/vega-lite\n.. _ipyvega: https://github.com/vega/ipyvega\n\"\"\"\n\nDESCRIPTION = \"Altair: A declarative statistical visualization library for Python.\"\nNAME = \"altair\"\nPACKAGES = ['altair',\n 'altair.v1',\n 'altair.v1.tests',\n 'altair.v1.schema',\n 'altair.v1.schema._interface',\n 'altair.v1.schema._interface.tests',\n 'altair.v1.examples',\n 'altair.v1.examples.tests',\n 'altair.datasets',\n 'altair.datasets.tests',\n 'altair.expr',\n 'altair.expr.tests',\n 'altair.tests',\n 'altair.utils',\n 'altair.utils.tests',\n ]\nPACKAGE_DATA = {'altair': ['notebooks/*.ipynb',\n 'notebooks/*.html',\n 'notebooks/auto_examples/*.ipynb',\n 'v1/schema/*.json',\n 'v1/examples/*.json',\n 'v1/examples/json/*.json',\n 'datasets/*.json',\n 'expr/*.json']}\nAUTHOR = \"Brian E. Granger / Jake VanderPlas\"\nAUTHOR_EMAIL = \"[email protected] / [email protected]\"\nURL = 'http://altair-viz.github.io'\nDOWNLOAD_URL = 'http://github.com/altair-viz/altair/'\nLICENSE = 'BSD 3-clause'\nINSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega>=0.4.4']\n\n\nimport io\nimport os\nimport re\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n\ndef read(path, encoding='utf-8'):\n path = os.path.join(os.path.dirname(__file__), path)\n with io.open(path, encoding=encoding) as fp:\n return fp.read()\n\n\ndef version(path):\n \"\"\"Obtain the packge version from a python file e.g. 
pkg/__init__.py\n\n See <https://packaging.python.org/en/latest/single_source_version.html>.\n \"\"\"\n version_file = read(path)\n version_match = re.search(r\"\"\"^__version__ = ['\"]([^'\"]*)['\"]\"\"\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nVERSION = version('altair/__init__.py')\n\n\nsetup(name=NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n url=URL,\n download_url=DOWNLOAD_URL,\n license=LICENSE,\n packages=PACKAGES,\n package_data=PACKAGE_DATA,\n install_requires=INSTALL_REQUIRES,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5'],\n )\n", "path": "setup.py"}, {"content": "__version__ = '1.3.0.dev0'\n\nfrom .v1 import *\n", "path": "altair/__init__.py"}], "after_files": [{"content": "LONG_DESCRIPTION = \"\"\"\nAltair: A declarative statistical visualization library for Python.\n\nhttp://altair-viz.github.io/\n\nThis package provides a Python API for building statistical visualizations\nin a declarative manner. This API contains no actual visualization rendering\ncode, but instead emits JSON data structures following the `Vega-Lite`_\nspecification. For convenience, Altair can optionally use `ipyvega`_ to\nseamlessly display client-side renderings in the Jupyter notebook.\n\n.. image:: https://raw.githubusercontent.com/altair-viz/altair/master/images/cars.png\n\nPlease note that if you wish to use altair in the Jupyter Notebook, the\n`ipyvega`_ notebook extension must be enabled as follows::\n\n $ pip install altair\n $ pip install --upgrade notebook\n $ jupyter nbextension install --sys-prefix --py vega\n\nSee the `Altair Documentation`_ for tutorials, detailed installation\ninstructions, and examples.\nSee the `Altair Github Repository`_ for issues, bug reports, and contributions.\n\n.. _Altair Github Repository: http://github.com/altair-viz/altair/\n.. _Altair Documentation: http://altair-viz.github.io/\n.. _Vega-Lite: https://github.com/vega/vega-lite\n.. _ipyvega: https://github.com/vega/ipyvega\n\"\"\"\n\nDESCRIPTION = \"Altair: A declarative statistical visualization library for Python.\"\nNAME = \"altair\"\nPACKAGES = ['altair',\n 'altair.v1',\n 'altair.v1.tests',\n 'altair.v1.schema',\n 'altair.v1.schema._interface',\n 'altair.v1.schema._interface.tests',\n 'altair.v1.examples',\n 'altair.v1.examples.tests',\n 'altair.datasets',\n 'altair.datasets.tests',\n 'altair.expr',\n 'altair.expr.tests',\n 'altair.tests',\n 'altair.utils',\n 'altair.utils.tests',\n ]\nPACKAGE_DATA = {'altair': ['notebooks/*.ipynb',\n 'notebooks/*.html',\n 'notebooks/auto_examples/*.ipynb',\n 'v1/schema/*.json',\n 'v1/examples/*.json',\n 'v1/examples/json/*.json',\n 'datasets/*.json',\n 'expr/*.json']}\nAUTHOR = \"Brian E. 
Granger / Jake VanderPlas\"\nAUTHOR_EMAIL = \"[email protected] / [email protected]\"\nURL = 'http://altair-viz.github.io'\nDOWNLOAD_URL = 'http://github.com/altair-viz/altair/'\nLICENSE = 'BSD 3-clause'\nINSTALL_REQUIRES = ['traitlets>=4.3.1','ipython','pandas','vega==0.4.4']\n\n\nimport io\nimport os\nimport re\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n\ndef read(path, encoding='utf-8'):\n path = os.path.join(os.path.dirname(__file__), path)\n with io.open(path, encoding=encoding) as fp:\n return fp.read()\n\n\ndef version(path):\n \"\"\"Obtain the packge version from a python file e.g. pkg/__init__.py\n\n See <https://packaging.python.org/en/latest/single_source_version.html>.\n \"\"\"\n version_file = read(path)\n version_match = re.search(r\"\"\"^__version__ = ['\"]([^'\"]*)['\"]\"\"\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nVERSION = version('altair/__init__.py')\n\n\nsetup(name=NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n url=URL,\n download_url=DOWNLOAD_URL,\n license=LICENSE,\n packages=PACKAGES,\n package_data=PACKAGE_DATA,\n install_requires=INSTALL_REQUIRES,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5'],\n )\n", "path": "setup.py"}, {"content": "__version__ = '1.2.1.dev0'\n\nfrom .v1 import *\n", "path": "altair/__init__.py"}]}
| 1,544 | 226 |
gh_patches_debug_26558
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-4522
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix JUPYTERHUB_SINGLEUSER_APP after Notebook 7 release
### Bug description
With `notebook 6.5.4` it was possible to specify JUPYTERHUB_SINGLEUSER_APP='notebook' to run `Jupyter Notebook` instead of `JupyterLab`.
#### Expected behaviour
Jupyter Notebook is run in singleuser
#### Actual behaviour
`jupyterhub-singleuser` fails
### How to reproduce
Working image: `jupyter/base-notebook:notebook-6.5.4`
Failing image: `jupyter/base-notebook:notebook-7.0.0`
1. Run image: `docker run -it --rm jupyter/base-notebook:notebook-7.0.0 bash`
2. Run: `JUPYTERHUB_SINGLEUSER_APP='notebook' JUPYTERHUB_SERVICE_URL="127.0.0.1" jupyterhub-singleuser`
JupyterHub is not running inside the image, but I don't think that's the problem.
Output with Jupyter Notebook 7:
```
Traceback (most recent call last):
File "/opt/conda/bin/jupyterhub-singleuser", line 6, in <module>
from jupyterhub.singleuser import main
File "/opt/conda/lib/python3.11/site-packages/jupyterhub/singleuser/__init__.py", line 67, in <module>
from .app import SingleUserNotebookApp, main
File "/opt/conda/lib/python3.11/site-packages/jupyterhub/singleuser/app.py", line 31, in <module>
App = import_item(JUPYTERHUB_SINGLEUSER_APP)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/traitlets/utils/importstring.py", line 30, in import_item
module = __import__(package, fromlist=[obj])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'notebook.notebookapp'
```
Output with Jupyter Notebook 6:
```
[I 2023-07-25 20:59:48.574 SingleUserNotebookApp mixins:547] Starting jupyterhub single-user server version 4.0.1
[I 2023-07-25 20:59:48.574 SingleUserNotebookApp mixins:561] Extending notebook.notebookapp.NotebookApp from notebook 6.5.4
[W 2023-07-25 20:59:48.578 SingleUserNotebookApp configurable:200] Config option `open_browser` not recognized by `SingleUserNotebookApp`. Did you mean `browser`?
JUPYTERHUB_API_TOKEN env is required to run jupyterhub-singleuser. Did you launch it manually?
```
### Your personal set up
- OS:
<!-- [e.g. ubuntu 20.04, macOS 11.0] -->
- Version(s):
<!-- e.g. jupyterhub --version, python --version --->
<details><summary>Full environment</summary>
<!-- For reproduction, it's useful to have the full environment. For example, the output of `pip freeze` or `conda list` --->
```
# paste output of `pip freeze` or `conda list` here
```
</details>
<details><summary>Configuration</summary>
<!--
For JupyterHub, especially include information such as what Spawner and Authenticator are being used.
Be careful not to share any sensitive information.
You can paste jupyterhub_config.py below.
To exclude lots of comments and empty lines from auto-generated jupyterhub_config.py, you can do:
grep -v '\(^#\|^[[:space:]]*$\)' jupyterhub_config.py
-->
```python
# jupyterhub_config.py
```
</details>
<details><summary>Logs</summary>
<!--
Errors are often logged by jupyterhub. How you get logs depends on your deployment.
With kubernetes it might be:
kubectl get pod # hub pod name starts with hub...
kubectl logs hub-...
# or for a single-user server
kubectl logs jupyter-username
Or the-littlest-jupyterhub:
journalctl -u jupyterhub
# or for a single-user server
journalctl -u jupyter-username
-->
```
# paste relevant logs here, if any
```
</details>
--- END ISSUE ---
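The failure above comes down to `JUPYTERHUB_SINGLEUSER_APP='notebook'` resolving to `notebook.notebookapp.NotebookApp`, a module that no longer exists once Notebook 7 is installed. A minimal sketch of how that situation can be detected before importing, assuming only that the `notebook` package exposes `__version__` and that `notebook.notebookapp` ships with versions below 7 (as the tracebacks indicate):
```python
import importlib.util


def legacy_notebook_app_available():
    """Sketch: True only if notebook.notebookapp.NotebookApp is importable (Notebook < 7)."""
    try:
        import notebook
    except ImportError:
        return False
    major = int(notebook.__version__.split(".", 1)[0])
    if major >= 7:
        # Notebook 7 runs as a jupyter-server extension; the legacy module is gone.
        return False
    return importlib.util.find_spec("notebook.notebookapp") is not None
```
The patch later in this record takes a comparable route, raising an explicit ImportError with guidance when notebook>=7 is detected.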
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/singleuser/app.py`
Content:
```
1 """Make a single-user app based on the environment:
2
3 - $JUPYTERHUB_SINGLEUSER_APP, the base Application class, to be wrapped in JupyterHub authentication.
4 default: jupyter_server.serverapp.ServerApp
5
6 .. versionchanged:: 2.0
7
8 Default app changed to launch `jupyter labhub`.
9 Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.
10 """
11 import os
12
13 from traitlets import import_item
14
15 from .mixins import make_singleuser_app
16
17 JUPYTERHUB_SINGLEUSER_APP = os.environ.get("JUPYTERHUB_SINGLEUSER_APP", "")
18
19 # allow shortcut references
20 _app_shortcuts = {
21 "notebook": "notebook.notebookapp.NotebookApp",
22 "jupyter-server": "jupyter_server.serverapp.ServerApp",
23 "extension": "jupyter_server.serverapp.ServerApp",
24 }
25
26 JUPYTERHUB_SINGLEUSER_APP = _app_shortcuts.get(
27 JUPYTERHUB_SINGLEUSER_APP.replace("_", "-"), JUPYTERHUB_SINGLEUSER_APP
28 )
29
30 if JUPYTERHUB_SINGLEUSER_APP:
31 App = import_item(JUPYTERHUB_SINGLEUSER_APP)
32 else:
33 App = None
34 _import_error = None
35 for JUPYTERHUB_SINGLEUSER_APP in (
36 "jupyter_server.serverapp.ServerApp",
37 "notebook.notebookapp.NotebookApp",
38 ):
39 try:
40 App = import_item(JUPYTERHUB_SINGLEUSER_APP)
41 except ImportError as e:
42 if _import_error is None:
43 _import_error = e
44 continue
45 else:
46 break
47 if App is None:
48 raise _import_error
49
50
51 SingleUserNotebookApp = make_singleuser_app(App)
52
53
54 def main():
55 """Launch a jupyterhub single-user server"""
56 if not os.environ.get("JUPYTERHUB_SINGLEUSER_APP"):
57 # app not specified, launch jupyter-labhub by default,
58 # if jupyterlab is recent enough (3.1).
59 # This is a minimally extended ServerApp that does:
60 # 1. ensure lab extension is enabled, and
61 # 2. set default URL to `/lab`
62 import re
63
64 _version_pat = re.compile(r"(\d+)\.(\d+)")
65 try:
66 import jupyterlab
67 from jupyterlab.labhubapp import SingleUserLabApp
68
69 m = _version_pat.match(jupyterlab.__version__)
70 except Exception:
71 m = None
72
73 if m is not None:
74 version_tuple = tuple(int(v) for v in m.groups())
75 if version_tuple >= (3, 1):
76 return SingleUserLabApp.launch_instance()
77
78 return SingleUserNotebookApp.launch_instance()
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/jupyterhub/singleuser/app.py b/jupyterhub/singleuser/app.py
--- a/jupyterhub/singleuser/app.py
+++ b/jupyterhub/singleuser/app.py
@@ -6,7 +6,7 @@
.. versionchanged:: 2.0
Default app changed to launch `jupyter labhub`.
- Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.
+ Use JUPYTERHUB_SINGLEUSER_APP='notebook' for the legacy 'classic' notebook server (requires notebook<7).
"""
import os
@@ -27,7 +27,25 @@
JUPYTERHUB_SINGLEUSER_APP.replace("_", "-"), JUPYTERHUB_SINGLEUSER_APP
)
+
if JUPYTERHUB_SINGLEUSER_APP:
+ if JUPYTERHUB_SINGLEUSER_APP in {"notebook", _app_shortcuts["notebook"]}:
+ # better error for notebook v7, which uses jupyter-server
+ # when the legacy notebook server is requested
+ try:
+ from notebook import __version__
+ except ImportError:
+ # will raise later
+ pass
+ else:
+ # check if this failed because of notebook v7
+ _notebook_major_version = int(__version__.split(".", 1)[0])
+ if _notebook_major_version >= 7:
+ raise ImportError(
+ f"JUPYTERHUB_SINGLEUSER_APP={JUPYTERHUB_SINGLEUSER_APP} is not valid with notebook>=7 (have notebook=={__version__}).\n"
+ f"Leave $JUPYTERHUB_SINGLEUSER_APP unspecified (or use the default JUPYTERHUB_SINGLEUSER_APP=jupyter-server), "
+ 'and set `c.Spawner.default_url = "/tree"` to make notebook v7 the default UI.'
+ )
App = import_item(JUPYTERHUB_SINGLEUSER_APP)
else:
App = None
|
{"golden_diff": "diff --git a/jupyterhub/singleuser/app.py b/jupyterhub/singleuser/app.py\n--- a/jupyterhub/singleuser/app.py\n+++ b/jupyterhub/singleuser/app.py\n@@ -6,7 +6,7 @@\n .. versionchanged:: 2.0\n \n Default app changed to launch `jupyter labhub`.\n- Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.\n+ Use JUPYTERHUB_SINGLEUSER_APP='notebook' for the legacy 'classic' notebook server (requires notebook<7).\n \"\"\"\n import os\n \n@@ -27,7 +27,25 @@\n JUPYTERHUB_SINGLEUSER_APP.replace(\"_\", \"-\"), JUPYTERHUB_SINGLEUSER_APP\n )\n \n+\n if JUPYTERHUB_SINGLEUSER_APP:\n+ if JUPYTERHUB_SINGLEUSER_APP in {\"notebook\", _app_shortcuts[\"notebook\"]}:\n+ # better error for notebook v7, which uses jupyter-server\n+ # when the legacy notebook server is requested\n+ try:\n+ from notebook import __version__\n+ except ImportError:\n+ # will raise later\n+ pass\n+ else:\n+ # check if this failed because of notebook v7\n+ _notebook_major_version = int(__version__.split(\".\", 1)[0])\n+ if _notebook_major_version >= 7:\n+ raise ImportError(\n+ f\"JUPYTERHUB_SINGLEUSER_APP={JUPYTERHUB_SINGLEUSER_APP} is not valid with notebook>=7 (have notebook=={__version__}).\\n\"\n+ f\"Leave $JUPYTERHUB_SINGLEUSER_APP unspecified (or use the default JUPYTERHUB_SINGLEUSER_APP=jupyter-server), \"\n+ 'and set `c.Spawner.default_url = \"/tree\"` to make notebook v7 the default UI.'\n+ )\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\n else:\n App = None\n", "issue": "Fix JUPYTERHUB_SINGLEUSER_APP after Notebook 7 release\n### Bug description\r\n\r\nWith `notebook 6.5.4` it was possible to specify JUPYTERHUB_SINGLEUSER_APP='notebook' to run `Jupyter Notebook` instead of `JupyterLab`.\r\n\r\n#### Expected behaviour\r\n\r\nJupyter Notebook is run in singleuser\r\n\r\n#### Actual behaviour\r\n\r\n`jupyterhub-singleuser` fails\r\n\r\n### How to reproduce\r\n\r\nWorking image: `jupyter/base-notebook:notebook-6.5.4`\r\nFailing image: `jupyter/base-notebook:notebook-7.0.0`\r\n\r\n1. Run image: `docker run -it --rm jupyter/base-notebook:notebook-7.0.0 bash`\r\n2. 
Run: `JUPYTERHUB_SINGLEUSER_APP='notebook' JUPYTERHUB_SERVICE_URL=\"127.0.0.1\" jupyterhub-singleuser`\r\n\r\nJupyterHub is not running inside the image, but I don't think that's the problem.\r\n\r\nOutput with Jupyter Notebook 7:\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/conda/bin/jupyterhub-singleuser\", line 6, in <module>\r\n from jupyterhub.singleuser import main\r\n File \"/opt/conda/lib/python3.11/site-packages/jupyterhub/singleuser/__init__.py\", line 67, in <module>\r\n from .app import SingleUserNotebookApp, main\r\n File \"/opt/conda/lib/python3.11/site-packages/jupyterhub/singleuser/app.py\", line 31, in <module>\r\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/conda/lib/python3.11/site-packages/traitlets/utils/importstring.py\", line 30, in import_item\r\n module = __import__(package, fromlist=[obj])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nModuleNotFoundError: No module named 'notebook.notebookapp'\r\n```\r\n\r\nOutput with Jupyter Notebook 6:\r\n```\r\n[I 2023-07-25 20:59:48.574 SingleUserNotebookApp mixins:547] Starting jupyterhub single-user server version 4.0.1\r\n[I 2023-07-25 20:59:48.574 SingleUserNotebookApp mixins:561] Extending notebook.notebookapp.NotebookApp from notebook 6.5.4\r\n[W 2023-07-25 20:59:48.578 SingleUserNotebookApp configurable:200] Config option `open_browser` not recognized by `SingleUserNotebookApp`. Did you mean `browser`?\r\nJUPYTERHUB_API_TOKEN env is required to run jupyterhub-singleuser. Did you launch it manually?\r\n```\r\n\r\n### Your personal set up\r\n\r\n\r\n\r\n - OS:\r\n <!-- [e.g. ubuntu 20.04, macOS 11.0] -->\r\n - Version(s):\r\n <!-- e.g. jupyterhub --version, python --version --->\r\n\r\n<details><summary>Full environment</summary>\r\n<!-- For reproduction, it's useful to have the full environment. For example, the output of `pip freeze` or `conda list` --->\r\n\r\n```\r\n# paste output of `pip freeze` or `conda list` here\r\n```\r\n</details>\r\n\r\n<details><summary>Configuration</summary>\r\n<!--\r\nFor JupyterHub, especially include information such as what Spawner and Authenticator are being used.\r\nBe careful not to share any sensitive information.\r\nYou can paste jupyterhub_config.py below.\r\nTo exclude lots of comments and empty lines from auto-generated jupyterhub_config.py, you can do:\r\n grep -v '\\(^#\\|^[[:space:]]*$\\)' jupyterhub_config.py\r\n-->\r\n\r\n```python\r\n# jupyterhub_config.py\r\n```\r\n</details>\r\n\r\n<details><summary>Logs</summary>\r\n<!--\r\nErrors are often logged by jupytehub. How you get logs depends on your deployment.\r\nWith kubernetes it might be:\r\n\r\n kubectl get pod # hub pod name starts with hub...\r\n kubectl logs hub-...\r\n # or for a single-user server\r\n kubectl logs jupyter-username\r\n\r\nOr the-littlest-jupyterhub:\r\n\r\n journalctl -u jupyterhub\r\n # or for a single-user server\r\n journalctl -u jupyter-username\r\n-->\r\n\r\n```\r\n# paste relevant logs here, if any\r\n```\r\n</details>\r\n\n", "before_files": [{"content": "\"\"\"Make a single-user app based on the environment:\n\n- $JUPYTERHUB_SINGLEUSER_APP, the base Application class, to be wrapped in JupyterHub authentication.\n default: jupyter_server.serverapp.ServerApp\n\n.. 
versionchanged:: 2.0\n\n Default app changed to launch `jupyter labhub`.\n Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.\n\"\"\"\nimport os\n\nfrom traitlets import import_item\n\nfrom .mixins import make_singleuser_app\n\nJUPYTERHUB_SINGLEUSER_APP = os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\", \"\")\n\n# allow shortcut references\n_app_shortcuts = {\n \"notebook\": \"notebook.notebookapp.NotebookApp\",\n \"jupyter-server\": \"jupyter_server.serverapp.ServerApp\",\n \"extension\": \"jupyter_server.serverapp.ServerApp\",\n}\n\nJUPYTERHUB_SINGLEUSER_APP = _app_shortcuts.get(\n JUPYTERHUB_SINGLEUSER_APP.replace(\"_\", \"-\"), JUPYTERHUB_SINGLEUSER_APP\n)\n\nif JUPYTERHUB_SINGLEUSER_APP:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\nelse:\n App = None\n _import_error = None\n for JUPYTERHUB_SINGLEUSER_APP in (\n \"jupyter_server.serverapp.ServerApp\",\n \"notebook.notebookapp.NotebookApp\",\n ):\n try:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\n except ImportError as e:\n if _import_error is None:\n _import_error = e\n continue\n else:\n break\n if App is None:\n raise _import_error\n\n\nSingleUserNotebookApp = make_singleuser_app(App)\n\n\ndef main():\n \"\"\"Launch a jupyterhub single-user server\"\"\"\n if not os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\"):\n # app not specified, launch jupyter-labhub by default,\n # if jupyterlab is recent enough (3.1).\n # This is a minimally extended ServerApp that does:\n # 1. ensure lab extension is enabled, and\n # 2. set default URL to `/lab`\n import re\n\n _version_pat = re.compile(r\"(\\d+)\\.(\\d+)\")\n try:\n import jupyterlab\n from jupyterlab.labhubapp import SingleUserLabApp\n\n m = _version_pat.match(jupyterlab.__version__)\n except Exception:\n m = None\n\n if m is not None:\n version_tuple = tuple(int(v) for v in m.groups())\n if version_tuple >= (3, 1):\n return SingleUserLabApp.launch_instance()\n\n return SingleUserNotebookApp.launch_instance()\n", "path": "jupyterhub/singleuser/app.py"}], "after_files": [{"content": "\"\"\"Make a single-user app based on the environment:\n\n- $JUPYTERHUB_SINGLEUSER_APP, the base Application class, to be wrapped in JupyterHub authentication.\n default: jupyter_server.serverapp.ServerApp\n\n.. 
versionchanged:: 2.0\n\n Default app changed to launch `jupyter labhub`.\n Use JUPYTERHUB_SINGLEUSER_APP='notebook' for the legacy 'classic' notebook server (requires notebook<7).\n\"\"\"\nimport os\n\nfrom traitlets import import_item\n\nfrom .mixins import make_singleuser_app\n\nJUPYTERHUB_SINGLEUSER_APP = os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\", \"\")\n\n# allow shortcut references\n_app_shortcuts = {\n \"notebook\": \"notebook.notebookapp.NotebookApp\",\n \"jupyter-server\": \"jupyter_server.serverapp.ServerApp\",\n \"extension\": \"jupyter_server.serverapp.ServerApp\",\n}\n\nJUPYTERHUB_SINGLEUSER_APP = _app_shortcuts.get(\n JUPYTERHUB_SINGLEUSER_APP.replace(\"_\", \"-\"), JUPYTERHUB_SINGLEUSER_APP\n)\n\n\nif JUPYTERHUB_SINGLEUSER_APP:\n if JUPYTERHUB_SINGLEUSER_APP in {\"notebook\", _app_shortcuts[\"notebook\"]}:\n # better error for notebook v7, which uses jupyter-server\n # when the legacy notebook server is requested\n try:\n from notebook import __version__\n except ImportError:\n # will raise later\n pass\n else:\n # check if this failed because of notebook v7\n _notebook_major_version = int(__version__.split(\".\", 1)[0])\n if _notebook_major_version >= 7:\n raise ImportError(\n f\"JUPYTERHUB_SINGLEUSER_APP={JUPYTERHUB_SINGLEUSER_APP} is not valid with notebook>=7 (have notebook=={__version__}).\\n\"\n f\"Leave $JUPYTERHUB_SINGLEUSER_APP unspecified (or use the default JUPYTERHUB_SINGLEUSER_APP=jupyter-server), \"\n 'and set `c.Spawner.default_url = \"/tree\"` to make notebook v7 the default UI.'\n )\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\nelse:\n App = None\n _import_error = None\n for JUPYTERHUB_SINGLEUSER_APP in (\n \"jupyter_server.serverapp.ServerApp\",\n \"notebook.notebookapp.NotebookApp\",\n ):\n try:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\n except ImportError as e:\n if _import_error is None:\n _import_error = e\n continue\n else:\n break\n if App is None:\n raise _import_error\n\n\nSingleUserNotebookApp = make_singleuser_app(App)\n\n\ndef main():\n \"\"\"Launch a jupyterhub single-user server\"\"\"\n if not os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\"):\n # app not specified, launch jupyter-labhub by default,\n # if jupyterlab is recent enough (3.1).\n # This is a minimally extended ServerApp that does:\n # 1. ensure lab extension is enabled, and\n # 2. set default URL to `/lab`\n import re\n\n _version_pat = re.compile(r\"(\\d+)\\.(\\d+)\")\n try:\n import jupyterlab\n from jupyterlab.labhubapp import SingleUserLabApp\n\n m = _version_pat.match(jupyterlab.__version__)\n except Exception:\n m = None\n\n if m is not None:\n version_tuple = tuple(int(v) for v in m.groups())\n if version_tuple >= (3, 1):\n return SingleUserLabApp.launch_instance()\n\n return SingleUserNotebookApp.launch_instance()\n", "path": "jupyterhub/singleuser/app.py"}]}
| 2,040 | 448 |
gh_patches_debug_14864
|
rasdani/github-patches
|
git_diff
|
benoitc__gunicorn-1931
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Must explicitly define `setuptools` as a dependency
When running gunicorn in a hardened Python docker image (with most of the dependencies removed) `setuptools` might be missing.
For instance:
```
Traceback (most recent call last):
File "/app/manage-docker.binary.runfiles/__main__/server.py", line 1, in <module>
from gunicorn.app.base import BaseApplication
File "/app/manage-docker.binary.runfiles/pypi__gunicorn_19_7_1/gunicorn/app/base.py", line 12, in <module>
from gunicorn import util
File "/app/manage-docker.binary.runfiles/pypi__gunicorn_19_7_1/gunicorn/util.py", line 12, in <module>
import pkg_resources
ImportError: No module named pkg_resources
```
Can be fixed by defining `setuptools` as a direct dependency within the project's `requirements.txt` file; however, it could be fixed at the gunicorn codebase level by using `install_requires = ['setuptools']` in setup.py.
--- END ISSUE ---
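A minimal sketch of the `install_requires` approach suggested in the report; the version floor used here is an illustrative assumption, not a known requirement:
```python
# setup.py (sketch): declare setuptools explicitly so pkg_resources is importable
from setuptools import setup, find_packages

setup(
    name="gunicorn",
    packages=find_packages(exclude=["examples", "tests"]),
    install_requires=[
        # pkg_resources is provided by setuptools, so declare it as a runtime dependency.
        "setuptools>=3.0",
    ],
)
```
With that declaration in place, `pip install gunicorn` pulls in setuptools even on images that stripped it out.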
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 import os
7 import sys
8
9 from setuptools import setup, find_packages
10 from setuptools.command.test import test as TestCommand
11
12 from gunicorn import __version__
13
14
15 CLASSIFIERS = [
16 'Development Status :: 4 - Beta',
17 'Environment :: Other Environment',
18 'Intended Audience :: Developers',
19 'License :: OSI Approved :: MIT License',
20 'Operating System :: MacOS :: MacOS X',
21 'Operating System :: POSIX',
22 'Programming Language :: Python',
23 'Programming Language :: Python :: 3',
24 'Programming Language :: Python :: 3.4',
25 'Programming Language :: Python :: 3.5',
26 'Programming Language :: Python :: 3.6',
27 'Programming Language :: Python :: 3.7',
28 'Programming Language :: Python :: 3 :: Only',
29 'Topic :: Internet',
30 'Topic :: Utilities',
31 'Topic :: Software Development :: Libraries :: Python Modules',
32 'Topic :: Internet :: WWW/HTTP',
33 'Topic :: Internet :: WWW/HTTP :: WSGI',
34 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',
35 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']
36
37 # read long description
38 with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:
39 long_description = f.read()
40
41 # read dev requirements
42 fname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')
43 with open(fname) as f:
44 tests_require = [l.strip() for l in f.readlines()]
45
46 class PyTestCommand(TestCommand):
47 user_options = [
48 ("cov", None, "measure coverage")
49 ]
50
51 def initialize_options(self):
52 TestCommand.initialize_options(self)
53 self.cov = None
54
55 def finalize_options(self):
56 TestCommand.finalize_options(self)
57 self.test_args = ['tests']
58 if self.cov:
59 self.test_args += ['--cov', 'gunicorn']
60 self.test_suite = True
61
62 def run_tests(self):
63 import pytest
64 errno = pytest.main(self.test_args)
65 sys.exit(errno)
66
67
68 extra_require = {
69 'gevent': ['gevent>=0.13'],
70 'eventlet': ['eventlet>=0.9.7'],
71 'tornado': ['tornado>=0.2'],
72 'gthread': [],
73 }
74
75 setup(
76 name='gunicorn',
77 version=__version__,
78
79 description='WSGI HTTP Server for UNIX',
80 long_description=long_description,
81 author='Benoit Chesneau',
82 author_email='[email protected]',
83 license='MIT',
84 url='http://gunicorn.org',
85
86 python_requires='>=3.4',
87 classifiers=CLASSIFIERS,
88 zip_safe=False,
89 packages=find_packages(exclude=['examples', 'tests']),
90 include_package_data=True,
91
92 tests_require=tests_require,
93 cmdclass={'test': PyTestCommand},
94
95 entry_points="""
96 [console_scripts]
97 gunicorn=gunicorn.app.wsgiapp:run
98 gunicorn_paster=gunicorn.app.pasterapp:run
99
100 [paste.server_runner]
101 main=gunicorn.app.pasterapp:paste_server
102 """,
103 extras_require=extra_require,
104 )
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -65,6 +65,14 @@
sys.exit(errno)
+install_requires = [
+ # We depend on functioning pkg_resources.working_set.add_entry() and
+ # pkg_resources.load_entry_point(). These both work as of 3.0 which
+ # is the first version to support Python 3.4 which we require as a
+ # floor.
+ 'setuptools>=3.0',
+]
+
extra_require = {
'gevent': ['gevent>=0.13'],
'eventlet': ['eventlet>=0.9.7'],
@@ -84,6 +92,7 @@
url='http://gunicorn.org',
python_requires='>=3.4',
+ install_requires=install_requires,
classifiers=CLASSIFIERS,
zip_safe=False,
packages=find_packages(exclude=['examples', 'tests']),
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -65,6 +65,14 @@\n sys.exit(errno)\n \n \n+install_requires = [\n+ # We depend on functioning pkg_resources.working_set.add_entry() and\n+ # pkg_resources.load_entry_point(). These both work as of 3.0 which\n+ # is the first version to support Python 3.4 which we require as a\n+ # floor.\n+ 'setuptools>=3.0',\n+]\n+\n extra_require = {\n 'gevent': ['gevent>=0.13'],\n 'eventlet': ['eventlet>=0.9.7'],\n@@ -84,6 +92,7 @@\n url='http://gunicorn.org',\n \n python_requires='>=3.4',\n+ install_requires=install_requires,\n classifiers=CLASSIFIERS,\n zip_safe=False,\n packages=find_packages(exclude=['examples', 'tests']),\n", "issue": "Must explicitly define `setuptools` as a dependency\nWhen running gunicorn in a hardened Python docker image (with most of the dependencies removed) `setuptools` might be missing.\r\n\r\nFor instance:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/app/manage-docker.binary.runfiles/__main__/server.py\", line 1, in <module>\r\n from gunicorn.app.base import BaseApplication\r\n File \"/app/manage-docker.binary.runfiles/pypi__gunicorn_19_7_1/gunicorn/app/base.py\", line 12, in <module>\r\n from gunicorn import util\r\n File \"/app/manage-docker.binary.runfiles/pypi__gunicorn_19_7_1/gunicorn/util.py\", line 12, in <module>\r\n import pkg_resources\r\nImportError: No module named pkg_resources\r\n```\r\n\r\nCan be fixed by defining `setuptools` as a direct dependency within the project' `requirements.txt` file, however, it could be fix at the gunicorn codebase level by using `install_requires = ['setuptools']` in setup.py. \n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\nfrom gunicorn import __version__\n\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Internet',\n 'Topic :: Utilities',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']\n\n# read long description\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:\n long_description = f.read()\n\n# read dev requirements\nfname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')\nwith open(fname) as f:\n tests_require = [l.strip() for l in f.readlines()]\n\nclass PyTestCommand(TestCommand):\n user_options = [\n (\"cov\", None, \"measure coverage\")\n ]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.cov = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['tests']\n if self.cov:\n self.test_args += ['--cov', 'gunicorn']\n self.test_suite = True\n\n def run_tests(self):\n import 
pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\nextra_require = {\n 'gevent': ['gevent>=0.13'],\n 'eventlet': ['eventlet>=0.9.7'],\n 'tornado': ['tornado>=0.2'],\n 'gthread': [],\n}\n\nsetup(\n name='gunicorn',\n version=__version__,\n\n description='WSGI HTTP Server for UNIX',\n long_description=long_description,\n author='Benoit Chesneau',\n author_email='[email protected]',\n license='MIT',\n url='http://gunicorn.org',\n\n python_requires='>=3.4',\n classifiers=CLASSIFIERS,\n zip_safe=False,\n packages=find_packages(exclude=['examples', 'tests']),\n include_package_data=True,\n\n tests_require=tests_require,\n cmdclass={'test': PyTestCommand},\n\n entry_points=\"\"\"\n [console_scripts]\n gunicorn=gunicorn.app.wsgiapp:run\n gunicorn_paster=gunicorn.app.pasterapp:run\n\n [paste.server_runner]\n main=gunicorn.app.pasterapp:paste_server\n \"\"\",\n extras_require=extra_require,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\nfrom setuptools.command.test import test as TestCommand\n\nfrom gunicorn import __version__\n\n\nCLASSIFIERS = [\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Internet',\n 'Topic :: Utilities',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Internet :: WWW/HTTP :: WSGI :: Server',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content']\n\n# read long description\nwith open(os.path.join(os.path.dirname(__file__), 'README.rst')) as f:\n long_description = f.read()\n\n# read dev requirements\nfname = os.path.join(os.path.dirname(__file__), 'requirements_test.txt')\nwith open(fname) as f:\n tests_require = [l.strip() for l in f.readlines()]\n\nclass PyTestCommand(TestCommand):\n user_options = [\n (\"cov\", None, \"measure coverage\")\n ]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.cov = None\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = ['tests']\n if self.cov:\n self.test_args += ['--cov', 'gunicorn']\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ninstall_requires = [\n # We depend on functioning pkg_resources.working_set.add_entry() and\n # pkg_resources.load_entry_point(). 
These both work as of 3.0 which\n # is the first version to support Python 3.4 which we require as a\n # floor.\n 'setuptools>=3.0',\n]\n\nextra_require = {\n 'gevent': ['gevent>=0.13'],\n 'eventlet': ['eventlet>=0.9.7'],\n 'tornado': ['tornado>=0.2'],\n 'gthread': [],\n}\n\nsetup(\n name='gunicorn',\n version=__version__,\n\n description='WSGI HTTP Server for UNIX',\n long_description=long_description,\n author='Benoit Chesneau',\n author_email='[email protected]',\n license='MIT',\n url='http://gunicorn.org',\n\n python_requires='>=3.4',\n install_requires=install_requires,\n classifiers=CLASSIFIERS,\n zip_safe=False,\n packages=find_packages(exclude=['examples', 'tests']),\n include_package_data=True,\n\n tests_require=tests_require,\n cmdclass={'test': PyTestCommand},\n\n entry_points=\"\"\"\n [console_scripts]\n gunicorn=gunicorn.app.wsgiapp:run\n gunicorn_paster=gunicorn.app.pasterapp:run\n\n [paste.server_runner]\n main=gunicorn.app.pasterapp:paste_server\n \"\"\",\n extras_require=extra_require,\n)\n", "path": "setup.py"}]}
| 1,417 | 216 |
gh_patches_debug_4585
|
rasdani/github-patches
|
git_diff
|
spack__spack-10984
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Installation Issue: bowtie build error
### Steps to reproduce the issue
```console
[centos] ~: spack install bowtie
==> Installing bowtie
==> Searching for binary cache of bowtie
==> Warning: No Spack mirrors are currently configured
==> No binary for bowtie found: installing from source
==> Fetching https://github.com/BenLangmead/bowtie/archive/v1.2.2_p1.tar.gz
######################################################################## 100.0%
==> Staging archive: /spack/var/spack/stage/bowtie-1.2.2_p1-se66bd5p6mfiop65vwqpr4jh6uwvpxsr/v1.2.2_p1.tar.gz
==> Created stage in /spack/var/spack/stage/bowtie-1.2.2_p1-se66bd5p6mfiop65vwqpr4jh6uwvpxsr
==> No patches needed for bowtie
==> Building bowtie [MakefilePackage]
==> Executing phase: 'edit'
==> Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j2' 'NO_TBB=1'
4 errors found in build log:
18 In file included from sequence_io.h:12:0,
19 from multikey_qsort.h:8,
20 from diff_sample.h:13,
21 from blockwise_sa.h:19,
22 from ebwt.h:27,
23 from ebwt_build.cpp:11:
>> 24 pat.h:6:18: fatal error: zlib.h: No such file or directory
25 #include <zlib.h>
26 ^
27 compilation terminated.
28 In file included from sequence_io.h:12:0,
29 from multikey_qsort.h:8,
30 from diff_sample.h:13,
31 from blockwise_sa.h:19,
32 from ebwt.h:27,
33 from ebwt_build.cpp:11:
>> 34 pat.h:6:18: fatal error: zlib.h: No such file or directory
35 #include <zlib.h>
36 ^
37 compilation terminated.
>> 38 make: *** [bowtie-build-l] Error 1
39 make: *** Waiting for unfinished jobs....
>> 40 make: *** [bowtie-build-s] Error 1
See build log for details:
/spack/var/spack/stage/bowtie-1.2.2_p1-se66bd5p6mfiop65vwqpr4jh6uwvpxsr/bowtie-1.2.2_p1/spack-build.out
```
### Platform and user environment
Please report your OS here:
```commandline
$ uname -a
Linux 4b5226354c71 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
```
Bowtie installation fails due to a missing zlib dependency.
--- END ISSUE ---
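For reference, a short sketch of how a Spack recipe declares a zlib dependency so that `zlib.h` lands on the include path during the build; everything except the dependency directives is elided:
```python
# Sketch of var/spack/repos/builtin/packages/bowtie/package.py (abridged)
from spack import *


class Bowtie(MakefilePackage):
    """Abridged: only the dependency directives relevant to the zlib.h failure."""

    variant('tbb', default=False, description='Use Intel thread building block')

    depends_on('tbb', when='+tbb')
    depends_on('zlib')  # supplies zlib.h / libz for pat.h's #include <zlib.h>
```
Spack's compiler wrappers then add the zlib prefix to the include and library search paths automatically.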
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/bowtie/package.py`
Content:
```
1 # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class Bowtie(MakefilePackage):
10 """Bowtie is an ultrafast, memory-efficient short read aligner
11 for short DNA sequences (reads) from next-gen sequencers."""
12
13 homepage = "https://sourceforge.net/projects/bowtie-bio/"
14 url = "https://github.com/BenLangmead/bowtie/archive/v1.2.0.tar.gz"
15
16 # The bowtie project git tagged and GitHub released a v1.2.2,
17 # discovered/fixed a bug, git tagged a v1.2.2_p1 and moved the
18 # 1.2.2 release to use it rather than making a new `1.2.2_p1`
19 # release.
20 #
21 # We point both of the Spack versions at the same tarball so they
22 # build the binaries that are on the release page as v1.2.2
23 version('1.2.2_p1', sha256='e1b02b2e77a0d44a3dd411209fa1f44f0c4ee304ef5cc83f098275085740d5a1')
24 version('1.2.2', sha256='e1b02b2e77a0d44a3dd411209fa1f44f0c4ee304ef5cc83f098275085740d5a1', url="https://github.com/BenLangmead/bowtie/archive/v1.2.2_p1.tar.gz")
25 version('1.2.1.1', sha256='1b38408b88f61d18d7ff28b2470a8cfeefccb3fc59fd46e4cc62e23874e52c20')
26 version('1.2.1', sha256='b2a7c8c879cb08f00a82665bee43e1d4861de44a87912c54d168e44c90869728')
27 version('1.2.0', sha256='dc4e7951b8eca56ce7714c47fd4e84f72badd5312ee9546c912af1963570f894')
28 # Keeping the old 1.2 version around for reproducibility, it's not
29 # clearly identical to 1.2.0.
30 version('1.2', md5='6d97f0ea1a65af11d17cc270cfac4af9', url='https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.0/bowtie-1.2-source.zip')
31
32 # Feel free to tighten this. I know that v1.2.2 (aka v1.2.2_p1)
33 # builds with %[email protected] and fails to build with %[email protected]. I'm
34 # not sure whether or not it works with other versions in the
35 # interval.
36 conflicts('%gcc@8:', when='@1.2.2:')
37
38 variant('tbb', default=False, description='Use Intel thread building block')
39
40 depends_on('tbb', when='+tbb')
41
42 # See: https://github.com/BenLangmead/bowtie/issues/87, a
43 # different fix is in the FreeBSD ports/package tree
44 # https://svnweb.freebsd.org/ports?view=revision&revision=483954
45 patch('issue-87.patch', when='%[email protected]:')
46
47 def edit(self, spec, prefix):
48 makefile = FileFilter('Makefile')
49 makefile.filter('CC = .*', 'CC = ' + env['CC'])
50 makefile.filter('CXX = .*', 'CPP = ' + env['CXX'])
51
52 def build(self, spec, prefix):
53 if '+tbb' in spec:
54 make()
55 else:
56 make('NO_TBB=1')
57
58 def install(self, spec, prefix):
59 make('prefix={0}'.format(self.prefix), 'install')
60
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/var/spack/repos/builtin/packages/bowtie/package.py b/var/spack/repos/builtin/packages/bowtie/package.py
--- a/var/spack/repos/builtin/packages/bowtie/package.py
+++ b/var/spack/repos/builtin/packages/bowtie/package.py
@@ -38,6 +38,7 @@
variant('tbb', default=False, description='Use Intel thread building block')
depends_on('tbb', when='+tbb')
+ depends_on('zlib')
# See: https://github.com/BenLangmead/bowtie/issues/87, a
# different fix is in the FreeBSD ports/package tree
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/bowtie/package.py b/var/spack/repos/builtin/packages/bowtie/package.py\n--- a/var/spack/repos/builtin/packages/bowtie/package.py\n+++ b/var/spack/repos/builtin/packages/bowtie/package.py\n@@ -38,6 +38,7 @@\n variant('tbb', default=False, description='Use Intel thread building block')\n \n depends_on('tbb', when='+tbb')\n+ depends_on('zlib')\n \n # See: https://github.com/BenLangmead/bowtie/issues/87, a\n # different fix is in the FreeBSD ports/package tree\n", "issue": "Installation Issue: bowtie build error\n### Steps to reproduce the issue\r\n\r\n```console\r\n[centos] ~: spack install bowtie\r\n==> Installing bowtie\r\n==> Searching for binary cache of bowtie\r\n==> Warning: No Spack mirrors are currently configured\r\n==> No binary for bowtie found: installing from source\r\n==> Fetching https://github.com/BenLangmead/bowtie/archive/v1.2.2_p1.tar.gz\r\n######################################################################## 100.0%\r\n==> Staging archive: /spack/var/spack/stage/bowtie-1.2.2_p1-se66bd5p6mfiop65vwqpr4jh6uwvpxsr/v1.2.2_p1.tar.gz\r\n==> Created stage in /spack/var/spack/stage/bowtie-1.2.2_p1-se66bd5p6mfiop65vwqpr4jh6uwvpxsr\r\n==> No patches needed for bowtie\r\n==> Building bowtie [MakefilePackage]\r\n==> Executing phase: 'edit'\r\n==> Executing phase: 'build'\r\n==> Error: ProcessError: Command exited with status 2:\r\n 'make' '-j2' 'NO_TBB=1'\r\n\r\n4 errors found in build log:\r\n 18 In file included from sequence_io.h:12:0,\r\n 19 from multikey_qsort.h:8,\r\n 20 from diff_sample.h:13,\r\n 21 from blockwise_sa.h:19,\r\n 22 from ebwt.h:27,\r\n 23 from ebwt_build.cpp:11:\r\n >> 24 pat.h:6:18: fatal error: zlib.h: No such file or directory\r\n 25 #include <zlib.h>\r\n 26 ^\r\n\r\n 27 compilation terminated.\r\n 28 In file included from sequence_io.h:12:0,\r\n 29 from multikey_qsort.h:8,\r\n 30 from diff_sample.h:13,\r\n 31 from blockwise_sa.h:19,\r\n 32 from ebwt.h:27,\r\n 33 from ebwt_build.cpp:11:\r\n >> 34 pat.h:6:18: fatal error: zlib.h: No such file or directory\r\n 35 #include <zlib.h>\r\n 36 ^\r\n 37 compilation terminated.\r\n >> 38 make: *** [bowtie-build-l] Error 1\r\n 39 make: *** Waiting for unfinished jobs....\r\n >> 40 make: *** [bowtie-build-s] Error 1\r\n\r\nSee build log for details:\r\n /spack/var/spack/stage/bowtie-1.2.2_p1-se66bd5p6mfiop65vwqpr4jh6uwvpxsr/bowtie-1.2.2_p1/spack-build.out\r\n```\r\n\r\n### Platform and user environment\r\n\r\nPlease report your OS here:\r\n```commandline\r\n$ uname -a\r\nLinux 4b5226354c71 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n``` \r\nBowtie installation fails with missing zlib dependency. \r\n\r\n\n", "before_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Bowtie(MakefilePackage):\n \"\"\"Bowtie is an ultrafast, memory-efficient short read aligner\n for short DNA sequences (reads) from next-gen sequencers.\"\"\"\n\n homepage = \"https://sourceforge.net/projects/bowtie-bio/\"\n url = \"https://github.com/BenLangmead/bowtie/archive/v1.2.0.tar.gz\"\n\n # The bowtie project git tagged and GitHub released a v1.2.2,\n # discovered/fixed a bug, git tagged a v1.2.2_p1 and moved the\n # 1.2.2 release to use it rather than making a new `1.2.2_p1`\n # release.\n #\n # We point both of the Spack versions at the same tarball so they\n # build the binaries that are on the release page as v1.2.2\n version('1.2.2_p1', sha256='e1b02b2e77a0d44a3dd411209fa1f44f0c4ee304ef5cc83f098275085740d5a1')\n version('1.2.2', sha256='e1b02b2e77a0d44a3dd411209fa1f44f0c4ee304ef5cc83f098275085740d5a1', url=\"https://github.com/BenLangmead/bowtie/archive/v1.2.2_p1.tar.gz\")\n version('1.2.1.1', sha256='1b38408b88f61d18d7ff28b2470a8cfeefccb3fc59fd46e4cc62e23874e52c20')\n version('1.2.1', sha256='b2a7c8c879cb08f00a82665bee43e1d4861de44a87912c54d168e44c90869728')\n version('1.2.0', sha256='dc4e7951b8eca56ce7714c47fd4e84f72badd5312ee9546c912af1963570f894')\n # Keeping the old 1.2 version around for reproducibility, it's not\n # clearly identical to 1.2.0.\n version('1.2', md5='6d97f0ea1a65af11d17cc270cfac4af9', url='https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.0/bowtie-1.2-source.zip')\n\n # Feel free to tighten this. I know that v1.2.2 (aka v1.2.2_p1)\n # builds with %[email protected] and fails to build with %[email protected]. I'm\n # not sure whether or not it works with other versions in the\n # interval.\n conflicts('%gcc@8:', when='@1.2.2:')\n\n variant('tbb', default=False, description='Use Intel thread building block')\n\n depends_on('tbb', when='+tbb')\n\n # See: https://github.com/BenLangmead/bowtie/issues/87, a\n # different fix is in the FreeBSD ports/package tree\n # https://svnweb.freebsd.org/ports?view=revision&revision=483954\n patch('issue-87.patch', when='%[email protected]:')\n\n def edit(self, spec, prefix):\n makefile = FileFilter('Makefile')\n makefile.filter('CC = .*', 'CC = ' + env['CC'])\n makefile.filter('CXX = .*', 'CPP = ' + env['CXX'])\n\n def build(self, spec, prefix):\n if '+tbb' in spec:\n make()\n else:\n make('NO_TBB=1')\n\n def install(self, spec, prefix):\n make('prefix={0}'.format(self.prefix), 'install')\n", "path": "var/spack/repos/builtin/packages/bowtie/package.py"}], "after_files": [{"content": "# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Bowtie(MakefilePackage):\n \"\"\"Bowtie is an ultrafast, memory-efficient short read aligner\n for short DNA sequences (reads) from next-gen sequencers.\"\"\"\n\n homepage = \"https://sourceforge.net/projects/bowtie-bio/\"\n url = \"https://github.com/BenLangmead/bowtie/archive/v1.2.0.tar.gz\"\n\n # The bowtie project git tagged and GitHub released a v1.2.2,\n # discovered/fixed a bug, git tagged a v1.2.2_p1 and moved the\n # 1.2.2 release to use it rather than making a new `1.2.2_p1`\n # release.\n #\n # We point both of the Spack versions at the same tarball so they\n # build the binaries that are on the release page as v1.2.2\n version('1.2.2_p1', sha256='e1b02b2e77a0d44a3dd411209fa1f44f0c4ee304ef5cc83f098275085740d5a1')\n version('1.2.2', sha256='e1b02b2e77a0d44a3dd411209fa1f44f0c4ee304ef5cc83f098275085740d5a1', url=\"https://github.com/BenLangmead/bowtie/archive/v1.2.2_p1.tar.gz\")\n version('1.2.1.1', sha256='1b38408b88f61d18d7ff28b2470a8cfeefccb3fc59fd46e4cc62e23874e52c20')\n version('1.2.1', sha256='b2a7c8c879cb08f00a82665bee43e1d4861de44a87912c54d168e44c90869728')\n version('1.2.0', sha256='dc4e7951b8eca56ce7714c47fd4e84f72badd5312ee9546c912af1963570f894')\n # Keeping the old 1.2 version around for reproducibility, it's not\n # clearly identical to 1.2.0.\n version('1.2', md5='6d97f0ea1a65af11d17cc270cfac4af9', url='https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.0/bowtie-1.2-source.zip')\n\n # Feel free to tighten this. I know that v1.2.2 (aka v1.2.2_p1)\n # builds with %[email protected] and fails to build with %[email protected]. I'm\n # not sure whether or not it works with other versions in the\n # interval.\n conflicts('%gcc@8:', when='@1.2.2:')\n\n variant('tbb', default=False, description='Use Intel thread building block')\n\n depends_on('tbb', when='+tbb')\n depends_on('zlib')\n\n # See: https://github.com/BenLangmead/bowtie/issues/87, a\n # different fix is in the FreeBSD ports/package tree\n # https://svnweb.freebsd.org/ports?view=revision&revision=483954\n patch('issue-87.patch', when='%[email protected]:')\n\n def edit(self, spec, prefix):\n makefile = FileFilter('Makefile')\n makefile.filter('CC = .*', 'CC = ' + env['CC'])\n makefile.filter('CXX = .*', 'CPP = ' + env['CXX'])\n\n def build(self, spec, prefix):\n if '+tbb' in spec:\n make()\n else:\n make('NO_TBB=1')\n\n def install(self, spec, prefix):\n make('prefix={0}'.format(self.prefix), 'install')\n", "path": "var/spack/repos/builtin/packages/bowtie/package.py"}]}
| 2,219 | 146 |
gh_patches_debug_4178
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-12049
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'On my own' device - Merging a user is not working
## Observed behavior
Observed while integration testing the [v0.16.1-beta1](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1) release.
When I try to merge a user created through 'On my own' I am getting an "Invalid URL" error in the console. Note that creating a new account through the same flow is working correctly. This issue is caused by the changes made in https://github.com/learningequality/kolibri/pull/12028 and is not extant in [v0.16.1-beta0](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta0).
https://github.com/learningequality/kolibri/assets/79847249/30daa3ca-918c-4c15-901b-c74c08b96466
## Expected behavior
Fully functional 'Merge accounts' user flow.
## Steps to reproduce the issue
1. Install [v0.16.1-beta1](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1).
2. Setup a full device as a server and another device by going through the 'On my own' setup flow.
3. Attempt to merge the user from the 'On my own' device' to the server facility.
## Logs
[logs.zip](https://github.com/learningequality/kolibri/files/14850735/logs.zip)
## Usage Details
[v0.16.1-beta1](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1)
Windows 11, Ubuntu 22 - Chrome
--- END ISSUE ---
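To make the "Invalid URL" symptom easier to reason about, here is a small sketch of the kind of check the `validator(baseurl)` call in the code below performs, using Django's stock `URLValidator`; whether `kolibri.utils.urls.validator` behaves identically is an assumption:
```python
from django.core.exceptions import ValidationError
from django.core.validators import URLValidator

_validate_url = URLValidator()


def is_valid_baseurl(baseurl):
    """Sketch: True only if baseurl parses as a full URL (scheme + host)."""
    try:
        _validate_url(baseurl)
    except ValidationError:
        return False
    return True


# is_valid_baseurl("")                          -> False (empty string)
# is_valid_baseurl("192.168.0.5:8080")          -> False (no scheme)
# is_valid_baseurl("http://192.168.0.5:8080/")  -> True
```
Any request that reaches the viewset without a well-formed `baseurl` query parameter is therefore rejected before the remote facility is ever contacted.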
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/plugins/user_profile/viewsets.py`
Content:
```
1 import requests
2 from django.contrib.auth import login
3 from django.core.exceptions import ValidationError as DjangoValidationError
4 from rest_framework.exceptions import ValidationError
5 from rest_framework.response import Response
6 from rest_framework.views import APIView
7
8 from .utils import TokenGenerator
9 from kolibri.core.auth.models import FacilityUser
10 from kolibri.core.utils.urls import reverse_remote
11 from kolibri.utils.urls import validator
12
13
14 class OnMyOwnSetupViewset(APIView):
15 """
16 Viewset to determine if the facility has been setup as an "On my own setup" facility.
17 """
18
19 def get(self, request, format=None):
20 if request.user.is_anonymous:
21 self.permission_denied(request)
22 user_facility = self.request.user.facility
23 return Response(
24 {
25 "on_my_own_setup": user_facility.on_my_own_setup,
26 }
27 )
28
29
30 class RemoteFacilityUserViewset(APIView):
31 def get(self, request):
32 baseurl = request.query_params.get("baseurl", "")
33 try:
34 validator(baseurl)
35 except DjangoValidationError as e:
36 raise ValidationError(detail=str(e))
37 username = request.query_params.get("username", None)
38 facility = request.query_params.get("facility", None)
39 if username is None or facility is None:
40 raise ValidationError(detail="Both username and facility are required")
41 url = reverse_remote(baseurl, "kolibri:core:publicsearchuser-list")
42 try:
43 response = requests.get(
44 url, params={"facility": facility, "search": username}
45 )
46 if response.status_code == 200:
47 return Response(response.json())
48 else:
49 return Response({})
50 except Exception as e:
51 raise ValidationError(detail=str(e))
52
53
54 class RemoteFacilityUserAuthenticatedViewset(APIView):
55 def post(self, request, *args, **kwargs):
56 baseurl = request.query_params.get("baseurl", "")
57 try:
58 validator(baseurl)
59 except DjangoValidationError as e:
60 raise ValidationError(detail=str(e))
61 username = request.data.get("username", None)
62 facility = request.data.get("facility", None)
63 password = request.data.get("password", None)
64 if username is None or facility is None:
65 raise ValidationError(detail="Both username and facility are required")
66 url = reverse_remote(baseurl, "kolibri:core:publicuser-list")
67 params = {"facility": facility, "search": username}
68
69 # adding facility so auth works when learners can login without password:
70 username = "username={}&facility={}".format(username, facility)
71
72 auth = requests.auth.HTTPBasicAuth(username, password)
73 try:
74 response = requests.get(url, params=params, verify=False, auth=auth)
75 if response.status_code == 200:
76 return Response(response.json())
77 else:
78 return Response({"error": response.json()["detail"]})
79 except Exception as e:
80 raise ValidationError(detail=str(e))
81
82
83 class LoginMergedUserViewset(APIView):
84 """
85 Viewset to login into kolibri using the merged user,
86 after the old user has been deleted
87 """
88
89 def post(self, request):
90 pk = request.data.get("pk", None)
91 token = request.data.get("token", None)
92 new_user = FacilityUser.objects.get(pk=pk)
93 if not TokenGenerator().check_token(new_user, token):
94 return Response({"error": "Unauthorized"}, status=401)
95 login(request, new_user)
96 return Response({"success": True})
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kolibri/plugins/user_profile/viewsets.py b/kolibri/plugins/user_profile/viewsets.py
--- a/kolibri/plugins/user_profile/viewsets.py
+++ b/kolibri/plugins/user_profile/viewsets.py
@@ -53,7 +53,7 @@
class RemoteFacilityUserAuthenticatedViewset(APIView):
def post(self, request, *args, **kwargs):
- baseurl = request.query_params.get("baseurl", "")
+ baseurl = request.data.get("baseurl", "")
try:
validator(baseurl)
except DjangoValidationError as e:
|
{"golden_diff": "diff --git a/kolibri/plugins/user_profile/viewsets.py b/kolibri/plugins/user_profile/viewsets.py\n--- a/kolibri/plugins/user_profile/viewsets.py\n+++ b/kolibri/plugins/user_profile/viewsets.py\n@@ -53,7 +53,7 @@\n \n class RemoteFacilityUserAuthenticatedViewset(APIView):\n def post(self, request, *args, **kwargs):\n- baseurl = request.query_params.get(\"baseurl\", \"\")\n+ baseurl = request.data.get(\"baseurl\", \"\")\n try:\n validator(baseurl)\n except DjangoValidationError as e:\n", "issue": "'On my own' device - Merging a user is not working\n## Observed behavior\r\nObserved while integration testing the [v0.16.1-beta1 ](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1) release.\r\nWhen I try to merge a user created through 'On my own' I am getting an \"Invalid URL\" error in the console. Note that creating a new account through the same flow is working correctly. This issue is caused by the changes made in https://github.com/learningequality/kolibri/pull/12028 and is not extant in [v0.16.1-beta0](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta0).\r\n\r\nhttps://github.com/learningequality/kolibri/assets/79847249/30daa3ca-918c-4c15-901b-c74c08b96466\r\n\r\n## Expected behavior\r\n\r\nFully functional 'Merge accounts' user flow. \r\n\r\n## Steps to reproduce the issue\r\n\r\n1. Install [v0.16.1-beta1 ](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1).\r\n2. Setup a full device as a server and another device by going through the 'On my own' setup flow.\r\n3. Attempt to merge the user from the 'On my own' device' to the server facility.\r\n\r\n## Logs\r\n\r\n[logs.zip](https://github.com/learningequality/kolibri/files/14850735/logs.zip)\r\n\r\n## Usage Details\r\n[v0.16.1-beta1 ](https://github.com/learningequality/kolibri/releases/tag/v0.16.1-beta1)\r\nWindows 11, Ubuntu 22 - Chrome\n", "before_files": [{"content": "import requests\nfrom django.contrib.auth import login\nfrom django.core.exceptions import ValidationError as DjangoValidationError\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom .utils import TokenGenerator\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.utils.urls import reverse_remote\nfrom kolibri.utils.urls import validator\n\n\nclass OnMyOwnSetupViewset(APIView):\n \"\"\"\n Viewset to determine if the facility has been setup as an \"On my own setup\" facility.\n \"\"\"\n\n def get(self, request, format=None):\n if request.user.is_anonymous:\n self.permission_denied(request)\n user_facility = self.request.user.facility\n return Response(\n {\n \"on_my_own_setup\": user_facility.on_my_own_setup,\n }\n )\n\n\nclass RemoteFacilityUserViewset(APIView):\n def get(self, request):\n baseurl = request.query_params.get(\"baseurl\", \"\")\n try:\n validator(baseurl)\n except DjangoValidationError as e:\n raise ValidationError(detail=str(e))\n username = request.query_params.get(\"username\", None)\n facility = request.query_params.get(\"facility\", None)\n if username is None or facility is None:\n raise ValidationError(detail=\"Both username and facility are required\")\n url = reverse_remote(baseurl, \"kolibri:core:publicsearchuser-list\")\n try:\n response = requests.get(\n url, params={\"facility\": facility, \"search\": username}\n )\n if response.status_code == 200:\n return Response(response.json())\n else:\n return Response({})\n except Exception as e:\n raise 
ValidationError(detail=str(e))\n\n\nclass RemoteFacilityUserAuthenticatedViewset(APIView):\n def post(self, request, *args, **kwargs):\n baseurl = request.query_params.get(\"baseurl\", \"\")\n try:\n validator(baseurl)\n except DjangoValidationError as e:\n raise ValidationError(detail=str(e))\n username = request.data.get(\"username\", None)\n facility = request.data.get(\"facility\", None)\n password = request.data.get(\"password\", None)\n if username is None or facility is None:\n raise ValidationError(detail=\"Both username and facility are required\")\n url = reverse_remote(baseurl, \"kolibri:core:publicuser-list\")\n params = {\"facility\": facility, \"search\": username}\n\n # adding facility so auth works when learners can login without password:\n username = \"username={}&facility={}\".format(username, facility)\n\n auth = requests.auth.HTTPBasicAuth(username, password)\n try:\n response = requests.get(url, params=params, verify=False, auth=auth)\n if response.status_code == 200:\n return Response(response.json())\n else:\n return Response({\"error\": response.json()[\"detail\"]})\n except Exception as e:\n raise ValidationError(detail=str(e))\n\n\nclass LoginMergedUserViewset(APIView):\n \"\"\"\n Viewset to login into kolibri using the merged user,\n after the old user has been deleted\n \"\"\"\n\n def post(self, request):\n pk = request.data.get(\"pk\", None)\n token = request.data.get(\"token\", None)\n new_user = FacilityUser.objects.get(pk=pk)\n if not TokenGenerator().check_token(new_user, token):\n return Response({\"error\": \"Unauthorized\"}, status=401)\n login(request, new_user)\n return Response({\"success\": True})\n", "path": "kolibri/plugins/user_profile/viewsets.py"}], "after_files": [{"content": "import requests\nfrom django.contrib.auth import login\nfrom django.core.exceptions import ValidationError as DjangoValidationError\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom .utils import TokenGenerator\nfrom kolibri.core.auth.models import FacilityUser\nfrom kolibri.core.utils.urls import reverse_remote\nfrom kolibri.utils.urls import validator\n\n\nclass OnMyOwnSetupViewset(APIView):\n \"\"\"\n Viewset to determine if the facility has been setup as an \"On my own setup\" facility.\n \"\"\"\n\n def get(self, request, format=None):\n if request.user.is_anonymous:\n self.permission_denied(request)\n user_facility = self.request.user.facility\n return Response(\n {\n \"on_my_own_setup\": user_facility.on_my_own_setup,\n }\n )\n\n\nclass RemoteFacilityUserViewset(APIView):\n def get(self, request):\n baseurl = request.query_params.get(\"baseurl\", \"\")\n try:\n validator(baseurl)\n except DjangoValidationError as e:\n raise ValidationError(detail=str(e))\n username = request.query_params.get(\"username\", None)\n facility = request.query_params.get(\"facility\", None)\n if username is None or facility is None:\n raise ValidationError(detail=\"Both username and facility are required\")\n url = reverse_remote(baseurl, \"kolibri:core:publicsearchuser-list\")\n try:\n response = requests.get(\n url, params={\"facility\": facility, \"search\": username}\n )\n if response.status_code == 200:\n return Response(response.json())\n else:\n return Response({})\n except Exception as e:\n raise ValidationError(detail=str(e))\n\n\nclass RemoteFacilityUserAuthenticatedViewset(APIView):\n def post(self, request, *args, **kwargs):\n baseurl = request.data.get(\"baseurl\", \"\")\n 
try:\n validator(baseurl)\n except DjangoValidationError as e:\n raise ValidationError(detail=str(e))\n username = request.data.get(\"username\", None)\n facility = request.data.get(\"facility\", None)\n password = request.data.get(\"password\", None)\n if username is None or facility is None:\n raise ValidationError(detail=\"Both username and facility are required\")\n url = reverse_remote(baseurl, \"kolibri:core:publicuser-list\")\n params = {\"facility\": facility, \"search\": username}\n\n # adding facility so auth works when learners can login without password:\n username = \"username={}&facility={}\".format(username, facility)\n\n auth = requests.auth.HTTPBasicAuth(username, password)\n try:\n response = requests.get(url, params=params, verify=False, auth=auth)\n if response.status_code == 200:\n return Response(response.json())\n else:\n return Response({\"error\": response.json()[\"detail\"]})\n except Exception as e:\n raise ValidationError(detail=str(e))\n\n\nclass LoginMergedUserViewset(APIView):\n \"\"\"\n Viewset to login into kolibri using the merged user,\n after the old user has been deleted\n \"\"\"\n\n def post(self, request):\n pk = request.data.get(\"pk\", None)\n token = request.data.get(\"token\", None)\n new_user = FacilityUser.objects.get(pk=pk)\n if not TokenGenerator().check_token(new_user, token):\n return Response({\"error\": \"Unauthorized\"}, status=401)\n login(request, new_user)\n return Response({\"success\": True})\n", "path": "kolibri/plugins/user_profile/viewsets.py"}]}
| 1,590 | 127 |
gh_patches_debug_41859
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-823
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tag-search command to search tags via their contents instead of names.
Currently, it can be difficult to find the specific command for a tag even if you know it exists. A command to allow you to search through tag bodies would help with finding the right tags. For example, doing `!tag-search backtick` would return `!code-block` (and any other tags that include the word backtick).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/tags.py`
Content:
```
1 import logging
2 import re
3 import time
4 from typing import Dict, List, Optional
5
6 from discord import Colour, Embed
7 from discord.ext.commands import Cog, Context, group
8
9 from bot.bot import Bot
10 from bot.constants import Channels, Cooldowns, MODERATION_ROLES, Roles
11 from bot.converters import TagContentConverter, TagNameConverter
12 from bot.decorators import with_role
13 from bot.pagination import LinePaginator
14
15 log = logging.getLogger(__name__)
16
17 TEST_CHANNELS = (
18 Channels.bot_commands,
19 Channels.helpers
20 )
21
22 REGEX_NON_ALPHABET = re.compile(r"[^a-z]", re.MULTILINE & re.IGNORECASE)
23
24
25 class Tags(Cog):
26 """Save new tags and fetch existing tags."""
27
28 def __init__(self, bot: Bot):
29 self.bot = bot
30 self.tag_cooldowns = {}
31
32 self._cache = {}
33 self._last_fetch: float = 0.0
34
35 async def _get_tags(self, is_forced: bool = False) -> None:
36 """Get all tags."""
37 # refresh only when there's a more than 5m gap from last call.
38 time_now: float = time.time()
39 if is_forced or not self._last_fetch or time_now - self._last_fetch > 5 * 60:
40 tags = await self.bot.api_client.get('bot/tags')
41 self._cache = {tag['title'].lower(): tag for tag in tags}
42 self._last_fetch = time_now
43
44 @staticmethod
45 def _fuzzy_search(search: str, target: str) -> int:
46 """A simple scoring algorithm based on how many letters are found / total, with order in mind."""
47 current, index = 0, 0
48 _search = REGEX_NON_ALPHABET.sub('', search.lower())
49 _targets = iter(REGEX_NON_ALPHABET.split(target.lower()))
50 _target = next(_targets)
51 try:
52 while True:
53 while index < len(_target) and _search[current] == _target[index]:
54 current += 1
55 index += 1
56 index, _target = 0, next(_targets)
57 except (StopIteration, IndexError):
58 pass
59 return current / len(_search) * 100
60
61 def _get_suggestions(self, tag_name: str, thresholds: Optional[List[int]] = None) -> List[str]:
62 """Return a list of suggested tags."""
63 scores: Dict[str, int] = {
64 tag_title: Tags._fuzzy_search(tag_name, tag['title'])
65 for tag_title, tag in self._cache.items()
66 }
67
68 thresholds = thresholds or [100, 90, 80, 70, 60]
69
70 for threshold in thresholds:
71 suggestions = [
72 self._cache[tag_title]
73 for tag_title, matching_score in scores.items()
74 if matching_score >= threshold
75 ]
76 if suggestions:
77 return suggestions
78
79 return []
80
81 async def _get_tag(self, tag_name: str) -> list:
82 """Get a specific tag."""
83 await self._get_tags()
84 found = [self._cache.get(tag_name.lower(), None)]
85 if not found[0]:
86 return self._get_suggestions(tag_name)
87 return found
88
89 @group(name='tags', aliases=('tag', 't'), invoke_without_command=True)
90 async def tags_group(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:
91 """Show all known tags, a single tag, or run a subcommand."""
92 await ctx.invoke(self.get_command, tag_name=tag_name)
93
94 @tags_group.command(name='get', aliases=('show', 'g'))
95 async def get_command(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:
96 """Get a specified tag, or a list of all tags if no tag is specified."""
97 def _command_on_cooldown(tag_name: str) -> bool:
98 """
99 Check if the command is currently on cooldown, on a per-tag, per-channel basis.
100
101 The cooldown duration is set in constants.py.
102 """
103 now = time.time()
104
105 cooldown_conditions = (
106 tag_name
107 and tag_name in self.tag_cooldowns
108 and (now - self.tag_cooldowns[tag_name]["time"]) < Cooldowns.tags
109 and self.tag_cooldowns[tag_name]["channel"] == ctx.channel.id
110 )
111
112 if cooldown_conditions:
113 return True
114 return False
115
116 if _command_on_cooldown(tag_name):
117 time_left = Cooldowns.tags - (time.time() - self.tag_cooldowns[tag_name]["time"])
118 log.info(
119 f"{ctx.author} tried to get the '{tag_name}' tag, but the tag is on cooldown. "
120 f"Cooldown ends in {time_left:.1f} seconds."
121 )
122 return
123
124 await self._get_tags()
125
126 if tag_name is not None:
127 founds = await self._get_tag(tag_name)
128
129 if len(founds) == 1:
130 tag = founds[0]
131 if ctx.channel.id not in TEST_CHANNELS:
132 self.tag_cooldowns[tag_name] = {
133 "time": time.time(),
134 "channel": ctx.channel.id
135 }
136 await ctx.send(embed=Embed.from_dict(tag['embed']))
137 elif founds and len(tag_name) >= 3:
138 await ctx.send(embed=Embed(
139 title='Did you mean ...',
140 description='\n'.join(tag['title'] for tag in founds[:10])
141 ))
142
143 else:
144 tags = self._cache.values()
145 if not tags:
146 await ctx.send(embed=Embed(
147 description="**There are no tags in the database!**",
148 colour=Colour.red()
149 ))
150 else:
151 embed: Embed = Embed(title="**Current tags**")
152 await LinePaginator.paginate(
153 sorted(f"**»** {tag['title']}" for tag in tags),
154 ctx,
155 embed,
156 footer_text="To show a tag, type !tags <tagname>.",
157 empty=False,
158 max_lines=15
159 )
160
161 @tags_group.command(name='set', aliases=('add', 's'))
162 @with_role(*MODERATION_ROLES)
163 async def set_command(
164 self,
165 ctx: Context,
166 tag_name: TagNameConverter,
167 *,
168 tag_content: TagContentConverter,
169 ) -> None:
170 """Create a new tag."""
171 body = {
172 'title': tag_name.lower().strip(),
173 'embed': {
174 'title': tag_name,
175 'description': tag_content
176 }
177 }
178
179 await self.bot.api_client.post('bot/tags', json=body)
180 self._cache[tag_name.lower()] = await self.bot.api_client.get(f'bot/tags/{tag_name}')
181
182 log.debug(f"{ctx.author} successfully added the following tag to our database: \n"
183 f"tag_name: {tag_name}\n"
184 f"tag_content: '{tag_content}'\n")
185
186 await ctx.send(embed=Embed(
187 title="Tag successfully added",
188 description=f"**{tag_name}** added to tag database.",
189 colour=Colour.blurple()
190 ))
191
192 @tags_group.command(name='edit', aliases=('e', ))
193 @with_role(*MODERATION_ROLES)
194 async def edit_command(
195 self,
196 ctx: Context,
197 tag_name: TagNameConverter,
198 *,
199 tag_content: TagContentConverter,
200 ) -> None:
201 """Edit an existing tag."""
202 body = {
203 'embed': {
204 'title': tag_name,
205 'description': tag_content
206 }
207 }
208
209 await self.bot.api_client.patch(f'bot/tags/{tag_name}', json=body)
210 self._cache[tag_name.lower()] = await self.bot.api_client.get(f'bot/tags/{tag_name}')
211
212 log.debug(f"{ctx.author} successfully edited the following tag in our database: \n"
213 f"tag_name: {tag_name}\n"
214 f"tag_content: '{tag_content}'\n")
215
216 await ctx.send(embed=Embed(
217 title="Tag successfully edited",
218 description=f"**{tag_name}** edited in the database.",
219 colour=Colour.blurple()
220 ))
221
222 @tags_group.command(name='delete', aliases=('remove', 'rm', 'd'))
223 @with_role(Roles.admins, Roles.owners)
224 async def delete_command(self, ctx: Context, *, tag_name: TagNameConverter) -> None:
225 """Remove a tag from the database."""
226 await self.bot.api_client.delete(f'bot/tags/{tag_name}')
227 self._cache.pop(tag_name.lower(), None)
228
229 log.debug(f"{ctx.author} successfully deleted the tag called '{tag_name}'")
230 await ctx.send(embed=Embed(
231 title=tag_name,
232 description=f"Tag successfully removed: {tag_name}.",
233 colour=Colour.blurple()
234 ))
235
236
237 def setup(bot: Bot) -> None:
238 """Load the Tags cog."""
239 bot.add_cog(Tags(bot))
240
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bot/cogs/tags.py b/bot/cogs/tags.py
--- a/bot/cogs/tags.py
+++ b/bot/cogs/tags.py
@@ -1,7 +1,7 @@
import logging
import re
import time
-from typing import Dict, List, Optional
+from typing import Callable, Dict, Iterable, List, Optional
from discord import Colour, Embed
from discord.ext.commands import Cog, Context, group
@@ -86,11 +86,80 @@
return self._get_suggestions(tag_name)
return found
+ async def _get_tags_via_content(self, check: Callable[[Iterable], bool], keywords: str) -> list:
+ """
+ Search for tags via contents.
+
+ `predicate` will be the built-in any, all, or a custom callable. Must return a bool.
+ """
+ await self._get_tags()
+
+ keywords_processed: List[str] = []
+ for keyword in keywords.split(','):
+ keyword_sanitized = keyword.strip().casefold()
+ if not keyword_sanitized:
+ # this happens when there are leading / trailing / consecutive comma.
+ continue
+ keywords_processed.append(keyword_sanitized)
+
+ if not keywords_processed:
+ # after sanitizing, we can end up with an empty list, for example when keywords is ','
+ # in that case, we simply want to search for such keywords directly instead.
+ keywords_processed = [keywords]
+
+ matching_tags = []
+ for tag in self._cache.values():
+ if check(query in tag['embed']['description'].casefold() for query in keywords_processed):
+ matching_tags.append(tag)
+
+ return matching_tags
+
+ async def _send_matching_tags(self, ctx: Context, keywords: str, matching_tags: list) -> None:
+ """Send the result of matching tags to user."""
+ if not matching_tags:
+ pass
+ elif len(matching_tags) == 1:
+ await ctx.send(embed=Embed().from_dict(matching_tags[0]['embed']))
+ else:
+ is_plural = keywords.strip().count(' ') > 0 or keywords.strip().count(',') > 0
+ embed = Embed(
+ title=f"Here are the tags containing the given keyword{'s' * is_plural}:",
+ description='\n'.join(tag['title'] for tag in matching_tags[:10])
+ )
+ await LinePaginator.paginate(
+ sorted(f"**»** {tag['title']}" for tag in matching_tags),
+ ctx,
+ embed,
+ footer_text="To show a tag, type !tags <tagname>.",
+ empty=False,
+ max_lines=15
+ )
+
@group(name='tags', aliases=('tag', 't'), invoke_without_command=True)
async def tags_group(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:
"""Show all known tags, a single tag, or run a subcommand."""
await ctx.invoke(self.get_command, tag_name=tag_name)
+ @tags_group.group(name='search', invoke_without_command=True)
+ async def search_tag_content(self, ctx: Context, *, keywords: str) -> None:
+ """
+ Search inside tags' contents for tags. Allow searching for multiple keywords separated by comma.
+
+ Only search for tags that has ALL the keywords.
+ """
+ matching_tags = await self._get_tags_via_content(all, keywords)
+ await self._send_matching_tags(ctx, keywords, matching_tags)
+
+ @search_tag_content.command(name='any')
+ async def search_tag_content_any_keyword(self, ctx: Context, *, keywords: Optional[str] = None) -> None:
+ """
+ Search inside tags' contents for tags. Allow searching for multiple keywords separated by comma.
+
+ Search for tags that has ANY of the keywords.
+ """
+ matching_tags = await self._get_tags_via_content(any, keywords or 'any')
+ await self._send_matching_tags(ctx, keywords, matching_tags)
+
@tags_group.command(name='get', aliases=('show', 'g'))
async def get_command(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:
"""Get a specified tag, or a list of all tags if no tag is specified."""
|
{"golden_diff": "diff --git a/bot/cogs/tags.py b/bot/cogs/tags.py\n--- a/bot/cogs/tags.py\n+++ b/bot/cogs/tags.py\n@@ -1,7 +1,7 @@\n import logging\n import re\n import time\n-from typing import Dict, List, Optional\n+from typing import Callable, Dict, Iterable, List, Optional\n \n from discord import Colour, Embed\n from discord.ext.commands import Cog, Context, group\n@@ -86,11 +86,80 @@\n return self._get_suggestions(tag_name)\n return found\n \n+ async def _get_tags_via_content(self, check: Callable[[Iterable], bool], keywords: str) -> list:\n+ \"\"\"\n+ Search for tags via contents.\n+\n+ `predicate` will be the built-in any, all, or a custom callable. Must return a bool.\n+ \"\"\"\n+ await self._get_tags()\n+\n+ keywords_processed: List[str] = []\n+ for keyword in keywords.split(','):\n+ keyword_sanitized = keyword.strip().casefold()\n+ if not keyword_sanitized:\n+ # this happens when there are leading / trailing / consecutive comma.\n+ continue\n+ keywords_processed.append(keyword_sanitized)\n+\n+ if not keywords_processed:\n+ # after sanitizing, we can end up with an empty list, for example when keywords is ','\n+ # in that case, we simply want to search for such keywords directly instead.\n+ keywords_processed = [keywords]\n+\n+ matching_tags = []\n+ for tag in self._cache.values():\n+ if check(query in tag['embed']['description'].casefold() for query in keywords_processed):\n+ matching_tags.append(tag)\n+\n+ return matching_tags\n+\n+ async def _send_matching_tags(self, ctx: Context, keywords: str, matching_tags: list) -> None:\n+ \"\"\"Send the result of matching tags to user.\"\"\"\n+ if not matching_tags:\n+ pass\n+ elif len(matching_tags) == 1:\n+ await ctx.send(embed=Embed().from_dict(matching_tags[0]['embed']))\n+ else:\n+ is_plural = keywords.strip().count(' ') > 0 or keywords.strip().count(',') > 0\n+ embed = Embed(\n+ title=f\"Here are the tags containing the given keyword{'s' * is_plural}:\",\n+ description='\\n'.join(tag['title'] for tag in matching_tags[:10])\n+ )\n+ await LinePaginator.paginate(\n+ sorted(f\"**\u00bb** {tag['title']}\" for tag in matching_tags),\n+ ctx,\n+ embed,\n+ footer_text=\"To show a tag, type !tags <tagname>.\",\n+ empty=False,\n+ max_lines=15\n+ )\n+\n @group(name='tags', aliases=('tag', 't'), invoke_without_command=True)\n async def tags_group(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Show all known tags, a single tag, or run a subcommand.\"\"\"\n await ctx.invoke(self.get_command, tag_name=tag_name)\n \n+ @tags_group.group(name='search', invoke_without_command=True)\n+ async def search_tag_content(self, ctx: Context, *, keywords: str) -> None:\n+ \"\"\"\n+ Search inside tags' contents for tags. Allow searching for multiple keywords separated by comma.\n+\n+ Only search for tags that has ALL the keywords.\n+ \"\"\"\n+ matching_tags = await self._get_tags_via_content(all, keywords)\n+ await self._send_matching_tags(ctx, keywords, matching_tags)\n+\n+ @search_tag_content.command(name='any')\n+ async def search_tag_content_any_keyword(self, ctx: Context, *, keywords: Optional[str] = None) -> None:\n+ \"\"\"\n+ Search inside tags' contents for tags. 
Allow searching for multiple keywords separated by comma.\n+\n+ Search for tags that has ANY of the keywords.\n+ \"\"\"\n+ matching_tags = await self._get_tags_via_content(any, keywords or 'any')\n+ await self._send_matching_tags(ctx, keywords, matching_tags)\n+\n @tags_group.command(name='get', aliases=('show', 'g'))\n async def get_command(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Get a specified tag, or a list of all tags if no tag is specified.\"\"\"\n", "issue": "tag-search command to search tags via their contents instead of names.\nCurrently, it can be difficult to find the specific command for a tag even if you know it exists. A command to allow you to search through tag bodies would help with finding the right tags. For example doing `!tag-search backtick` would return `!code-block` (and any other tags that include the word backtick).\n", "before_files": [{"content": "import logging\nimport re\nimport time\nfrom typing import Dict, List, Optional\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import Cog, Context, group\n\nfrom bot.bot import Bot\nfrom bot.constants import Channels, Cooldowns, MODERATION_ROLES, Roles\nfrom bot.converters import TagContentConverter, TagNameConverter\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\nlog = logging.getLogger(__name__)\n\nTEST_CHANNELS = (\n Channels.bot_commands,\n Channels.helpers\n)\n\nREGEX_NON_ALPHABET = re.compile(r\"[^a-z]\", re.MULTILINE & re.IGNORECASE)\n\n\nclass Tags(Cog):\n \"\"\"Save new tags and fetch existing tags.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.tag_cooldowns = {}\n\n self._cache = {}\n self._last_fetch: float = 0.0\n\n async def _get_tags(self, is_forced: bool = False) -> None:\n \"\"\"Get all tags.\"\"\"\n # refresh only when there's a more than 5m gap from last call.\n time_now: float = time.time()\n if is_forced or not self._last_fetch or time_now - self._last_fetch > 5 * 60:\n tags = await self.bot.api_client.get('bot/tags')\n self._cache = {tag['title'].lower(): tag for tag in tags}\n self._last_fetch = time_now\n\n @staticmethod\n def _fuzzy_search(search: str, target: str) -> int:\n \"\"\"A simple scoring algorithm based on how many letters are found / total, with order in mind.\"\"\"\n current, index = 0, 0\n _search = REGEX_NON_ALPHABET.sub('', search.lower())\n _targets = iter(REGEX_NON_ALPHABET.split(target.lower()))\n _target = next(_targets)\n try:\n while True:\n while index < len(_target) and _search[current] == _target[index]:\n current += 1\n index += 1\n index, _target = 0, next(_targets)\n except (StopIteration, IndexError):\n pass\n return current / len(_search) * 100\n\n def _get_suggestions(self, tag_name: str, thresholds: Optional[List[int]] = None) -> List[str]:\n \"\"\"Return a list of suggested tags.\"\"\"\n scores: Dict[str, int] = {\n tag_title: Tags._fuzzy_search(tag_name, tag['title'])\n for tag_title, tag in self._cache.items()\n }\n\n thresholds = thresholds or [100, 90, 80, 70, 60]\n\n for threshold in thresholds:\n suggestions = [\n self._cache[tag_title]\n for tag_title, matching_score in scores.items()\n if matching_score >= threshold\n ]\n if suggestions:\n return suggestions\n\n return []\n\n async def _get_tag(self, tag_name: str) -> list:\n \"\"\"Get a specific tag.\"\"\"\n await self._get_tags()\n found = [self._cache.get(tag_name.lower(), None)]\n if not found[0]:\n return self._get_suggestions(tag_name)\n return found\n\n @group(name='tags', aliases=('tag', 't'), 
invoke_without_command=True)\n async def tags_group(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Show all known tags, a single tag, or run a subcommand.\"\"\"\n await ctx.invoke(self.get_command, tag_name=tag_name)\n\n @tags_group.command(name='get', aliases=('show', 'g'))\n async def get_command(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Get a specified tag, or a list of all tags if no tag is specified.\"\"\"\n def _command_on_cooldown(tag_name: str) -> bool:\n \"\"\"\n Check if the command is currently on cooldown, on a per-tag, per-channel basis.\n\n The cooldown duration is set in constants.py.\n \"\"\"\n now = time.time()\n\n cooldown_conditions = (\n tag_name\n and tag_name in self.tag_cooldowns\n and (now - self.tag_cooldowns[tag_name][\"time\"]) < Cooldowns.tags\n and self.tag_cooldowns[tag_name][\"channel\"] == ctx.channel.id\n )\n\n if cooldown_conditions:\n return True\n return False\n\n if _command_on_cooldown(tag_name):\n time_left = Cooldowns.tags - (time.time() - self.tag_cooldowns[tag_name][\"time\"])\n log.info(\n f\"{ctx.author} tried to get the '{tag_name}' tag, but the tag is on cooldown. \"\n f\"Cooldown ends in {time_left:.1f} seconds.\"\n )\n return\n\n await self._get_tags()\n\n if tag_name is not None:\n founds = await self._get_tag(tag_name)\n\n if len(founds) == 1:\n tag = founds[0]\n if ctx.channel.id not in TEST_CHANNELS:\n self.tag_cooldowns[tag_name] = {\n \"time\": time.time(),\n \"channel\": ctx.channel.id\n }\n await ctx.send(embed=Embed.from_dict(tag['embed']))\n elif founds and len(tag_name) >= 3:\n await ctx.send(embed=Embed(\n title='Did you mean ...',\n description='\\n'.join(tag['title'] for tag in founds[:10])\n ))\n\n else:\n tags = self._cache.values()\n if not tags:\n await ctx.send(embed=Embed(\n description=\"**There are no tags in the database!**\",\n colour=Colour.red()\n ))\n else:\n embed: Embed = Embed(title=\"**Current tags**\")\n await LinePaginator.paginate(\n sorted(f\"**\u00bb** {tag['title']}\" for tag in tags),\n ctx,\n embed,\n footer_text=\"To show a tag, type !tags <tagname>.\",\n empty=False,\n max_lines=15\n )\n\n @tags_group.command(name='set', aliases=('add', 's'))\n @with_role(*MODERATION_ROLES)\n async def set_command(\n self,\n ctx: Context,\n tag_name: TagNameConverter,\n *,\n tag_content: TagContentConverter,\n ) -> None:\n \"\"\"Create a new tag.\"\"\"\n body = {\n 'title': tag_name.lower().strip(),\n 'embed': {\n 'title': tag_name,\n 'description': tag_content\n }\n }\n\n await self.bot.api_client.post('bot/tags', json=body)\n self._cache[tag_name.lower()] = await self.bot.api_client.get(f'bot/tags/{tag_name}')\n\n log.debug(f\"{ctx.author} successfully added the following tag to our database: \\n\"\n f\"tag_name: {tag_name}\\n\"\n f\"tag_content: '{tag_content}'\\n\")\n\n await ctx.send(embed=Embed(\n title=\"Tag successfully added\",\n description=f\"**{tag_name}** added to tag database.\",\n colour=Colour.blurple()\n ))\n\n @tags_group.command(name='edit', aliases=('e', ))\n @with_role(*MODERATION_ROLES)\n async def edit_command(\n self,\n ctx: Context,\n tag_name: TagNameConverter,\n *,\n tag_content: TagContentConverter,\n ) -> None:\n \"\"\"Edit an existing tag.\"\"\"\n body = {\n 'embed': {\n 'title': tag_name,\n 'description': tag_content\n }\n }\n\n await self.bot.api_client.patch(f'bot/tags/{tag_name}', json=body)\n self._cache[tag_name.lower()] = await self.bot.api_client.get(f'bot/tags/{tag_name}')\n\n log.debug(f\"{ctx.author} successfully 
edited the following tag in our database: \\n\"\n f\"tag_name: {tag_name}\\n\"\n f\"tag_content: '{tag_content}'\\n\")\n\n await ctx.send(embed=Embed(\n title=\"Tag successfully edited\",\n description=f\"**{tag_name}** edited in the database.\",\n colour=Colour.blurple()\n ))\n\n @tags_group.command(name='delete', aliases=('remove', 'rm', 'd'))\n @with_role(Roles.admins, Roles.owners)\n async def delete_command(self, ctx: Context, *, tag_name: TagNameConverter) -> None:\n \"\"\"Remove a tag from the database.\"\"\"\n await self.bot.api_client.delete(f'bot/tags/{tag_name}')\n self._cache.pop(tag_name.lower(), None)\n\n log.debug(f\"{ctx.author} successfully deleted the tag called '{tag_name}'\")\n await ctx.send(embed=Embed(\n title=tag_name,\n description=f\"Tag successfully removed: {tag_name}.\",\n colour=Colour.blurple()\n ))\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Tags cog.\"\"\"\n bot.add_cog(Tags(bot))\n", "path": "bot/cogs/tags.py"}], "after_files": [{"content": "import logging\nimport re\nimport time\nfrom typing import Callable, Dict, Iterable, List, Optional\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import Cog, Context, group\n\nfrom bot.bot import Bot\nfrom bot.constants import Channels, Cooldowns, MODERATION_ROLES, Roles\nfrom bot.converters import TagContentConverter, TagNameConverter\nfrom bot.decorators import with_role\nfrom bot.pagination import LinePaginator\n\nlog = logging.getLogger(__name__)\n\nTEST_CHANNELS = (\n Channels.bot_commands,\n Channels.helpers\n)\n\nREGEX_NON_ALPHABET = re.compile(r\"[^a-z]\", re.MULTILINE & re.IGNORECASE)\n\n\nclass Tags(Cog):\n \"\"\"Save new tags and fetch existing tags.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.tag_cooldowns = {}\n\n self._cache = {}\n self._last_fetch: float = 0.0\n\n async def _get_tags(self, is_forced: bool = False) -> None:\n \"\"\"Get all tags.\"\"\"\n # refresh only when there's a more than 5m gap from last call.\n time_now: float = time.time()\n if is_forced or not self._last_fetch or time_now - self._last_fetch > 5 * 60:\n tags = await self.bot.api_client.get('bot/tags')\n self._cache = {tag['title'].lower(): tag for tag in tags}\n self._last_fetch = time_now\n\n @staticmethod\n def _fuzzy_search(search: str, target: str) -> int:\n \"\"\"A simple scoring algorithm based on how many letters are found / total, with order in mind.\"\"\"\n current, index = 0, 0\n _search = REGEX_NON_ALPHABET.sub('', search.lower())\n _targets = iter(REGEX_NON_ALPHABET.split(target.lower()))\n _target = next(_targets)\n try:\n while True:\n while index < len(_target) and _search[current] == _target[index]:\n current += 1\n index += 1\n index, _target = 0, next(_targets)\n except (StopIteration, IndexError):\n pass\n return current / len(_search) * 100\n\n def _get_suggestions(self, tag_name: str, thresholds: Optional[List[int]] = None) -> List[str]:\n \"\"\"Return a list of suggested tags.\"\"\"\n scores: Dict[str, int] = {\n tag_title: Tags._fuzzy_search(tag_name, tag['title'])\n for tag_title, tag in self._cache.items()\n }\n\n thresholds = thresholds or [100, 90, 80, 70, 60]\n\n for threshold in thresholds:\n suggestions = [\n self._cache[tag_title]\n for tag_title, matching_score in scores.items()\n if matching_score >= threshold\n ]\n if suggestions:\n return suggestions\n\n return []\n\n async def _get_tag(self, tag_name: str) -> list:\n \"\"\"Get a specific tag.\"\"\"\n await self._get_tags()\n found = [self._cache.get(tag_name.lower(), None)]\n if not found[0]:\n return 
self._get_suggestions(tag_name)\n return found\n\n async def _get_tags_via_content(self, check: Callable[[Iterable], bool], keywords: str) -> list:\n \"\"\"\n Search for tags via contents.\n\n `predicate` will be the built-in any, all, or a custom callable. Must return a bool.\n \"\"\"\n await self._get_tags()\n\n keywords_processed: List[str] = []\n for keyword in keywords.split(','):\n keyword_sanitized = keyword.strip().casefold()\n if not keyword_sanitized:\n # this happens when there are leading / trailing / consecutive comma.\n continue\n keywords_processed.append(keyword_sanitized)\n\n if not keywords_processed:\n # after sanitizing, we can end up with an empty list, for example when keywords is ','\n # in that case, we simply want to search for such keywords directly instead.\n keywords_processed = [keywords]\n\n matching_tags = []\n for tag in self._cache.values():\n if check(query in tag['embed']['description'].casefold() for query in keywords_processed):\n matching_tags.append(tag)\n\n return matching_tags\n\n async def _send_matching_tags(self, ctx: Context, keywords: str, matching_tags: list) -> None:\n \"\"\"Send the result of matching tags to user.\"\"\"\n if not matching_tags:\n pass\n elif len(matching_tags) == 1:\n await ctx.send(embed=Embed().from_dict(matching_tags[0]['embed']))\n else:\n is_plural = keywords.strip().count(' ') > 0 or keywords.strip().count(',') > 0\n embed = Embed(\n title=f\"Here are the tags containing the given keyword{'s' * is_plural}:\",\n description='\\n'.join(tag['title'] for tag in matching_tags[:10])\n )\n await LinePaginator.paginate(\n sorted(f\"**\u00bb** {tag['title']}\" for tag in matching_tags),\n ctx,\n embed,\n footer_text=\"To show a tag, type !tags <tagname>.\",\n empty=False,\n max_lines=15\n )\n\n @group(name='tags', aliases=('tag', 't'), invoke_without_command=True)\n async def tags_group(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Show all known tags, a single tag, or run a subcommand.\"\"\"\n await ctx.invoke(self.get_command, tag_name=tag_name)\n\n @tags_group.group(name='search', invoke_without_command=True)\n async def search_tag_content(self, ctx: Context, *, keywords: str) -> None:\n \"\"\"\n Search inside tags' contents for tags. Allow searching for multiple keywords separated by comma.\n\n Only search for tags that has ALL the keywords.\n \"\"\"\n matching_tags = await self._get_tags_via_content(all, keywords)\n await self._send_matching_tags(ctx, keywords, matching_tags)\n\n @search_tag_content.command(name='any')\n async def search_tag_content_any_keyword(self, ctx: Context, *, keywords: Optional[str] = None) -> None:\n \"\"\"\n Search inside tags' contents for tags. 
Allow searching for multiple keywords separated by comma.\n\n Search for tags that has ANY of the keywords.\n \"\"\"\n matching_tags = await self._get_tags_via_content(any, keywords or 'any')\n await self._send_matching_tags(ctx, keywords, matching_tags)\n\n @tags_group.command(name='get', aliases=('show', 'g'))\n async def get_command(self, ctx: Context, *, tag_name: TagNameConverter = None) -> None:\n \"\"\"Get a specified tag, or a list of all tags if no tag is specified.\"\"\"\n def _command_on_cooldown(tag_name: str) -> bool:\n \"\"\"\n Check if the command is currently on cooldown, on a per-tag, per-channel basis.\n\n The cooldown duration is set in constants.py.\n \"\"\"\n now = time.time()\n\n cooldown_conditions = (\n tag_name\n and tag_name in self.tag_cooldowns\n and (now - self.tag_cooldowns[tag_name][\"time\"]) < Cooldowns.tags\n and self.tag_cooldowns[tag_name][\"channel\"] == ctx.channel.id\n )\n\n if cooldown_conditions:\n return True\n return False\n\n if _command_on_cooldown(tag_name):\n time_left = Cooldowns.tags - (time.time() - self.tag_cooldowns[tag_name][\"time\"])\n log.info(\n f\"{ctx.author} tried to get the '{tag_name}' tag, but the tag is on cooldown. \"\n f\"Cooldown ends in {time_left:.1f} seconds.\"\n )\n return\n\n await self._get_tags()\n\n if tag_name is not None:\n founds = await self._get_tag(tag_name)\n\n if len(founds) == 1:\n tag = founds[0]\n if ctx.channel.id not in TEST_CHANNELS:\n self.tag_cooldowns[tag_name] = {\n \"time\": time.time(),\n \"channel\": ctx.channel.id\n }\n await ctx.send(embed=Embed.from_dict(tag['embed']))\n elif founds and len(tag_name) >= 3:\n await ctx.send(embed=Embed(\n title='Did you mean ...',\n description='\\n'.join(tag['title'] for tag in founds[:10])\n ))\n\n else:\n tags = self._cache.values()\n if not tags:\n await ctx.send(embed=Embed(\n description=\"**There are no tags in the database!**\",\n colour=Colour.red()\n ))\n else:\n embed: Embed = Embed(title=\"**Current tags**\")\n await LinePaginator.paginate(\n sorted(f\"**\u00bb** {tag['title']}\" for tag in tags),\n ctx,\n embed,\n footer_text=\"To show a tag, type !tags <tagname>.\",\n empty=False,\n max_lines=15\n )\n\n @tags_group.command(name='set', aliases=('add', 's'))\n @with_role(*MODERATION_ROLES)\n async def set_command(\n self,\n ctx: Context,\n tag_name: TagNameConverter,\n *,\n tag_content: TagContentConverter,\n ) -> None:\n \"\"\"Create a new tag.\"\"\"\n body = {\n 'title': tag_name.lower().strip(),\n 'embed': {\n 'title': tag_name,\n 'description': tag_content\n }\n }\n\n await self.bot.api_client.post('bot/tags', json=body)\n self._cache[tag_name.lower()] = await self.bot.api_client.get(f'bot/tags/{tag_name}')\n\n log.debug(f\"{ctx.author} successfully added the following tag to our database: \\n\"\n f\"tag_name: {tag_name}\\n\"\n f\"tag_content: '{tag_content}'\\n\")\n\n await ctx.send(embed=Embed(\n title=\"Tag successfully added\",\n description=f\"**{tag_name}** added to tag database.\",\n colour=Colour.blurple()\n ))\n\n @tags_group.command(name='edit', aliases=('e', ))\n @with_role(*MODERATION_ROLES)\n async def edit_command(\n self,\n ctx: Context,\n tag_name: TagNameConverter,\n *,\n tag_content: TagContentConverter,\n ) -> None:\n \"\"\"Edit an existing tag.\"\"\"\n body = {\n 'embed': {\n 'title': tag_name,\n 'description': tag_content\n }\n }\n\n await self.bot.api_client.patch(f'bot/tags/{tag_name}', json=body)\n self._cache[tag_name.lower()] = await self.bot.api_client.get(f'bot/tags/{tag_name}')\n\n log.debug(f\"{ctx.author} 
successfully edited the following tag in our database: \\n\"\n f\"tag_name: {tag_name}\\n\"\n f\"tag_content: '{tag_content}'\\n\")\n\n await ctx.send(embed=Embed(\n title=\"Tag successfully edited\",\n description=f\"**{tag_name}** edited in the database.\",\n colour=Colour.blurple()\n ))\n\n @tags_group.command(name='delete', aliases=('remove', 'rm', 'd'))\n @with_role(Roles.admins, Roles.owners)\n async def delete_command(self, ctx: Context, *, tag_name: TagNameConverter) -> None:\n \"\"\"Remove a tag from the database.\"\"\"\n await self.bot.api_client.delete(f'bot/tags/{tag_name}')\n self._cache.pop(tag_name.lower(), None)\n\n log.debug(f\"{ctx.author} successfully deleted the tag called '{tag_name}'\")\n await ctx.send(embed=Embed(\n title=tag_name,\n description=f\"Tag successfully removed: {tag_name}.\",\n colour=Colour.blurple()\n ))\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Tags cog.\"\"\"\n bot.add_cog(Tags(bot))\n", "path": "bot/cogs/tags.py"}]}
| 2,914 | 957 |
gh_patches_debug_1219
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-4641
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pulp_file version is set to 3.40.0.dev
**Version**
pulpcore 3.40.0
**Describe the bug**
Status API reports pulp_file version as 3.40.0.dev
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulp_file/app/__init__.py`
Content:
```
1 from pulpcore.plugin import PulpPluginAppConfig
2
3
4 class PulpFilePluginAppConfig(PulpPluginAppConfig):
5 """
6 Entry point for pulp_file plugin.
7 """
8
9 name = "pulp_file.app"
10 label = "file"
11 version = "3.40.0.dev"
12 python_package_name = "pulp_file" # TODO Add python_module_name
13 domain_compatible = True
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulp_file/app/__init__.py b/pulp_file/app/__init__.py
--- a/pulp_file/app/__init__.py
+++ b/pulp_file/app/__init__.py
@@ -8,6 +8,6 @@
name = "pulp_file.app"
label = "file"
- version = "3.40.0.dev"
+ version = "3.41.0.dev"
python_package_name = "pulp_file" # TODO Add python_module_name
domain_compatible = True
|
{"golden_diff": "diff --git a/pulp_file/app/__init__.py b/pulp_file/app/__init__.py\n--- a/pulp_file/app/__init__.py\n+++ b/pulp_file/app/__init__.py\n@@ -8,6 +8,6 @@\n \n name = \"pulp_file.app\"\n label = \"file\"\n- version = \"3.40.0.dev\"\n+ version = \"3.41.0.dev\"\n python_package_name = \"pulp_file\" # TODO Add python_module_name\n domain_compatible = True\n", "issue": "pulp_file version is set to 3.40.0.dev \n**Version**\r\npulpcore 3.40.0\r\n\r\n**Describe the bug**\r\nStatus API reports pulp_file version as 3.40.0.dev\n", "before_files": [{"content": "from pulpcore.plugin import PulpPluginAppConfig\n\n\nclass PulpFilePluginAppConfig(PulpPluginAppConfig):\n \"\"\"\n Entry point for pulp_file plugin.\n \"\"\"\n\n name = \"pulp_file.app\"\n label = \"file\"\n version = \"3.40.0.dev\"\n python_package_name = \"pulp_file\" # TODO Add python_module_name\n domain_compatible = True\n", "path": "pulp_file/app/__init__.py"}], "after_files": [{"content": "from pulpcore.plugin import PulpPluginAppConfig\n\n\nclass PulpFilePluginAppConfig(PulpPluginAppConfig):\n \"\"\"\n Entry point for pulp_file plugin.\n \"\"\"\n\n name = \"pulp_file.app\"\n label = \"file\"\n version = \"3.41.0.dev\"\n python_package_name = \"pulp_file\" # TODO Add python_module_name\n domain_compatible = True\n", "path": "pulp_file/app/__init__.py"}]}
| 423 | 121 |
gh_patches_debug_13169
|
rasdani/github-patches
|
git_diff
|
activeloopai__deeplake-1994
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Can't write objects to JSON
v3.0.14
```python
import pickle
t = ds.create_tensor(f"t/group/f", htype="json", chunk_compression="lz4")
t.append(pickle.dumps("test")) # pass any pickled object into append gets error
```
```
ValueError: Circular reference detected
```
Passing strings and such into this tensor works fine, but for some reason any pickled object or Python object that gets pickled gives the above ValueError.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deeplake/util/json.py`
Content:
```
1 from typing import Any, Dict, List, Optional, Tuple, Union
2 import numpy as np
3 from numpy import ndarray
4 import json
5 import base64
6 from deeplake.core.sample import Sample # type: ignore
7
8 Schema = Any
9
10
11 scalars = ["int", "float", "bool", "str", "list", "dict", "ndarray", "Sample"]
12 types = ["Any", "Dict", "List", "Optional", "Union"]
13
14
15 def _norm_type(typ: str):
16 typ = typ.replace("typing.", "")
17 replacements = {
18 "numpy.ndarray": "ndarray",
19 "np.ndarray": "ndarray",
20 "deeplake.core.sample.Sample": "Sample",
21 "deeplake.Sample": "Sample",
22 }
23 return replacements.get(typ, typ)
24
25
26 def _parse_schema(schema: Union[str, Schema]) -> Tuple[str, List[str]]:
27 if getattr(schema, "__module__", None) == "typing":
28 schema = str(schema)
29 validate = False
30 else:
31 validate = True
32
33 if schema in scalars:
34 return schema, []
35
36 if "[" not in schema:
37 return _norm_type(schema), []
38
39 typ, param_string = schema.split("[", 1)
40 typ = _norm_type(typ)
41 assert param_string[-1] == "]"
42 params = []
43 buff = ""
44 level = 0
45 for c in param_string:
46 if c == "[":
47 level += 1
48 buff += c
49 elif c == "]":
50 if level == 0:
51 if buff:
52 params.append(buff)
53 if validate:
54 _validate_schema(typ, params)
55 return typ, params
56 else:
57 buff += c
58 level -= 1
59 elif c == ",":
60 if level == 0:
61 params.append(buff)
62 buff = ""
63 else:
64 buff += c
65 elif c == " ":
66 continue
67 else:
68 buff += c
69 raise InvalidJsonSchemaException()
70
71
72 class InvalidJsonSchemaException(Exception):
73 pass
74
75
76 class ArgumentMismatchException(InvalidJsonSchemaException):
77 def __init__(self, typ: str, actual: int, expected: int, exact: bool = False):
78 assert actual != expected
79 gt = actual > expected
80 super(ArgumentMismatchException, self).__init__(
81 f"Too {'many' if gt else 'few'} parameters for {typ};"
82 + f" actual {actual},expected {'exatcly' if exact else ('at most' if gt else 'at least')} {expected}."
83 )
84
85
86 def _validate_schema(typ: str, params: List[str]) -> Tuple[str, List[str]]:
87 if typ in scalars:
88 return typ, params
89
90 if typ not in types:
91 raise InvalidJsonSchemaException(f"Unsupported type: {typ}")
92
93 def _err(expected_num_params: int, exact: bool = False):
94 raise ArgumentMismatchException(typ, len(params), expected_num_params, exact)
95
96 if typ == "Any":
97 if params:
98 _err(0)
99 elif typ == "Optional":
100 if len(params) > 1:
101 _err(1)
102 elif typ == "Union":
103 if len(params) == 0:
104 _err(1)
105 elif typ == "List":
106 if len(params) > 1:
107 _err(1)
108 elif typ == "Dict":
109 if len(params) not in (0, 2):
110 _err(2, True)
111 return typ, params
112
113
114 def _validate_any(obj: Any, params: List[str]):
115 assert not params
116 return True
117
118
119 def _validate_union(obj: Any, params: List[str]):
120 for schema in params:
121 if _validate_object(obj, schema):
122 return True
123 return False
124
125
126 def _validate_optional(obj: Any, params: List[str]) -> bool:
127 assert len(params) <= 1
128 if obj is None:
129 return True
130 if params:
131 return _validate_object(obj, params[0])
132 return True
133
134
135 def _validate_list(obj: Any, params: List[str]) -> bool:
136 assert len(params) <= 1
137 if not isinstance(obj, (list, tuple)):
138 return False
139 if params:
140 for item in obj:
141 if not _validate_object(item, params[0]):
142 return False
143 return True
144
145
146 def _validate_dict(obj: Any, params: List[str]) -> bool:
147 assert len(params) in (0, 2)
148 if not isinstance(obj, dict):
149 return False
150 if params:
151 assert params[0] in (
152 "str",
153 "Any",
154 ), "Only string keys are allowed for json dicts."
155 for v in obj.values():
156 if not _validate_object(v, params[1]):
157 return False
158 return True
159
160
161 def _validate_nonetype(obj: Any, params: List[str]) -> bool:
162 assert not params
163 return obj is None
164
165
166 def _validate_object(obj: Any, schema: Union[str, Schema]) -> bool:
167 typ, params = _parse_schema(schema)
168 if typ in scalars:
169 return isinstance(obj, eval(typ))
170 return globals()[f"_validate_{typ.lower()}"](obj, params)
171
172
173 class JsonValidationError(Exception):
174 pass
175
176
177 def validate_json_object(obj: Any, schema: Union[str, Schema]) -> None:
178 if obj and not _validate_object(obj, schema):
179 raise JsonValidationError()
180
181
182 def validate_json_schema(schema: str):
183 _parse_schema(schema)
184
185
186 class HubJsonEncoder(json.JSONEncoder):
187 def default(self, obj):
188 if isinstance(obj, ndarray):
189 return {
190 "_hub_custom_type": "ndarray",
191 "data": base64.b64encode(obj.tobytes()).decode(),
192 "shape": obj.shape,
193 "dtype": obj.dtype.name,
194 }
195 elif isinstance(obj, Sample):
196 if obj.compression:
197 return {
198 "_hub_custom_type": "Sample",
199 "data": base64.b64encode(obj.buffer).decode(),
200 "compression": obj.compression,
201 }
202 else:
203 return self.default(obj.array)
204 return obj
205
206
207 class HubJsonDecoder(json.JSONDecoder):
208 def __init__(self, *args, **kwargs):
209 json.JSONDecoder.__init__(self, object_hook=self.object_hook, *args, **kwargs)
210
211 def object_hook(self, obj):
212 hub_custom_type = obj.get("_hub_custom_type")
213 if hub_custom_type == "ndarray":
214 return np.frombuffer(
215 base64.b64decode(obj["data"]), dtype=obj["dtype"]
216 ).reshape(obj["shape"])
217 elif hub_custom_type == "Sample":
218 return Sample(
219 buffer=base64.b64decode(obj["data"]), compression=obj["compression"]
220 )
221 return obj
222
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deeplake/util/json.py b/deeplake/util/json.py
--- a/deeplake/util/json.py
+++ b/deeplake/util/json.py
@@ -201,6 +201,12 @@
}
else:
return self.default(obj.array)
+ elif isinstance(obj, bytes):
+ return {
+ "_hub_custom_type": "bytes",
+ "data": base64.b64encode(obj).decode(),
+ }
+
return obj
@@ -218,4 +224,6 @@
return Sample(
buffer=base64.b64decode(obj["data"]), compression=obj["compression"]
)
+ elif hub_custom_type == "bytes":
+ return base64.b64decode(obj["data"])
return obj
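
With the bytes branch in place, a pickled payload (which is just `bytes`) should round-trip through the custom JSON codec. A minimal sketch of that round trip, assuming the patched `HubJsonEncoder`/`HubJsonDecoder` from `deeplake/util/json.py` above:

```python
# Round-trip sketch (assumes the patched encoder/decoder shown above).
import json
import pickle

from deeplake.util.json import HubJsonDecoder, HubJsonEncoder

payload = pickle.dumps("test")                     # bytes, as in the issue report
encoded = json.dumps(payload, cls=HubJsonEncoder)  # {"_hub_custom_type": "bytes", "data": ...}
decoded = json.loads(encoded, cls=HubJsonDecoder)  # object_hook rebuilds the original bytes
assert decoded == payload
```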
|
{"golden_diff": "diff --git a/deeplake/util/json.py b/deeplake/util/json.py\n--- a/deeplake/util/json.py\n+++ b/deeplake/util/json.py\n@@ -201,6 +201,12 @@\n }\n else:\n return self.default(obj.array)\n+ elif isinstance(obj, bytes):\n+ return {\n+ \"_hub_custom_type\": \"bytes\",\n+ \"data\": base64.b64encode(obj).decode(),\n+ }\n+\n return obj\n \n \n@@ -218,4 +224,6 @@\n return Sample(\n buffer=base64.b64decode(obj[\"data\"]), compression=obj[\"compression\"]\n )\n+ elif hub_custom_type == \"bytes\":\n+ return base64.b64decode(obj[\"data\"])\n return obj\n", "issue": "[BUG] Can't write objects to JSON\nv3.0.14\r\n\r\n```python\r\nimport pickle\r\nt = ds.create_tensor(f\"t/group/f\", htype=\"json\", chunk_compression=\"lz4\")\r\nt.append(pickle.dumps(\"test\")) # pass any pickled object into append gets error\r\n```\r\n\r\n```\r\nValueError: Circular reference detected\r\n```\r\n\r\npassing strings and such into this tensor works fine, but for some reason any pickled object or python object that gets pickled gives the above ValueError.\n", "before_files": [{"content": "from typing import Any, Dict, List, Optional, Tuple, Union\nimport numpy as np\nfrom numpy import ndarray\nimport json\nimport base64\nfrom deeplake.core.sample import Sample # type: ignore\n\nSchema = Any\n\n\nscalars = [\"int\", \"float\", \"bool\", \"str\", \"list\", \"dict\", \"ndarray\", \"Sample\"]\ntypes = [\"Any\", \"Dict\", \"List\", \"Optional\", \"Union\"]\n\n\ndef _norm_type(typ: str):\n typ = typ.replace(\"typing.\", \"\")\n replacements = {\n \"numpy.ndarray\": \"ndarray\",\n \"np.ndarray\": \"ndarray\",\n \"deeplake.core.sample.Sample\": \"Sample\",\n \"deeplake.Sample\": \"Sample\",\n }\n return replacements.get(typ, typ)\n\n\ndef _parse_schema(schema: Union[str, Schema]) -> Tuple[str, List[str]]:\n if getattr(schema, \"__module__\", None) == \"typing\":\n schema = str(schema)\n validate = False\n else:\n validate = True\n\n if schema in scalars:\n return schema, []\n\n if \"[\" not in schema:\n return _norm_type(schema), []\n\n typ, param_string = schema.split(\"[\", 1)\n typ = _norm_type(typ)\n assert param_string[-1] == \"]\"\n params = []\n buff = \"\"\n level = 0\n for c in param_string:\n if c == \"[\":\n level += 1\n buff += c\n elif c == \"]\":\n if level == 0:\n if buff:\n params.append(buff)\n if validate:\n _validate_schema(typ, params)\n return typ, params\n else:\n buff += c\n level -= 1\n elif c == \",\":\n if level == 0:\n params.append(buff)\n buff = \"\"\n else:\n buff += c\n elif c == \" \":\n continue\n else:\n buff += c\n raise InvalidJsonSchemaException()\n\n\nclass InvalidJsonSchemaException(Exception):\n pass\n\n\nclass ArgumentMismatchException(InvalidJsonSchemaException):\n def __init__(self, typ: str, actual: int, expected: int, exact: bool = False):\n assert actual != expected\n gt = actual > expected\n super(ArgumentMismatchException, self).__init__(\n f\"Too {'many' if gt else 'few'} parameters for {typ};\"\n + f\" actual {actual},expected {'exatcly' if exact else ('at most' if gt else 'at least')} {expected}.\"\n )\n\n\ndef _validate_schema(typ: str, params: List[str]) -> Tuple[str, List[str]]:\n if typ in scalars:\n return typ, params\n\n if typ not in types:\n raise InvalidJsonSchemaException(f\"Unsupported type: {typ}\")\n\n def _err(expected_num_params: int, exact: bool = False):\n raise ArgumentMismatchException(typ, len(params), expected_num_params, exact)\n\n if typ == \"Any\":\n if params:\n _err(0)\n elif typ == \"Optional\":\n if len(params) > 1:\n _err(1)\n elif typ 
== \"Union\":\n if len(params) == 0:\n _err(1)\n elif typ == \"List\":\n if len(params) > 1:\n _err(1)\n elif typ == \"Dict\":\n if len(params) not in (0, 2):\n _err(2, True)\n return typ, params\n\n\ndef _validate_any(obj: Any, params: List[str]):\n assert not params\n return True\n\n\ndef _validate_union(obj: Any, params: List[str]):\n for schema in params:\n if _validate_object(obj, schema):\n return True\n return False\n\n\ndef _validate_optional(obj: Any, params: List[str]) -> bool:\n assert len(params) <= 1\n if obj is None:\n return True\n if params:\n return _validate_object(obj, params[0])\n return True\n\n\ndef _validate_list(obj: Any, params: List[str]) -> bool:\n assert len(params) <= 1\n if not isinstance(obj, (list, tuple)):\n return False\n if params:\n for item in obj:\n if not _validate_object(item, params[0]):\n return False\n return True\n\n\ndef _validate_dict(obj: Any, params: List[str]) -> bool:\n assert len(params) in (0, 2)\n if not isinstance(obj, dict):\n return False\n if params:\n assert params[0] in (\n \"str\",\n \"Any\",\n ), \"Only string keys are allowed for json dicts.\"\n for v in obj.values():\n if not _validate_object(v, params[1]):\n return False\n return True\n\n\ndef _validate_nonetype(obj: Any, params: List[str]) -> bool:\n assert not params\n return obj is None\n\n\ndef _validate_object(obj: Any, schema: Union[str, Schema]) -> bool:\n typ, params = _parse_schema(schema)\n if typ in scalars:\n return isinstance(obj, eval(typ))\n return globals()[f\"_validate_{typ.lower()}\"](obj, params)\n\n\nclass JsonValidationError(Exception):\n pass\n\n\ndef validate_json_object(obj: Any, schema: Union[str, Schema]) -> None:\n if obj and not _validate_object(obj, schema):\n raise JsonValidationError()\n\n\ndef validate_json_schema(schema: str):\n _parse_schema(schema)\n\n\nclass HubJsonEncoder(json.JSONEncoder):\n def default(self, obj):\n if isinstance(obj, ndarray):\n return {\n \"_hub_custom_type\": \"ndarray\",\n \"data\": base64.b64encode(obj.tobytes()).decode(),\n \"shape\": obj.shape,\n \"dtype\": obj.dtype.name,\n }\n elif isinstance(obj, Sample):\n if obj.compression:\n return {\n \"_hub_custom_type\": \"Sample\",\n \"data\": base64.b64encode(obj.buffer).decode(),\n \"compression\": obj.compression,\n }\n else:\n return self.default(obj.array)\n return obj\n\n\nclass HubJsonDecoder(json.JSONDecoder):\n def __init__(self, *args, **kwargs):\n json.JSONDecoder.__init__(self, object_hook=self.object_hook, *args, **kwargs)\n\n def object_hook(self, obj):\n hub_custom_type = obj.get(\"_hub_custom_type\")\n if hub_custom_type == \"ndarray\":\n return np.frombuffer(\n base64.b64decode(obj[\"data\"]), dtype=obj[\"dtype\"]\n ).reshape(obj[\"shape\"])\n elif hub_custom_type == \"Sample\":\n return Sample(\n buffer=base64.b64decode(obj[\"data\"]), compression=obj[\"compression\"]\n )\n return obj\n", "path": "deeplake/util/json.py"}], "after_files": [{"content": "from typing import Any, Dict, List, Optional, Tuple, Union\nimport numpy as np\nfrom numpy import ndarray\nimport json\nimport base64\nfrom deeplake.core.sample import Sample # type: ignore\n\nSchema = Any\n\n\nscalars = [\"int\", \"float\", \"bool\", \"str\", \"list\", \"dict\", \"ndarray\", \"Sample\"]\ntypes = [\"Any\", \"Dict\", \"List\", \"Optional\", \"Union\"]\n\n\ndef _norm_type(typ: str):\n typ = typ.replace(\"typing.\", \"\")\n replacements = {\n \"numpy.ndarray\": \"ndarray\",\n \"np.ndarray\": \"ndarray\",\n \"deeplake.core.sample.Sample\": \"Sample\",\n \"deeplake.Sample\": \"Sample\",\n }\n 
return replacements.get(typ, typ)\n\n\ndef _parse_schema(schema: Union[str, Schema]) -> Tuple[str, List[str]]:\n if getattr(schema, \"__module__\", None) == \"typing\":\n schema = str(schema)\n validate = False\n else:\n validate = True\n\n if schema in scalars:\n return schema, []\n\n if \"[\" not in schema:\n return _norm_type(schema), []\n\n typ, param_string = schema.split(\"[\", 1)\n typ = _norm_type(typ)\n assert param_string[-1] == \"]\"\n params = []\n buff = \"\"\n level = 0\n for c in param_string:\n if c == \"[\":\n level += 1\n buff += c\n elif c == \"]\":\n if level == 0:\n if buff:\n params.append(buff)\n if validate:\n _validate_schema(typ, params)\n return typ, params\n else:\n buff += c\n level -= 1\n elif c == \",\":\n if level == 0:\n params.append(buff)\n buff = \"\"\n else:\n buff += c\n elif c == \" \":\n continue\n else:\n buff += c\n raise InvalidJsonSchemaException()\n\n\nclass InvalidJsonSchemaException(Exception):\n pass\n\n\nclass ArgumentMismatchException(InvalidJsonSchemaException):\n def __init__(self, typ: str, actual: int, expected: int, exact: bool = False):\n assert actual != expected\n gt = actual > expected\n super(ArgumentMismatchException, self).__init__(\n f\"Too {'many' if gt else 'few'} parameters for {typ};\"\n + f\" actual {actual},expected {'exatcly' if exact else ('at most' if gt else 'at least')} {expected}.\"\n )\n\n\ndef _validate_schema(typ: str, params: List[str]) -> Tuple[str, List[str]]:\n if typ in scalars:\n return typ, params\n\n if typ not in types:\n raise InvalidJsonSchemaException(f\"Unsupported type: {typ}\")\n\n def _err(expected_num_params: int, exact: bool = False):\n raise ArgumentMismatchException(typ, len(params), expected_num_params, exact)\n\n if typ == \"Any\":\n if params:\n _err(0)\n elif typ == \"Optional\":\n if len(params) > 1:\n _err(1)\n elif typ == \"Union\":\n if len(params) == 0:\n _err(1)\n elif typ == \"List\":\n if len(params) > 1:\n _err(1)\n elif typ == \"Dict\":\n if len(params) not in (0, 2):\n _err(2, True)\n return typ, params\n\n\ndef _validate_any(obj: Any, params: List[str]):\n assert not params\n return True\n\n\ndef _validate_union(obj: Any, params: List[str]):\n for schema in params:\n if _validate_object(obj, schema):\n return True\n return False\n\n\ndef _validate_optional(obj: Any, params: List[str]) -> bool:\n assert len(params) <= 1\n if obj is None:\n return True\n if params:\n return _validate_object(obj, params[0])\n return True\n\n\ndef _validate_list(obj: Any, params: List[str]) -> bool:\n assert len(params) <= 1\n if not isinstance(obj, (list, tuple)):\n return False\n if params:\n for item in obj:\n if not _validate_object(item, params[0]):\n return False\n return True\n\n\ndef _validate_dict(obj: Any, params: List[str]) -> bool:\n assert len(params) in (0, 2)\n if not isinstance(obj, dict):\n return False\n if params:\n assert params[0] in (\n \"str\",\n \"Any\",\n ), \"Only string keys are allowed for json dicts.\"\n for v in obj.values():\n if not _validate_object(v, params[1]):\n return False\n return True\n\n\ndef _validate_nonetype(obj: Any, params: List[str]) -> bool:\n assert not params\n return obj is None\n\n\ndef _validate_object(obj: Any, schema: Union[str, Schema]) -> bool:\n typ, params = _parse_schema(schema)\n if typ in scalars:\n return isinstance(obj, eval(typ))\n return globals()[f\"_validate_{typ.lower()}\"](obj, params)\n\n\nclass JsonValidationError(Exception):\n pass\n\n\ndef validate_json_object(obj: Any, schema: Union[str, Schema]) -> None:\n if obj and not 
_validate_object(obj, schema):\n raise JsonValidationError()\n\n\ndef validate_json_schema(schema: str):\n _parse_schema(schema)\n\n\nclass HubJsonEncoder(json.JSONEncoder):\n def default(self, obj):\n if isinstance(obj, ndarray):\n return {\n \"_hub_custom_type\": \"ndarray\",\n \"data\": base64.b64encode(obj.tobytes()).decode(),\n \"shape\": obj.shape,\n \"dtype\": obj.dtype.name,\n }\n elif isinstance(obj, Sample):\n if obj.compression:\n return {\n \"_hub_custom_type\": \"Sample\",\n \"data\": base64.b64encode(obj.buffer).decode(),\n \"compression\": obj.compression,\n }\n else:\n return self.default(obj.array)\n elif isinstance(obj, bytes):\n return {\n \"_hub_custom_type\": \"bytes\",\n \"data\": base64.b64encode(obj).decode(),\n }\n\n return obj\n\n\nclass HubJsonDecoder(json.JSONDecoder):\n def __init__(self, *args, **kwargs):\n json.JSONDecoder.__init__(self, object_hook=self.object_hook, *args, **kwargs)\n\n def object_hook(self, obj):\n hub_custom_type = obj.get(\"_hub_custom_type\")\n if hub_custom_type == \"ndarray\":\n return np.frombuffer(\n base64.b64decode(obj[\"data\"]), dtype=obj[\"dtype\"]\n ).reshape(obj[\"shape\"])\n elif hub_custom_type == \"Sample\":\n return Sample(\n buffer=base64.b64decode(obj[\"data\"]), compression=obj[\"compression\"]\n )\n elif hub_custom_type == \"bytes\":\n return base64.b64decode(obj[\"data\"])\n return obj\n", "path": "deeplake/util/json.py"}]}
| 2,422 | 181 |
gh_patches_debug_3577
|
rasdani/github-patches
|
git_diff
|
python__mypy-2596
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`Tuple[()]` is occasionally converted to `Tuple[Any, ...]`
This is most obvious when `Tuple[()]` is passed through a `Callable`:
```
from typing import *
Type = Callable[[Tuple[()]], int]
x = "foo" # type: Type
```
Results in:
```
Incompatible types in assignment (expression has type "str", variable has type Callable[[Tuple[Any, ...]], int])
```
As a side note, `Type = Tuple[()]` also appears to give a weird error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mypy/exprtotype.py`
Content:
```
1 """Translate an Expression to a Type value."""
2
3 from mypy.nodes import (
4 Expression, NameExpr, MemberExpr, IndexExpr, TupleExpr,
5 ListExpr, StrExpr, BytesExpr, UnicodeExpr, EllipsisExpr
6 )
7 from mypy.parsetype import parse_str_as_type, TypeParseError
8 from mypy.types import Type, UnboundType, TypeList, EllipsisType
9
10
11 class TypeTranslationError(Exception):
12 """Exception raised when an expression is not valid as a type."""
13
14
15 def expr_to_unanalyzed_type(expr: Expression) -> Type:
16 """Translate an expression to the corresponding type.
17
18 The result is not semantically analyzed. It can be UnboundType or TypeList.
19 Raise TypeTranslationError if the expression cannot represent a type.
20 """
21 if isinstance(expr, NameExpr):
22 name = expr.name
23 return UnboundType(name, line=expr.line, column=expr.column)
24 elif isinstance(expr, MemberExpr):
25 fullname = get_member_expr_fullname(expr)
26 if fullname:
27 return UnboundType(fullname, line=expr.line, column=expr.column)
28 else:
29 raise TypeTranslationError()
30 elif isinstance(expr, IndexExpr):
31 base = expr_to_unanalyzed_type(expr.base)
32 if isinstance(base, UnboundType):
33 if base.args:
34 raise TypeTranslationError()
35 if isinstance(expr.index, TupleExpr):
36 args = expr.index.items
37 else:
38 args = [expr.index]
39 base.args = [expr_to_unanalyzed_type(arg) for arg in args]
40 return base
41 else:
42 raise TypeTranslationError()
43 elif isinstance(expr, ListExpr):
44 return TypeList([expr_to_unanalyzed_type(t) for t in expr.items],
45 line=expr.line, column=expr.column)
46 elif isinstance(expr, (StrExpr, BytesExpr, UnicodeExpr)):
47 # Parse string literal type.
48 try:
49 result = parse_str_as_type(expr.value, expr.line)
50 except TypeParseError:
51 raise TypeTranslationError()
52 return result
53 elif isinstance(expr, EllipsisExpr):
54 return EllipsisType(expr.line)
55 else:
56 raise TypeTranslationError()
57
58
59 def get_member_expr_fullname(expr: MemberExpr) -> str:
60 """Return the qualified name representation of a member expression.
61
62 Return a string of form foo.bar, foo.bar.baz, or similar, or None if the
63 argument cannot be represented in this form.
64 """
65 if isinstance(expr.expr, NameExpr):
66 initial = expr.expr.name
67 elif isinstance(expr.expr, MemberExpr):
68 initial = get_member_expr_fullname(expr.expr)
69 else:
70 return None
71 return '{}.{}'.format(initial, expr.name)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mypy/exprtotype.py b/mypy/exprtotype.py
--- a/mypy/exprtotype.py
+++ b/mypy/exprtotype.py
@@ -37,6 +37,8 @@
else:
args = [expr.index]
base.args = [expr_to_unanalyzed_type(arg) for arg in args]
+ if not base.args:
+ base.empty_tuple_index = True
return base
else:
raise TypeTranslationError()
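
The added `empty_tuple_index` flag records that an explicit empty index (`Tuple[()]`) was written, so later analysis can distinguish it from a bare `Tuple` instead of degrading it to `Tuple[Any, ...]`. A minimal sketch of the translation step, building the AST nodes directly (names from `mypy/nodes.py`; assumes the patch above is applied):

```python
# Tuple[()] parses to an IndexExpr whose index is an empty TupleExpr.
from mypy.nodes import IndexExpr, NameExpr, TupleExpr
from mypy.exprtotype import expr_to_unanalyzed_type

expr = IndexExpr(NameExpr("Tuple"), TupleExpr([]))
typ = expr_to_unanalyzed_type(expr)
assert typ.args == []          # no arguments were synthesized
assert typ.empty_tuple_index   # set by the patch, marking an explicit ()
```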
|
{"golden_diff": "diff --git a/mypy/exprtotype.py b/mypy/exprtotype.py\n--- a/mypy/exprtotype.py\n+++ b/mypy/exprtotype.py\n@@ -37,6 +37,8 @@\n else:\n args = [expr.index]\n base.args = [expr_to_unanalyzed_type(arg) for arg in args]\n+ if not base.args:\n+ base.empty_tuple_index = True\n return base\n else:\n raise TypeTranslationError()\n", "issue": "`Tuple[()]` is occasionally converted to `Tuple[Any, ...]`\nMost obvious when the `Tuple[()]` is passed through a Callable\r\n```\r\nfrom typing import *\r\n\r\nType = Callable[[Tuple[()]], int]\r\nx = \"foo\" # type: Type\r\n```\r\nResults in:\r\n```\r\nIncompatible types in assignment (expression has type \"str\", variable has type Callable[[Tuple[Any, ...]], int])\r\n```\r\n\r\nAs a side note,\r\n```Type = Tuple[()]```\r\nAlso appears to give a weird error.\n", "before_files": [{"content": "\"\"\"Translate an Expression to a Type value.\"\"\"\n\nfrom mypy.nodes import (\n Expression, NameExpr, MemberExpr, IndexExpr, TupleExpr,\n ListExpr, StrExpr, BytesExpr, UnicodeExpr, EllipsisExpr\n)\nfrom mypy.parsetype import parse_str_as_type, TypeParseError\nfrom mypy.types import Type, UnboundType, TypeList, EllipsisType\n\n\nclass TypeTranslationError(Exception):\n \"\"\"Exception raised when an expression is not valid as a type.\"\"\"\n\n\ndef expr_to_unanalyzed_type(expr: Expression) -> Type:\n \"\"\"Translate an expression to the corresponding type.\n\n The result is not semantically analyzed. It can be UnboundType or TypeList.\n Raise TypeTranslationError if the expression cannot represent a type.\n \"\"\"\n if isinstance(expr, NameExpr):\n name = expr.name\n return UnboundType(name, line=expr.line, column=expr.column)\n elif isinstance(expr, MemberExpr):\n fullname = get_member_expr_fullname(expr)\n if fullname:\n return UnboundType(fullname, line=expr.line, column=expr.column)\n else:\n raise TypeTranslationError()\n elif isinstance(expr, IndexExpr):\n base = expr_to_unanalyzed_type(expr.base)\n if isinstance(base, UnboundType):\n if base.args:\n raise TypeTranslationError()\n if isinstance(expr.index, TupleExpr):\n args = expr.index.items\n else:\n args = [expr.index]\n base.args = [expr_to_unanalyzed_type(arg) for arg in args]\n return base\n else:\n raise TypeTranslationError()\n elif isinstance(expr, ListExpr):\n return TypeList([expr_to_unanalyzed_type(t) for t in expr.items],\n line=expr.line, column=expr.column)\n elif isinstance(expr, (StrExpr, BytesExpr, UnicodeExpr)):\n # Parse string literal type.\n try:\n result = parse_str_as_type(expr.value, expr.line)\n except TypeParseError:\n raise TypeTranslationError()\n return result\n elif isinstance(expr, EllipsisExpr):\n return EllipsisType(expr.line)\n else:\n raise TypeTranslationError()\n\n\ndef get_member_expr_fullname(expr: MemberExpr) -> str:\n \"\"\"Return the qualified name representation of a member expression.\n\n Return a string of form foo.bar, foo.bar.baz, or similar, or None if the\n argument cannot be represented in this form.\n \"\"\"\n if isinstance(expr.expr, NameExpr):\n initial = expr.expr.name\n elif isinstance(expr.expr, MemberExpr):\n initial = get_member_expr_fullname(expr.expr)\n else:\n return None\n return '{}.{}'.format(initial, expr.name)\n", "path": "mypy/exprtotype.py"}], "after_files": [{"content": "\"\"\"Translate an Expression to a Type value.\"\"\"\n\nfrom mypy.nodes import (\n Expression, NameExpr, MemberExpr, IndexExpr, TupleExpr,\n ListExpr, StrExpr, BytesExpr, UnicodeExpr, EllipsisExpr\n)\nfrom mypy.parsetype import parse_str_as_type, 
TypeParseError\nfrom mypy.types import Type, UnboundType, TypeList, EllipsisType\n\n\nclass TypeTranslationError(Exception):\n \"\"\"Exception raised when an expression is not valid as a type.\"\"\"\n\n\ndef expr_to_unanalyzed_type(expr: Expression) -> Type:\n \"\"\"Translate an expression to the corresponding type.\n\n The result is not semantically analyzed. It can be UnboundType or TypeList.\n Raise TypeTranslationError if the expression cannot represent a type.\n \"\"\"\n if isinstance(expr, NameExpr):\n name = expr.name\n return UnboundType(name, line=expr.line, column=expr.column)\n elif isinstance(expr, MemberExpr):\n fullname = get_member_expr_fullname(expr)\n if fullname:\n return UnboundType(fullname, line=expr.line, column=expr.column)\n else:\n raise TypeTranslationError()\n elif isinstance(expr, IndexExpr):\n base = expr_to_unanalyzed_type(expr.base)\n if isinstance(base, UnboundType):\n if base.args:\n raise TypeTranslationError()\n if isinstance(expr.index, TupleExpr):\n args = expr.index.items\n else:\n args = [expr.index]\n base.args = [expr_to_unanalyzed_type(arg) for arg in args]\n if not base.args:\n base.empty_tuple_index = True\n return base\n else:\n raise TypeTranslationError()\n elif isinstance(expr, ListExpr):\n return TypeList([expr_to_unanalyzed_type(t) for t in expr.items],\n line=expr.line, column=expr.column)\n elif isinstance(expr, (StrExpr, BytesExpr, UnicodeExpr)):\n # Parse string literal type.\n try:\n result = parse_str_as_type(expr.value, expr.line)\n except TypeParseError:\n raise TypeTranslationError()\n return result\n elif isinstance(expr, EllipsisExpr):\n return EllipsisType(expr.line)\n else:\n raise TypeTranslationError()\n\n\ndef get_member_expr_fullname(expr: MemberExpr) -> str:\n \"\"\"Return the qualified name representation of a member expression.\n\n Return a string of form foo.bar, foo.bar.baz, or similar, or None if the\n argument cannot be represented in this form.\n \"\"\"\n if isinstance(expr.expr, NameExpr):\n initial = expr.expr.name\n elif isinstance(expr.expr, MemberExpr):\n initial = get_member_expr_fullname(expr.expr)\n else:\n return None\n return '{}.{}'.format(initial, expr.name)\n", "path": "mypy/exprtotype.py"}]}
| 1,085 | 109 |
gh_patches_debug_20406
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-1413
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
URLs with trailing commas get included when Sopel v6.5.x parses them.
```
Examples:
[02:46pm] <Ant> http://www.tv.com/shows/family-guy/da-boom-25450/,
02:46PM <Crushinator> [ Not found - TV.com ] - www.tv.com
02:47PM <URL> [ Not found - TV.com ] - www.tv.com
Periods had no problems:
[02:48pm] <Ant> http://www.tv.com/shows/family-guy/da-boom-25450/.
02:48PM <URL> [ Family Guy - Season 2, Episode 3: Da Boom - TV.com ] -
www.tv.com
02:48PM <Crushinator> [ Family Guy - Season 2, Episode 3: Da Boom - TV.com ] -
www.tv.com
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/url.py`
Content:
```
1 # coding=utf-8
2 """URL title module"""
3 # Copyright 2010-2011, Michael Yanovich, yanovich.net, Kenneth Sham
4 # Copyright 2012-2013 Elsie Powell
5 # Copyright 2013 Lior Ramati ([email protected])
6 # Copyright © 2014 Elad Alfassa <[email protected]>
7 # Licensed under the Eiffel Forum License 2.
8 from __future__ import unicode_literals, absolute_import, print_function, division
9
10 import re
11 from sopel import web, tools, __version__
12 from sopel.module import commands, rule, example
13 from sopel.config.types import ValidatedAttribute, ListAttribute, StaticSection
14
15 import requests
16
17 USER_AGENT = 'Sopel/{} (https://sopel.chat)'.format(__version__)
18 default_headers = {'User-Agent': USER_AGENT}
19 find_urls = None
20 # These are used to clean up the title tag before actually parsing it. Not the
21 # world's best way to do this, but it'll do for now.
22 title_tag_data = re.compile('<(/?)title( [^>]+)?>', re.IGNORECASE)
23 quoted_title = re.compile('[\'"]<title>[\'"]', re.IGNORECASE)
24 # This is another regex that presumably does something important.
25 re_dcc = re.compile(r'(?i)dcc\ssend')
26 # This sets the maximum number of bytes that should be read in order to find
27 # the title. We don't want it too high, or a link to a big file/stream will
28 # just keep downloading until there's no more memory. 640k ought to be enough
29 # for anybody.
30 max_bytes = 655360
31
32
33 class UrlSection(StaticSection):
34 # TODO some validation rules maybe?
35 exclude = ListAttribute('exclude')
36 exclusion_char = ValidatedAttribute('exclusion_char', default='!')
37 shorten_url_length = ValidatedAttribute(
38 'shorten_url_length', int, default=0)
39
40
41 def configure(config):
42 config.define_section('url', UrlSection)
43 config.url.configure_setting(
44 'exclude',
45 'Enter regular expressions for each URL you would like to exclude.'
46 )
47 config.url.configure_setting(
48 'exclusion_char',
49 'Enter a character which can be prefixed to suppress URL titling'
50 )
51 config.url.configure_setting(
52 'shorten_url_length',
53 'Enter how many characters a URL should be before the bot puts a'
54 ' shorter version of the URL in the title as a TinyURL link'
55 ' (0 to disable)'
56 )
57
58
59 def setup(bot):
60 global find_urls
61
62 bot.config.define_section('url', UrlSection)
63
64 if bot.config.url.exclude:
65 regexes = [re.compile(s) for s in bot.config.url.exclude]
66 else:
67 regexes = []
68
69 # We're keeping these in their own list, rather than putting then in the
70 # callbacks list because 1, it's easier to deal with modules that are still
71 # using this list, and not the newer callbacks list and 2, having a lambda
72 # just to pass is kinda ugly.
73 if not bot.memory.contains('url_exclude'):
74 bot.memory['url_exclude'] = regexes
75 else:
76 exclude = bot.memory['url_exclude']
77 if regexes:
78 exclude.extend(regexes)
79 bot.memory['url_exclude'] = exclude
80
81 # Ensure that url_callbacks and last_seen_url are in memory
82 if not bot.memory.contains('url_callbacks'):
83 bot.memory['url_callbacks'] = tools.SopelMemory()
84 if not bot.memory.contains('last_seen_url'):
85 bot.memory['last_seen_url'] = tools.SopelMemory()
86
87 def find_func(text):
88 re_url = r'(?u)((?<!%s)(?:http|https|ftp)(?::\/\/\S+))'\
89 % (bot.config.url.exclusion_char)
90 r = re.compile(re_url, re.IGNORECASE)
91
92 urls = re.findall(r, text)
93 return urls
94
95 find_urls = find_func
96
97
98 @commands('title')
99 @example('.title http://google.com', '[ Google ] - google.com')
100 def title_command(bot, trigger):
101 """
102 Show the title or URL information for the given URL, or the last URL seen
103 in this channel.
104 """
105 if not trigger.group(2):
106 if trigger.sender not in bot.memory['last_seen_url']:
107 return
108 matched = check_callbacks(bot, trigger,
109 bot.memory['last_seen_url'][trigger.sender],
110 True)
111 if matched:
112 return
113 else:
114 urls = [bot.memory['last_seen_url'][trigger.sender]]
115 else:
116 urls = find_urls(trigger)
117
118 results = process_urls(bot, trigger, urls)
119 for title, domain, tinyurl in results[:4]:
120 message = '[ %s ] - %s' % (title, domain)
121 if tinyurl:
122 message += ' ( %s )' % tinyurl
123 bot.reply(message)
124
125
126 @rule(r'(?u).*(https?://\S+).*')
127 def title_auto(bot, trigger):
128 """
129 Automatically show titles for URLs. For shortened URLs/redirects, find
130 where the URL redirects to and show the title for that (or call a function
131 from another module to give more information).
132 """
133 if re.match(bot.config.core.prefix + 'title', trigger):
134 return
135
136 # Avoid fetching known malicious links
137 if 'safety_cache' in bot.memory and trigger in bot.memory['safety_cache']:
138 if bot.memory['safety_cache'][trigger]['positives'] > 1:
139 return
140
141 urls = find_urls(trigger)
142 if len(urls) == 0:
143 return
144
145 results = process_urls(bot, trigger, urls)
146 bot.memory['last_seen_url'][trigger.sender] = urls[-1]
147
148 for title, domain, tinyurl in results[:4]:
149 message = '[ %s ] - %s' % (title, domain)
150 if tinyurl:
151 message += ' ( %s )' % tinyurl
152 # Guard against responding to other instances of this bot.
153 if message != trigger:
154 bot.say(message)
155
156
157 def process_urls(bot, trigger, urls):
158 """
159 For each URL in the list, ensure that it isn't handled by another module.
160 If not, find where it redirects to, if anywhere. If that redirected URL
161 should be handled by another module, dispatch the callback for it.
162 Return a list of (title, hostname) tuples for each URL which is not handled by
163 another module.
164 """
165
166 results = []
167 shorten_url_length = bot.config.url.shorten_url_length
168 for url in urls:
169 if not url.startswith(bot.config.url.exclusion_char):
170 # Magic stuff to account for international domain names
171 try:
172 url = web.iri_to_uri(url)
173 except Exception: # TODO: Be specific
174 pass
175 # First, check that the URL we got doesn't match
176 matched = check_callbacks(bot, trigger, url, False)
177 if matched:
178 continue
179 # If the URL is over bot.config.url.shorten_url_length,
180 # shorten the URL
181 tinyurl = None
182 if (shorten_url_length > 0) and (len(url) > shorten_url_length):
183 # Check bot memory to see if the shortened URL is already in
184 # memory
185 if not bot.memory.contains('shortened_urls'):
186 # Initialize shortened_urls as a dict if it doesn't exist.
187 bot.memory['shortened_urls'] = tools.SopelMemory()
188 if bot.memory['shortened_urls'].contains(url):
189 tinyurl = bot.memory['shortened_urls'][url]
190 else:
191 tinyurl = get_tinyurl(url)
192 bot.memory['shortened_urls'][url] = tinyurl
193 # Finally, actually show the URL
194 title = find_title(url, verify=bot.config.core.verify_ssl)
195 if title:
196 results.append((title, get_hostname(url), tinyurl))
197 return results
198
199
200 def check_callbacks(bot, trigger, url, run=True):
201 """
202 Check the given URL against the callbacks list. If it matches, and ``run``
203 is given as ``True``, run the callback function, otherwise pass. Returns
204 ``True`` if the url matched anything in the callbacks list.
205 """
206 # Check if it matches the exclusion list first
207 matched = any(regex.search(url) for regex in bot.memory['url_exclude'])
208 # Then, check if there's anything in the callback list
209 for regex, function in tools.iteritems(bot.memory['url_callbacks']):
210 match = regex.search(url)
211 if match:
212 # Always run ones from @url; they don't run on their own.
213 if run or hasattr(function, 'url_regex'):
214 function(bot, trigger, match)
215 matched = True
216 return matched
217
218
219 def find_title(url, verify=True):
220 """Return the title for the given URL."""
221 try:
222 response = requests.get(url, stream=True, verify=verify,
223 headers=default_headers)
224 content = b''
225 for byte in response.iter_content(chunk_size=512):
226 content += byte
227 if b'</title>' in content or len(content) > max_bytes:
228 break
229 content = content.decode('utf-8', errors='ignore')
230 # Need to close the connection because we have not read all
231 # the data
232 response.close()
233 except requests.exceptions.ConnectionError:
234 return None
235
236 # Some cleanup that I don't really grok, but was in the original, so
237 # we'll keep it (with the compiled regexes made global) for now.
238 content = title_tag_data.sub(r'<\1title>', content)
239 content = quoted_title.sub('', content)
240
241 start = content.rfind('<title>')
242 end = content.rfind('</title>')
243 if start == -1 or end == -1:
244 return
245 title = web.decode(content[start + 7:end])
246 title = title.strip()[:200]
247
248 title = ' '.join(title.split()) # cleanly remove multiple spaces
249
250 # More cryptic regex substitutions. This one looks to be myano's invention.
251 title = re_dcc.sub('', title)
252
253 return title or None
254
255
256 def get_hostname(url):
257 idx = 7
258 if url.startswith('https://'):
259 idx = 8
260 elif url.startswith('ftp://'):
261 idx = 6
262 hostname = url[idx:]
263 slash = hostname.find('/')
264 if slash != -1:
265 hostname = hostname[:slash]
266 return hostname
267
268
269 def get_tinyurl(url):
270 """ Returns a shortened tinyURL link of the URL. """
271 tinyurl = "https://tinyurl.com/api-create.php?url=%s" % url
272 try:
273 res = requests.get(tinyurl)
274 res.raise_for_status()
275 except requests.exceptions.RequestException:
276 return None
277 # Replace text output with https instead of http to make the
278 # result an HTTPS link.
279 return res.text.replace("http://", "https://")
280
281
282 if __name__ == "__main__":
283 from sopel.test_tools import run_example_tests
284 run_example_tests(__file__)
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/modules/url.py b/sopel/modules/url.py
--- a/sopel/modules/url.py
+++ b/sopel/modules/url.py
@@ -84,12 +84,26 @@
if not bot.memory.contains('last_seen_url'):
bot.memory['last_seen_url'] = tools.SopelMemory()
- def find_func(text):
+ def find_func(text, clean=False):
+ def trim_url(url):
+ # clean trailing sentence- or clause-ending punctuation
+ while url[-1] in '.,?!\'":;':
+ url = url[:-1]
+
+ # clean unmatched parentheses/braces/brackets
+ for (opener, closer) in [('(', ')'), ('[', ']'), ('{', '}'), ('<', '>')]:
+ if url[-1] is closer and url.count(opener) < url.count(closer):
+ url = url[:-1]
+
+ return url
+
re_url = r'(?u)((?<!%s)(?:http|https|ftp)(?::\/\/\S+))'\
% (bot.config.url.exclusion_char)
r = re.compile(re_url, re.IGNORECASE)
urls = re.findall(r, text)
+ if clean:
+ urls = [trim_url(url) for url in urls]
return urls
find_urls = find_func
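
The new `clean` path trims sentence-ending punctuation and unmatched closing brackets from detected URLs before they are fetched. A standalone sketch of that `trim_url` logic (re-implemented here for illustration, using `==` rather than the diff's `is` comparison):

```python
# Illustration of the trailing-character cleanup added to find_func above.
def trim_url(url):
    # strip trailing sentence- or clause-ending punctuation
    while url[-1] in '.,?!\'":;':
        url = url[:-1]
    # strip an unmatched closing parenthesis/bracket/brace/angle bracket
    for opener, closer in [('(', ')'), ('[', ']'), ('{', '}'), ('<', '>')]:
        if url[-1] == closer and url.count(opener) < url.count(closer):
            url = url[:-1]
    return url

print(trim_url('http://www.tv.com/shows/family-guy/da-boom-25450/,'))
# http://www.tv.com/shows/family-guy/da-boom-25450/
print(trim_url('https://en.wikipedia.org/wiki/Example_(disambiguation)'))
# https://en.wikipedia.org/wiki/Example_(disambiguation)  (balanced parens kept)
```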
|
{"golden_diff": "diff --git a/sopel/modules/url.py b/sopel/modules/url.py\n--- a/sopel/modules/url.py\n+++ b/sopel/modules/url.py\n@@ -84,12 +84,26 @@\n if not bot.memory.contains('last_seen_url'):\n bot.memory['last_seen_url'] = tools.SopelMemory()\n \n- def find_func(text):\n+ def find_func(text, clean=False):\n+ def trim_url(url):\n+ # clean trailing sentence- or clause-ending punctuation\n+ while url[-1] in '.,?!\\'\":;':\n+ url = url[:-1]\n+\n+ # clean unmatched parentheses/braces/brackets\n+ for (opener, closer) in [('(', ')'), ('[', ']'), ('{', '}'), ('<', '>')]:\n+ if url[-1] is closer and url.count(opener) < url.count(closer):\n+ url = url[:-1]\n+\n+ return url\n+\n re_url = r'(?u)((?<!%s)(?:http|https|ftp)(?::\\/\\/\\S+))'\\\n % (bot.config.url.exclusion_char)\n r = re.compile(re_url, re.IGNORECASE)\n \n urls = re.findall(r, text)\n+ if clean:\n+ urls = [trim_url(url) for url in urls]\n return urls\n \n find_urls = find_func\n", "issue": "URLs with ending commas get included when Sopel v6.5.x parse them.\n```\r\nExamples:\r\n\r\n[02:46pm] <Ant> http://www.tv.com/shows/family-guy/da-boom-25450/,\r\n02:46PM <Crushinator> [ Not found - TV.com ] - www.tv.com\r\n02:47PM <URL> [ Not found - TV.com ] - www.tv.com\r\n\r\nPeriods had no problems:\r\n[02:48pm] <Ant> http://www.tv.com/shows/family-guy/da-boom-25450/.\r\n02:48PM <URL> [ Family Guy - Season 2, Episode 3: Da Boom - TV.com ] -\r\n www.tv.com\r\n02:48PM <Crushinator> [ Family Guy - Season 2, Episode 3: Da Boom - TV.com ] -\r\n www.tv.com\r\n\r\n```\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"URL title module\"\"\"\n# Copyright 2010-2011, Michael Yanovich, yanovich.net, Kenneth Sham\n# Copyright 2012-2013 Elsie Powell\n# Copyright 2013 Lior Ramati ([email protected])\n# Copyright \u00a9 2014 Elad Alfassa <[email protected]>\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport re\nfrom sopel import web, tools, __version__\nfrom sopel.module import commands, rule, example\nfrom sopel.config.types import ValidatedAttribute, ListAttribute, StaticSection\n\nimport requests\n\nUSER_AGENT = 'Sopel/{} (https://sopel.chat)'.format(__version__)\ndefault_headers = {'User-Agent': USER_AGENT}\nfind_urls = None\n# These are used to clean up the title tag before actually parsing it. Not the\n# world's best way to do this, but it'll do for now.\ntitle_tag_data = re.compile('<(/?)title( [^>]+)?>', re.IGNORECASE)\nquoted_title = re.compile('[\\'\"]<title>[\\'\"]', re.IGNORECASE)\n# This is another regex that presumably does something important.\nre_dcc = re.compile(r'(?i)dcc\\ssend')\n# This sets the maximum number of bytes that should be read in order to find\n# the title. We don't want it too high, or a link to a big file/stream will\n# just keep downloading until there's no more memory. 
640k ought to be enough\n# for anybody.\nmax_bytes = 655360\n\n\nclass UrlSection(StaticSection):\n # TODO some validation rules maybe?\n exclude = ListAttribute('exclude')\n exclusion_char = ValidatedAttribute('exclusion_char', default='!')\n shorten_url_length = ValidatedAttribute(\n 'shorten_url_length', int, default=0)\n\n\ndef configure(config):\n config.define_section('url', UrlSection)\n config.url.configure_setting(\n 'exclude',\n 'Enter regular expressions for each URL you would like to exclude.'\n )\n config.url.configure_setting(\n 'exclusion_char',\n 'Enter a character which can be prefixed to suppress URL titling'\n )\n config.url.configure_setting(\n 'shorten_url_length',\n 'Enter how many characters a URL should be before the bot puts a'\n ' shorter version of the URL in the title as a TinyURL link'\n ' (0 to disable)'\n )\n\n\ndef setup(bot):\n global find_urls\n\n bot.config.define_section('url', UrlSection)\n\n if bot.config.url.exclude:\n regexes = [re.compile(s) for s in bot.config.url.exclude]\n else:\n regexes = []\n\n # We're keeping these in their own list, rather than putting then in the\n # callbacks list because 1, it's easier to deal with modules that are still\n # using this list, and not the newer callbacks list and 2, having a lambda\n # just to pass is kinda ugly.\n if not bot.memory.contains('url_exclude'):\n bot.memory['url_exclude'] = regexes\n else:\n exclude = bot.memory['url_exclude']\n if regexes:\n exclude.extend(regexes)\n bot.memory['url_exclude'] = exclude\n\n # Ensure that url_callbacks and last_seen_url are in memory\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n if not bot.memory.contains('last_seen_url'):\n bot.memory['last_seen_url'] = tools.SopelMemory()\n\n def find_func(text):\n re_url = r'(?u)((?<!%s)(?:http|https|ftp)(?::\\/\\/\\S+))'\\\n % (bot.config.url.exclusion_char)\n r = re.compile(re_url, re.IGNORECASE)\n\n urls = re.findall(r, text)\n return urls\n\n find_urls = find_func\n\n\n@commands('title')\n@example('.title http://google.com', '[ Google ] - google.com')\ndef title_command(bot, trigger):\n \"\"\"\n Show the title or URL information for the given URL, or the last URL seen\n in this channel.\n \"\"\"\n if not trigger.group(2):\n if trigger.sender not in bot.memory['last_seen_url']:\n return\n matched = check_callbacks(bot, trigger,\n bot.memory['last_seen_url'][trigger.sender],\n True)\n if matched:\n return\n else:\n urls = [bot.memory['last_seen_url'][trigger.sender]]\n else:\n urls = find_urls(trigger)\n\n results = process_urls(bot, trigger, urls)\n for title, domain, tinyurl in results[:4]:\n message = '[ %s ] - %s' % (title, domain)\n if tinyurl:\n message += ' ( %s )' % tinyurl\n bot.reply(message)\n\n\n@rule(r'(?u).*(https?://\\S+).*')\ndef title_auto(bot, trigger):\n \"\"\"\n Automatically show titles for URLs. 
For shortened URLs/redirects, find\n where the URL redirects to and show the title for that (or call a function\n from another module to give more information).\n \"\"\"\n if re.match(bot.config.core.prefix + 'title', trigger):\n return\n\n # Avoid fetching known malicious links\n if 'safety_cache' in bot.memory and trigger in bot.memory['safety_cache']:\n if bot.memory['safety_cache'][trigger]['positives'] > 1:\n return\n\n urls = find_urls(trigger)\n if len(urls) == 0:\n return\n\n results = process_urls(bot, trigger, urls)\n bot.memory['last_seen_url'][trigger.sender] = urls[-1]\n\n for title, domain, tinyurl in results[:4]:\n message = '[ %s ] - %s' % (title, domain)\n if tinyurl:\n message += ' ( %s )' % tinyurl\n # Guard against responding to other instances of this bot.\n if message != trigger:\n bot.say(message)\n\n\ndef process_urls(bot, trigger, urls):\n \"\"\"\n For each URL in the list, ensure that it isn't handled by another module.\n If not, find where it redirects to, if anywhere. If that redirected URL\n should be handled by another module, dispatch the callback for it.\n Return a list of (title, hostname) tuples for each URL which is not handled by\n another module.\n \"\"\"\n\n results = []\n shorten_url_length = bot.config.url.shorten_url_length\n for url in urls:\n if not url.startswith(bot.config.url.exclusion_char):\n # Magic stuff to account for international domain names\n try:\n url = web.iri_to_uri(url)\n except Exception: # TODO: Be specific\n pass\n # First, check that the URL we got doesn't match\n matched = check_callbacks(bot, trigger, url, False)\n if matched:\n continue\n # If the URL is over bot.config.url.shorten_url_length,\n # shorten the URL\n tinyurl = None\n if (shorten_url_length > 0) and (len(url) > shorten_url_length):\n # Check bot memory to see if the shortened URL is already in\n # memory\n if not bot.memory.contains('shortened_urls'):\n # Initialize shortened_urls as a dict if it doesn't exist.\n bot.memory['shortened_urls'] = tools.SopelMemory()\n if bot.memory['shortened_urls'].contains(url):\n tinyurl = bot.memory['shortened_urls'][url]\n else:\n tinyurl = get_tinyurl(url)\n bot.memory['shortened_urls'][url] = tinyurl\n # Finally, actually show the URL\n title = find_title(url, verify=bot.config.core.verify_ssl)\n if title:\n results.append((title, get_hostname(url), tinyurl))\n return results\n\n\ndef check_callbacks(bot, trigger, url, run=True):\n \"\"\"\n Check the given URL against the callbacks list. If it matches, and ``run``\n is given as ``True``, run the callback function, otherwise pass. 
Returns\n ``True`` if the url matched anything in the callbacks list.\n \"\"\"\n # Check if it matches the exclusion list first\n matched = any(regex.search(url) for regex in bot.memory['url_exclude'])\n # Then, check if there's anything in the callback list\n for regex, function in tools.iteritems(bot.memory['url_callbacks']):\n match = regex.search(url)\n if match:\n # Always run ones from @url; they don't run on their own.\n if run or hasattr(function, 'url_regex'):\n function(bot, trigger, match)\n matched = True\n return matched\n\n\ndef find_title(url, verify=True):\n \"\"\"Return the title for the given URL.\"\"\"\n try:\n response = requests.get(url, stream=True, verify=verify,\n headers=default_headers)\n content = b''\n for byte in response.iter_content(chunk_size=512):\n content += byte\n if b'</title>' in content or len(content) > max_bytes:\n break\n content = content.decode('utf-8', errors='ignore')\n # Need to close the connection because we have not read all\n # the data\n response.close()\n except requests.exceptions.ConnectionError:\n return None\n\n # Some cleanup that I don't really grok, but was in the original, so\n # we'll keep it (with the compiled regexes made global) for now.\n content = title_tag_data.sub(r'<\\1title>', content)\n content = quoted_title.sub('', content)\n\n start = content.rfind('<title>')\n end = content.rfind('</title>')\n if start == -1 or end == -1:\n return\n title = web.decode(content[start + 7:end])\n title = title.strip()[:200]\n\n title = ' '.join(title.split()) # cleanly remove multiple spaces\n\n # More cryptic regex substitutions. This one looks to be myano's invention.\n title = re_dcc.sub('', title)\n\n return title or None\n\n\ndef get_hostname(url):\n idx = 7\n if url.startswith('https://'):\n idx = 8\n elif url.startswith('ftp://'):\n idx = 6\n hostname = url[idx:]\n slash = hostname.find('/')\n if slash != -1:\n hostname = hostname[:slash]\n return hostname\n\n\ndef get_tinyurl(url):\n \"\"\" Returns a shortened tinyURL link of the URL. \"\"\"\n tinyurl = \"https://tinyurl.com/api-create.php?url=%s\" % url\n try:\n res = requests.get(tinyurl)\n res.raise_for_status()\n except requests.exceptions.RequestException:\n return None\n # Replace text output with https instead of http to make the\n # result an HTTPS link.\n return res.text.replace(\"http://\", \"https://\")\n\n\nif __name__ == \"__main__\":\n from sopel.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "sopel/modules/url.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"URL title module\"\"\"\n# Copyright 2010-2011, Michael Yanovich, yanovich.net, Kenneth Sham\n# Copyright 2012-2013 Elsie Powell\n# Copyright 2013 Lior Ramati ([email protected])\n# Copyright \u00a9 2014 Elad Alfassa <[email protected]>\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport re\nfrom sopel import web, tools, __version__\nfrom sopel.module import commands, rule, example\nfrom sopel.config.types import ValidatedAttribute, ListAttribute, StaticSection\n\nimport requests\n\nUSER_AGENT = 'Sopel/{} (https://sopel.chat)'.format(__version__)\ndefault_headers = {'User-Agent': USER_AGENT}\nfind_urls = None\n# These are used to clean up the title tag before actually parsing it. 
Not the\n# world's best way to do this, but it'll do for now.\ntitle_tag_data = re.compile('<(/?)title( [^>]+)?>', re.IGNORECASE)\nquoted_title = re.compile('[\\'\"]<title>[\\'\"]', re.IGNORECASE)\n# This is another regex that presumably does something important.\nre_dcc = re.compile(r'(?i)dcc\\ssend')\n# This sets the maximum number of bytes that should be read in order to find\n# the title. We don't want it too high, or a link to a big file/stream will\n# just keep downloading until there's no more memory. 640k ought to be enough\n# for anybody.\nmax_bytes = 655360\n\n\nclass UrlSection(StaticSection):\n # TODO some validation rules maybe?\n exclude = ListAttribute('exclude')\n exclusion_char = ValidatedAttribute('exclusion_char', default='!')\n shorten_url_length = ValidatedAttribute(\n 'shorten_url_length', int, default=0)\n\n\ndef configure(config):\n config.define_section('url', UrlSection)\n config.url.configure_setting(\n 'exclude',\n 'Enter regular expressions for each URL you would like to exclude.'\n )\n config.url.configure_setting(\n 'exclusion_char',\n 'Enter a character which can be prefixed to suppress URL titling'\n )\n config.url.configure_setting(\n 'shorten_url_length',\n 'Enter how many characters a URL should be before the bot puts a'\n ' shorter version of the URL in the title as a TinyURL link'\n ' (0 to disable)'\n )\n\n\ndef setup(bot):\n global find_urls\n\n bot.config.define_section('url', UrlSection)\n\n if bot.config.url.exclude:\n regexes = [re.compile(s) for s in bot.config.url.exclude]\n else:\n regexes = []\n\n # We're keeping these in their own list, rather than putting then in the\n # callbacks list because 1, it's easier to deal with modules that are still\n # using this list, and not the newer callbacks list and 2, having a lambda\n # just to pass is kinda ugly.\n if not bot.memory.contains('url_exclude'):\n bot.memory['url_exclude'] = regexes\n else:\n exclude = bot.memory['url_exclude']\n if regexes:\n exclude.extend(regexes)\n bot.memory['url_exclude'] = exclude\n\n # Ensure that url_callbacks and last_seen_url are in memory\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n if not bot.memory.contains('last_seen_url'):\n bot.memory['last_seen_url'] = tools.SopelMemory()\n\n def find_func(text, clean=False):\n def trim_url(url):\n # clean trailing sentence- or clause-ending punctuation\n while url[-1] in '.,?!\\'\":;':\n url = url[:-1]\n\n # clean unmatched parentheses/braces/brackets\n for (opener, closer) in [('(', ')'), ('[', ']'), ('{', '}'), ('<', '>')]:\n if url[-1] is closer and url.count(opener) < url.count(closer):\n url = url[:-1]\n\n return url\n\n re_url = r'(?u)((?<!%s)(?:http|https|ftp)(?::\\/\\/\\S+))'\\\n % (bot.config.url.exclusion_char)\n r = re.compile(re_url, re.IGNORECASE)\n\n urls = re.findall(r, text)\n if clean:\n urls = [trim_url(url) for url in urls]\n return urls\n\n find_urls = find_func\n\n\n@commands('title')\n@example('.title http://google.com', '[ Google ] - google.com')\ndef title_command(bot, trigger):\n \"\"\"\n Show the title or URL information for the given URL, or the last URL seen\n in this channel.\n \"\"\"\n if not trigger.group(2):\n if trigger.sender not in bot.memory['last_seen_url']:\n return\n matched = check_callbacks(bot, trigger,\n bot.memory['last_seen_url'][trigger.sender],\n True)\n if matched:\n return\n else:\n urls = [bot.memory['last_seen_url'][trigger.sender]]\n else:\n urls = find_urls(trigger)\n\n results = process_urls(bot, trigger, urls)\n 
for title, domain, tinyurl in results[:4]:\n message = '[ %s ] - %s' % (title, domain)\n if tinyurl:\n message += ' ( %s )' % tinyurl\n bot.reply(message)\n\n\n@rule(r'(?u).*(https?://\\S+).*')\ndef title_auto(bot, trigger):\n \"\"\"\n Automatically show titles for URLs. For shortened URLs/redirects, find\n where the URL redirects to and show the title for that (or call a function\n from another module to give more information).\n \"\"\"\n if re.match(bot.config.core.prefix + 'title', trigger):\n return\n\n # Avoid fetching known malicious links\n if 'safety_cache' in bot.memory and trigger in bot.memory['safety_cache']:\n if bot.memory['safety_cache'][trigger]['positives'] > 1:\n return\n\n urls = find_urls(trigger)\n if len(urls) == 0:\n return\n\n results = process_urls(bot, trigger, urls)\n bot.memory['last_seen_url'][trigger.sender] = urls[-1]\n\n for title, domain, tinyurl in results[:4]:\n message = '[ %s ] - %s' % (title, domain)\n if tinyurl:\n message += ' ( %s )' % tinyurl\n # Guard against responding to other instances of this bot.\n if message != trigger:\n bot.say(message)\n\n\ndef process_urls(bot, trigger, urls):\n \"\"\"\n For each URL in the list, ensure that it isn't handled by another module.\n If not, find where it redirects to, if anywhere. If that redirected URL\n should be handled by another module, dispatch the callback for it.\n Return a list of (title, hostname) tuples for each URL which is not handled by\n another module.\n \"\"\"\n\n results = []\n shorten_url_length = bot.config.url.shorten_url_length\n for url in urls:\n if not url.startswith(bot.config.url.exclusion_char):\n # Magic stuff to account for international domain names\n try:\n url = web.iri_to_uri(url)\n except Exception: # TODO: Be specific\n pass\n # First, check that the URL we got doesn't match\n matched = check_callbacks(bot, trigger, url, False)\n if matched:\n continue\n # If the URL is over bot.config.url.shorten_url_length,\n # shorten the URL\n tinyurl = None\n if (shorten_url_length > 0) and (len(url) > shorten_url_length):\n # Check bot memory to see if the shortened URL is already in\n # memory\n if not bot.memory.contains('shortened_urls'):\n # Initialize shortened_urls as a dict if it doesn't exist.\n bot.memory['shortened_urls'] = tools.SopelMemory()\n if bot.memory['shortened_urls'].contains(url):\n tinyurl = bot.memory['shortened_urls'][url]\n else:\n tinyurl = get_tinyurl(url)\n bot.memory['shortened_urls'][url] = tinyurl\n # Finally, actually show the URL\n title = find_title(url, verify=bot.config.core.verify_ssl)\n if title:\n results.append((title, get_hostname(url), tinyurl))\n return results\n\n\ndef check_callbacks(bot, trigger, url, run=True):\n \"\"\"\n Check the given URL against the callbacks list. If it matches, and ``run``\n is given as ``True``, run the callback function, otherwise pass. 
Returns\n ``True`` if the url matched anything in the callbacks list.\n \"\"\"\n # Check if it matches the exclusion list first\n matched = any(regex.search(url) for regex in bot.memory['url_exclude'])\n # Then, check if there's anything in the callback list\n for regex, function in tools.iteritems(bot.memory['url_callbacks']):\n match = regex.search(url)\n if match:\n # Always run ones from @url; they don't run on their own.\n if run or hasattr(function, 'url_regex'):\n function(bot, trigger, match)\n matched = True\n return matched\n\n\ndef find_title(url, verify=True):\n \"\"\"Return the title for the given URL.\"\"\"\n try:\n response = requests.get(url, stream=True, verify=verify,\n headers=default_headers)\n content = b''\n for byte in response.iter_content(chunk_size=512):\n content += byte\n if b'</title>' in content or len(content) > max_bytes:\n break\n content = content.decode('utf-8', errors='ignore')\n # Need to close the connection because we have not read all\n # the data\n response.close()\n except requests.exceptions.ConnectionError:\n return None\n\n # Some cleanup that I don't really grok, but was in the original, so\n # we'll keep it (with the compiled regexes made global) for now.\n content = title_tag_data.sub(r'<\\1title>', content)\n content = quoted_title.sub('', content)\n\n start = content.rfind('<title>')\n end = content.rfind('</title>')\n if start == -1 or end == -1:\n return\n title = web.decode(content[start + 7:end])\n title = title.strip()[:200]\n\n title = ' '.join(title.split()) # cleanly remove multiple spaces\n\n # More cryptic regex substitutions. This one looks to be myano's invention.\n title = re_dcc.sub('', title)\n\n return title or None\n\n\ndef get_hostname(url):\n idx = 7\n if url.startswith('https://'):\n idx = 8\n elif url.startswith('ftp://'):\n idx = 6\n hostname = url[idx:]\n slash = hostname.find('/')\n if slash != -1:\n hostname = hostname[:slash]\n return hostname\n\n\ndef get_tinyurl(url):\n \"\"\" Returns a shortened tinyURL link of the URL. \"\"\"\n tinyurl = \"https://tinyurl.com/api-create.php?url=%s\" % url\n try:\n res = requests.get(tinyurl)\n res.raise_for_status()\n except requests.exceptions.RequestException:\n return None\n # Replace text output with https instead of http to make the\n # result an HTTPS link.\n return res.text.replace(\"http://\", \"https://\")\n\n\nif __name__ == \"__main__\":\n from sopel.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "sopel/modules/url.py"}]}
| 3,665 | 308 |
gh_patches_debug_17355
|
rasdani/github-patches
|
git_diff
|
canonical__microk8s-4023
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Non-HA single node, leaving node removes all pods
#### Summary
I am running MicroK8s on Ubuntu without high availability, i.e. there is a single node on the same machine where it is installed. I updated the certificates and then issued the following command:
`sudo microk8s leave`
It gives the following messages:
```
Generating new cluster certificates.
Waiting for node to start.
```
and then I stopped MicroK8s and started it again; the node appeared, however all of my pods/namespaces are gone. How can I recover?
#### What Should Happen Instead?
All the pods should be retained.
#### Reproduction Steps
Already described above
#### Introspection Report
#### Can you suggest a fix?
<!-- (This section is optional). How do you propose that the issue be fixed? -->
[https://wetransfer.com/downloads/ee891f9d62bd9ffd7fdac2f9597e638f20230529135310/bf5d38484b8a54a7107a5447153c884820230529135327/7b32a3](url)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/wrappers/remove_node.py`
Content:
```
1 #!/usr/bin/python3
2 import json
3 import os
4 import shutil
5 import subprocess
6 import sys
7
8 import click
9 import netifaces
10
11 from ipaddress import ip_address, IPv4Address
12
13 from common.cluster.utils import (
14 try_set_file_permissions,
15 is_node_running_dqlite,
16 is_token_auth_enabled,
17 )
18
19 snap_path = os.environ.get("SNAP")
20 snapdata_path = os.environ.get("SNAP_DATA")
21 callback_tokens_file = "{}/credentials/callback-tokens.txt".format(snapdata_path)
22
23 cluster_dir = "{}/var/kubernetes/backend".format(snapdata_path)
24
25
26 def remove_dqlite_node(node, force=False):
27 try:
28 # If node is an IP address, find the node name.
29 if type(ip_address(node)) is IPv4Address:
30 node_info = subprocess.check_output(
31 "{}/microk8s-kubectl.wrapper get no -o json".format(snap_path).split()
32 )
33 info = json.loads(node_info.decode())
34 found = False
35 for n in info["items"]:
36 if found:
37 break
38 for a in n["status"]["addresses"]:
39 if a["type"] == "InternalIP" and a["address"] == node:
40 node = n["metadata"]["name"]
41 found = True
42 break
43
44 # Make sure this node exists
45 node_info = subprocess.check_output(
46 "{}/microk8s-kubectl.wrapper get no {} -o json".format(snap_path, node).split()
47 )
48 info = json.loads(node_info.decode())
49 node_address = None
50 for a in info["status"]["addresses"]:
51 if a["type"] == "InternalIP":
52 node_address = a["address"]
53 break
54
55 if not node_address:
56 print("Node {} is not part of the cluster.".format(node))
57 exit(1)
58
59 node_ep = None
60 my_ep, other_ep = get_dqlite_endpoints()
61 for ep in other_ep:
62 if ep.startswith("{}:".format(node_address)):
63 node_ep = ep
64
65 if node_ep and force:
66 delete_dqlite_node([node_ep], my_ep)
67 elif node_ep and not force:
68 print(
69 "Removal failed. Node {} is registered with dqlite. "
70 "Please, run first 'microk8s leave' on the departing node. \n"
71 "If the node is not available anymore and will never attempt to join the cluster "
72 "in the future use the '--force' flag \n"
73 "to unregister the node while removing it.".format(node)
74 )
75 exit(1)
76
77 except subprocess.CalledProcessError:
78 print("Node {} does not exist in Kubernetes.".format(node))
79 if force:
80 print("Attempting to remove {} from dqlite.".format(node))
81 # Make sure we do not have the node in dqlite.
82 # We assume the IP is provided to denote the
83 my_ep, other_ep = get_dqlite_endpoints()
84 for ep in other_ep:
85 if ep.startswith("{}:".format(node)):
86 print("Removing node entry found in dqlite.")
87 delete_dqlite_node([ep], my_ep)
88 exit(1)
89
90 remove_node(node)
91
92
93 def remove_node(node):
94 try:
95 # Make sure this node exists
96 subprocess.check_call(
97 "{}/microk8s-kubectl.wrapper get no {}".format(snap_path, node).split(),
98 stdout=subprocess.DEVNULL,
99 stderr=subprocess.DEVNULL,
100 )
101 except subprocess.CalledProcessError:
102 print("Node {} does not exist.".format(node))
103 exit(1)
104
105 if is_token_auth_enabled():
106 remove_kubelet_token(node)
107 remove_callback_token(node)
108 subprocess.check_call(
109 "{}/microk8s-kubectl.wrapper delete no {}".format(snap_path, node).split(),
110 stdout=subprocess.DEVNULL,
111 stderr=subprocess.DEVNULL,
112 )
113
114
115 def remove_kubelet_token(node):
116 """
117 Remove a token for a node in the known tokens
118
119 :param node: the name of the node
120 """
121 file = "{}/credentials/known_tokens.csv".format(snapdata_path)
122 backup_file = "{}.backup".format(file)
123 token = "system:node:{}".format(node)
124 # That is a critical section. We need to protect it.
125 with open(backup_file, "w") as back_fp:
126 with open(file, "r") as fp:
127 for _, line in enumerate(fp):
128 if token in line:
129 continue
130 back_fp.write("{}".format(line))
131
132 try_set_file_permissions(backup_file)
133 shutil.copyfile(backup_file, file)
134
135
136 def get_dqlite_endpoints():
137 """
138 Return the endpoints the current node has on dqlite and the endpoints of the rest of the nodes.
139
140 :return: two lists with the endpoints
141 """
142 out = subprocess.check_output(
143 "{snappath}/bin/dqlite -s file://{dbdir}/cluster.yaml -c {dbdir}/cluster.crt "
144 "-k {dbdir}/cluster.key -f json k8s .cluster".format(
145 snappath=snap_path, dbdir=cluster_dir
146 ).split()
147 )
148 data = json.loads(out.decode())
149 ep_addresses = []
150 for ep in data:
151 ep_addresses.append(ep["Address"])
152 local_ips = []
153 for interface in netifaces.interfaces():
154 if netifaces.AF_INET not in netifaces.ifaddresses(interface):
155 continue
156 for link in netifaces.ifaddresses(interface)[netifaces.AF_INET]:
157 local_ips.append(link["addr"])
158 my_ep = []
159 other_ep = []
160 for ep in ep_addresses:
161 found = False
162 for ip in local_ips:
163 if "{}:".format(ip) in ep:
164 my_ep.append(ep)
165 found = True
166 if not found:
167 other_ep.append(ep)
168
169 return my_ep, other_ep
170
171
172 def delete_dqlite_node(delete_node, dqlite_ep):
173 if len(delete_node) > 0 and "127.0.0.1" not in delete_node[0]:
174 for ep in dqlite_ep:
175 try:
176 cmd = (
177 "{snappath}/bin/dqlite -s file://{dbdir}/cluster.yaml -c {dbdir}/cluster.crt "
178 "-k {dbdir}/cluster.key -f json k8s".format(
179 snappath=snap_path, dbdir=cluster_dir
180 ).split()
181 )
182 cmd.append(".remove {}".format(delete_node[0]))
183 subprocess.check_output(cmd)
184 break
185 except Exception as err:
186 print("Contacting node {} failed. Error:".format(ep))
187 print(repr(err))
188 exit(2)
189
190
191 def remove_callback_token(node):
192 """
193 Remove a callback token
194
195 :param node: the node
196 """
197 tmp_file = "{}.tmp".format(callback_tokens_file)
198 if not os.path.isfile(callback_tokens_file):
199 open(callback_tokens_file, "a+")
200 os.chmod(callback_tokens_file, 0o600)
201 with open(tmp_file, "w") as backup_fp:
202 os.chmod(tmp_file, 0o600)
203 with open(callback_tokens_file, "r+") as callback_fp:
204 # Entries are of the format: 'node_hostname:agent_port token'
205 # We need to get the node_hostname part
206 for line in callback_fp:
207 parts = line.split(":")
208 if parts[0] == node:
209 continue
210 else:
211 backup_fp.write(line)
212
213 try_set_file_permissions(tmp_file)
214 shutil.move(tmp_file, callback_tokens_file)
215
216
217 @click.command()
218 @click.argument("node", required=True)
219 @click.option(
220 "--force",
221 is_flag=True,
222 required=False,
223 default=False,
224 help="Force the node removal operation. (default: false)",
225 )
226 def reset(node, force):
227 """
228 Remove a node from the cluster
229 """
230 if is_node_running_dqlite():
231 remove_dqlite_node(node, force)
232 else:
233 remove_node(node)
234 sys.exit(0)
235
236
237 if __name__ == "__main__":
238 reset(prog_name="microk8s remove-node")
239
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/wrappers/remove_node.py b/scripts/wrappers/remove_node.py
--- a/scripts/wrappers/remove_node.py
+++ b/scripts/wrappers/remove_node.py
@@ -8,7 +8,7 @@
import click
import netifaces
-from ipaddress import ip_address, IPv4Address
+from ipaddress import ip_address
from common.cluster.utils import (
try_set_file_permissions,
@@ -26,7 +26,13 @@
def remove_dqlite_node(node, force=False):
try:
# If node is an IP address, find the node name.
- if type(ip_address(node)) is IPv4Address:
+ is_node_ip = True
+ try:
+ ip_address(node)
+ except ValueError:
+ is_node_ip = False
+
+ if is_node_ip:
node_info = subprocess.check_output(
"{}/microk8s-kubectl.wrapper get no -o json".format(snap_path).split()
)
|
{"golden_diff": "diff --git a/scripts/wrappers/remove_node.py b/scripts/wrappers/remove_node.py\n--- a/scripts/wrappers/remove_node.py\n+++ b/scripts/wrappers/remove_node.py\n@@ -8,7 +8,7 @@\n import click\n import netifaces\n \n-from ipaddress import ip_address, IPv4Address\n+from ipaddress import ip_address\n \n from common.cluster.utils import (\n try_set_file_permissions,\n@@ -26,7 +26,13 @@\n def remove_dqlite_node(node, force=False):\n try:\n # If node is an IP address, find the node name.\n- if type(ip_address(node)) is IPv4Address:\n+ is_node_ip = True\n+ try:\n+ ip_address(node)\n+ except ValueError:\n+ is_node_ip = False\n+\n+ if is_node_ip:\n node_info = subprocess.check_output(\n \"{}/microk8s-kubectl.wrapper get no -o json\".format(snap_path).split()\n )\n", "issue": "Non-HA single node, leaving node removes all pods\n\r\n\r\n#### Summary\r\nIam running the microk8s in ubuntu in no high availability i.e. there is a single node onto the same machine where it is installed. I updated the certificates and then i issue following command:\r\n`sudo microk8s leave`\r\n\r\nIt give following messages:\r\n\r\n```\r\nGenerating new cluster certificates.\r\nWaiting for node to start. \r\n```\r\n\r\nand then i stopped microk8s and then started again, the node appeared however all of my pods / namespaces are gone, how to recover?\r\n\r\n#### What Should Happen Instead?\r\nAll the pods should retain\r\n\r\n#### Reproduction Steps\r\nAlready described above\r\n\r\n#### Introspection Report\r\n\r\n#### Can you suggest a fix?\r\n<!-- (This section is optional). How do you propose that the issue be fixed? -->\r\n[https://wetransfer.com/downloads/ee891f9d62bd9ffd7fdac2f9597e638f20230529135310/bf5d38484b8a54a7107a5447153c884820230529135327/7b32a3](url)\r\n\n", "before_files": [{"content": "#!/usr/bin/python3\nimport json\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nimport click\nimport netifaces\n\nfrom ipaddress import ip_address, IPv4Address\n\nfrom common.cluster.utils import (\n try_set_file_permissions,\n is_node_running_dqlite,\n is_token_auth_enabled,\n)\n\nsnap_path = os.environ.get(\"SNAP\")\nsnapdata_path = os.environ.get(\"SNAP_DATA\")\ncallback_tokens_file = \"{}/credentials/callback-tokens.txt\".format(snapdata_path)\n\ncluster_dir = \"{}/var/kubernetes/backend\".format(snapdata_path)\n\n\ndef remove_dqlite_node(node, force=False):\n try:\n # If node is an IP address, find the node name.\n if type(ip_address(node)) is IPv4Address:\n node_info = subprocess.check_output(\n \"{}/microk8s-kubectl.wrapper get no -o json\".format(snap_path).split()\n )\n info = json.loads(node_info.decode())\n found = False\n for n in info[\"items\"]:\n if found:\n break\n for a in n[\"status\"][\"addresses\"]:\n if a[\"type\"] == \"InternalIP\" and a[\"address\"] == node:\n node = n[\"metadata\"][\"name\"]\n found = True\n break\n\n # Make sure this node exists\n node_info = subprocess.check_output(\n \"{}/microk8s-kubectl.wrapper get no {} -o json\".format(snap_path, node).split()\n )\n info = json.loads(node_info.decode())\n node_address = None\n for a in info[\"status\"][\"addresses\"]:\n if a[\"type\"] == \"InternalIP\":\n node_address = a[\"address\"]\n break\n\n if not node_address:\n print(\"Node {} is not part of the cluster.\".format(node))\n exit(1)\n\n node_ep = None\n my_ep, other_ep = get_dqlite_endpoints()\n for ep in other_ep:\n if ep.startswith(\"{}:\".format(node_address)):\n node_ep = ep\n\n if node_ep and force:\n delete_dqlite_node([node_ep], my_ep)\n elif node_ep and not 
force:\n print(\n \"Removal failed. Node {} is registered with dqlite. \"\n \"Please, run first 'microk8s leave' on the departing node. \\n\"\n \"If the node is not available anymore and will never attempt to join the cluster \"\n \"in the future use the '--force' flag \\n\"\n \"to unregister the node while removing it.\".format(node)\n )\n exit(1)\n\n except subprocess.CalledProcessError:\n print(\"Node {} does not exist in Kubernetes.\".format(node))\n if force:\n print(\"Attempting to remove {} from dqlite.\".format(node))\n # Make sure we do not have the node in dqlite.\n # We assume the IP is provided to denote the\n my_ep, other_ep = get_dqlite_endpoints()\n for ep in other_ep:\n if ep.startswith(\"{}:\".format(node)):\n print(\"Removing node entry found in dqlite.\")\n delete_dqlite_node([ep], my_ep)\n exit(1)\n\n remove_node(node)\n\n\ndef remove_node(node):\n try:\n # Make sure this node exists\n subprocess.check_call(\n \"{}/microk8s-kubectl.wrapper get no {}\".format(snap_path, node).split(),\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )\n except subprocess.CalledProcessError:\n print(\"Node {} does not exist.\".format(node))\n exit(1)\n\n if is_token_auth_enabled():\n remove_kubelet_token(node)\n remove_callback_token(node)\n subprocess.check_call(\n \"{}/microk8s-kubectl.wrapper delete no {}\".format(snap_path, node).split(),\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )\n\n\ndef remove_kubelet_token(node):\n \"\"\"\n Remove a token for a node in the known tokens\n\n :param node: the name of the node\n \"\"\"\n file = \"{}/credentials/known_tokens.csv\".format(snapdata_path)\n backup_file = \"{}.backup\".format(file)\n token = \"system:node:{}\".format(node)\n # That is a critical section. We need to protect it.\n with open(backup_file, \"w\") as back_fp:\n with open(file, \"r\") as fp:\n for _, line in enumerate(fp):\n if token in line:\n continue\n back_fp.write(\"{}\".format(line))\n\n try_set_file_permissions(backup_file)\n shutil.copyfile(backup_file, file)\n\n\ndef get_dqlite_endpoints():\n \"\"\"\n Return the endpoints the current node has on dqlite and the endpoints of the rest of the nodes.\n\n :return: two lists with the endpoints\n \"\"\"\n out = subprocess.check_output(\n \"{snappath}/bin/dqlite -s file://{dbdir}/cluster.yaml -c {dbdir}/cluster.crt \"\n \"-k {dbdir}/cluster.key -f json k8s .cluster\".format(\n snappath=snap_path, dbdir=cluster_dir\n ).split()\n )\n data = json.loads(out.decode())\n ep_addresses = []\n for ep in data:\n ep_addresses.append(ep[\"Address\"])\n local_ips = []\n for interface in netifaces.interfaces():\n if netifaces.AF_INET not in netifaces.ifaddresses(interface):\n continue\n for link in netifaces.ifaddresses(interface)[netifaces.AF_INET]:\n local_ips.append(link[\"addr\"])\n my_ep = []\n other_ep = []\n for ep in ep_addresses:\n found = False\n for ip in local_ips:\n if \"{}:\".format(ip) in ep:\n my_ep.append(ep)\n found = True\n if not found:\n other_ep.append(ep)\n\n return my_ep, other_ep\n\n\ndef delete_dqlite_node(delete_node, dqlite_ep):\n if len(delete_node) > 0 and \"127.0.0.1\" not in delete_node[0]:\n for ep in dqlite_ep:\n try:\n cmd = (\n \"{snappath}/bin/dqlite -s file://{dbdir}/cluster.yaml -c {dbdir}/cluster.crt \"\n \"-k {dbdir}/cluster.key -f json k8s\".format(\n snappath=snap_path, dbdir=cluster_dir\n ).split()\n )\n cmd.append(\".remove {}\".format(delete_node[0]))\n subprocess.check_output(cmd)\n break\n except Exception as err:\n print(\"Contacting node {} failed. 
Error:\".format(ep))\n print(repr(err))\n exit(2)\n\n\ndef remove_callback_token(node):\n \"\"\"\n Remove a callback token\n\n :param node: the node\n \"\"\"\n tmp_file = \"{}.tmp\".format(callback_tokens_file)\n if not os.path.isfile(callback_tokens_file):\n open(callback_tokens_file, \"a+\")\n os.chmod(callback_tokens_file, 0o600)\n with open(tmp_file, \"w\") as backup_fp:\n os.chmod(tmp_file, 0o600)\n with open(callback_tokens_file, \"r+\") as callback_fp:\n # Entries are of the format: 'node_hostname:agent_port token'\n # We need to get the node_hostname part\n for line in callback_fp:\n parts = line.split(\":\")\n if parts[0] == node:\n continue\n else:\n backup_fp.write(line)\n\n try_set_file_permissions(tmp_file)\n shutil.move(tmp_file, callback_tokens_file)\n\n\[email protected]()\[email protected](\"node\", required=True)\[email protected](\n \"--force\",\n is_flag=True,\n required=False,\n default=False,\n help=\"Force the node removal operation. (default: false)\",\n)\ndef reset(node, force):\n \"\"\"\n Remove a node from the cluster\n \"\"\"\n if is_node_running_dqlite():\n remove_dqlite_node(node, force)\n else:\n remove_node(node)\n sys.exit(0)\n\n\nif __name__ == \"__main__\":\n reset(prog_name=\"microk8s remove-node\")\n", "path": "scripts/wrappers/remove_node.py"}], "after_files": [{"content": "#!/usr/bin/python3\nimport json\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nimport click\nimport netifaces\n\nfrom ipaddress import ip_address\n\nfrom common.cluster.utils import (\n try_set_file_permissions,\n is_node_running_dqlite,\n is_token_auth_enabled,\n)\n\nsnap_path = os.environ.get(\"SNAP\")\nsnapdata_path = os.environ.get(\"SNAP_DATA\")\ncallback_tokens_file = \"{}/credentials/callback-tokens.txt\".format(snapdata_path)\n\ncluster_dir = \"{}/var/kubernetes/backend\".format(snapdata_path)\n\n\ndef remove_dqlite_node(node, force=False):\n try:\n # If node is an IP address, find the node name.\n is_node_ip = True\n try:\n ip_address(node)\n except ValueError:\n is_node_ip = False\n\n if is_node_ip:\n node_info = subprocess.check_output(\n \"{}/microk8s-kubectl.wrapper get no -o json\".format(snap_path).split()\n )\n info = json.loads(node_info.decode())\n found = False\n for n in info[\"items\"]:\n if found:\n break\n for a in n[\"status\"][\"addresses\"]:\n if a[\"type\"] == \"InternalIP\" and a[\"address\"] == node:\n node = n[\"metadata\"][\"name\"]\n found = True\n break\n\n # Make sure this node exists\n node_info = subprocess.check_output(\n \"{}/microk8s-kubectl.wrapper get no {} -o json\".format(snap_path, node).split()\n )\n info = json.loads(node_info.decode())\n node_address = None\n for a in info[\"status\"][\"addresses\"]:\n if a[\"type\"] == \"InternalIP\":\n node_address = a[\"address\"]\n break\n\n if not node_address:\n print(\"Node {} is not part of the cluster.\".format(node))\n exit(1)\n\n node_ep = None\n my_ep, other_ep = get_dqlite_endpoints()\n for ep in other_ep:\n if ep.startswith(\"{}:\".format(node_address)):\n node_ep = ep\n\n if node_ep and force:\n delete_dqlite_node([node_ep], my_ep)\n elif node_ep and not force:\n print(\n \"Removal failed. Node {} is registered with dqlite. \"\n \"Please, run first 'microk8s leave' on the departing node. 
\\n\"\n \"If the node is not available anymore and will never attempt to join the cluster \"\n \"in the future use the '--force' flag \\n\"\n \"to unregister the node while removing it.\".format(node)\n )\n exit(1)\n\n except subprocess.CalledProcessError:\n print(\"Node {} does not exist in Kubernetes.\".format(node))\n if force:\n print(\"Attempting to remove {} from dqlite.\".format(node))\n # Make sure we do not have the node in dqlite.\n # We assume the IP is provided to denote the\n my_ep, other_ep = get_dqlite_endpoints()\n for ep in other_ep:\n if ep.startswith(\"{}:\".format(node)):\n print(\"Removing node entry found in dqlite.\")\n delete_dqlite_node([ep], my_ep)\n exit(1)\n\n remove_node(node)\n\n\ndef remove_node(node):\n try:\n # Make sure this node exists\n subprocess.check_call(\n \"{}/microk8s-kubectl.wrapper get no {}\".format(snap_path, node).split(),\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )\n except subprocess.CalledProcessError:\n print(\"Node {} does not exist.\".format(node))\n exit(1)\n\n if is_token_auth_enabled():\n remove_kubelet_token(node)\n remove_callback_token(node)\n subprocess.check_call(\n \"{}/microk8s-kubectl.wrapper delete no {}\".format(snap_path, node).split(),\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )\n\n\ndef remove_kubelet_token(node):\n \"\"\"\n Remove a token for a node in the known tokens\n\n :param node: the name of the node\n \"\"\"\n file = \"{}/credentials/known_tokens.csv\".format(snapdata_path)\n backup_file = \"{}.backup\".format(file)\n token = \"system:node:{}\".format(node)\n # That is a critical section. We need to protect it.\n with open(backup_file, \"w\") as back_fp:\n with open(file, \"r\") as fp:\n for _, line in enumerate(fp):\n if token in line:\n continue\n back_fp.write(\"{}\".format(line))\n\n try_set_file_permissions(backup_file)\n shutil.copyfile(backup_file, file)\n\n\ndef get_dqlite_endpoints():\n \"\"\"\n Return the endpoints the current node has on dqlite and the endpoints of the rest of the nodes.\n\n :return: two lists with the endpoints\n \"\"\"\n out = subprocess.check_output(\n \"{snappath}/bin/dqlite -s file://{dbdir}/cluster.yaml -c {dbdir}/cluster.crt \"\n \"-k {dbdir}/cluster.key -f json k8s .cluster\".format(\n snappath=snap_path, dbdir=cluster_dir\n ).split()\n )\n data = json.loads(out.decode())\n ep_addresses = []\n for ep in data:\n ep_addresses.append(ep[\"Address\"])\n local_ips = []\n for interface in netifaces.interfaces():\n if netifaces.AF_INET not in netifaces.ifaddresses(interface):\n continue\n for link in netifaces.ifaddresses(interface)[netifaces.AF_INET]:\n local_ips.append(link[\"addr\"])\n my_ep = []\n other_ep = []\n for ep in ep_addresses:\n found = False\n for ip in local_ips:\n if \"{}:\".format(ip) in ep:\n my_ep.append(ep)\n found = True\n if not found:\n other_ep.append(ep)\n\n return my_ep, other_ep\n\n\ndef delete_dqlite_node(delete_node, dqlite_ep):\n if len(delete_node) > 0 and \"127.0.0.1\" not in delete_node[0]:\n for ep in dqlite_ep:\n try:\n cmd = (\n \"{snappath}/bin/dqlite -s file://{dbdir}/cluster.yaml -c {dbdir}/cluster.crt \"\n \"-k {dbdir}/cluster.key -f json k8s\".format(\n snappath=snap_path, dbdir=cluster_dir\n ).split()\n )\n cmd.append(\".remove {}\".format(delete_node[0]))\n subprocess.check_output(cmd)\n break\n except Exception as err:\n print(\"Contacting node {} failed. 
Error:\".format(ep))\n print(repr(err))\n exit(2)\n\n\ndef remove_callback_token(node):\n \"\"\"\n Remove a callback token\n\n :param node: the node\n \"\"\"\n tmp_file = \"{}.tmp\".format(callback_tokens_file)\n if not os.path.isfile(callback_tokens_file):\n open(callback_tokens_file, \"a+\")\n os.chmod(callback_tokens_file, 0o600)\n with open(tmp_file, \"w\") as backup_fp:\n os.chmod(tmp_file, 0o600)\n with open(callback_tokens_file, \"r+\") as callback_fp:\n # Entries are of the format: 'node_hostname:agent_port token'\n # We need to get the node_hostname part\n for line in callback_fp:\n parts = line.split(\":\")\n if parts[0] == node:\n continue\n else:\n backup_fp.write(line)\n\n try_set_file_permissions(tmp_file)\n shutil.move(tmp_file, callback_tokens_file)\n\n\[email protected]()\[email protected](\"node\", required=True)\[email protected](\n \"--force\",\n is_flag=True,\n required=False,\n default=False,\n help=\"Force the node removal operation. (default: false)\",\n)\ndef reset(node, force):\n \"\"\"\n Remove a node from the cluster\n \"\"\"\n if is_node_running_dqlite():\n remove_dqlite_node(node, force)\n else:\n remove_node(node)\n sys.exit(0)\n\n\nif __name__ == \"__main__\":\n reset(prog_name=\"microk8s remove-node\")\n", "path": "scripts/wrappers/remove_node.py"}]}
| 2,937 | 215 |
gh_patches_debug_13582
|
rasdani/github-patches
|
git_diff
|
vega__altair-334
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FutureWarning in Pandas 0.20.1
Since upgrading to Pandas 0.20.1 I get this warning when first using altair in a notebook.
```
site-packages\altair\utils\core.py:110: FutureWarning: pandas.lib is deprecated and will be removed in a future version.
You can access infer_dtype as pandas.api.types.infer_dtype
typ = pd.lib.infer_dtype(data)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `altair/utils/core.py`
Content:
```
1 """
2 Utility routines
3 """
4 import re
5 import warnings
6
7 import pandas as pd
8 import numpy as np
9
10
11 TYPECODE_MAP = {'ordinal': 'O',
12 'nominal': 'N',
13 'quantitative': 'Q',
14 'temporal': 'T'}
15
16 INV_TYPECODE_MAP = {v: k for k, v in TYPECODE_MAP.items()}
17
18 TYPE_ABBR = TYPECODE_MAP.values()
19
20
21 def parse_shorthand(shorthand):
22 """
23 Parse the shorthand expression for aggregation, field, and type.
24
25 These are of the form:
26
27 - "col_name"
28 - "col_name:O"
29 - "average(col_name)"
30 - "average(col_name):O"
31
32 Parameters
33 ----------
34 shorthand: str
35 Shorthand string
36
37 Returns
38 -------
39 D : dict
40 Dictionary containing the field, aggregate, and typecode
41 """
42 if not shorthand:
43 return {}
44
45 # Must import this here to avoid circular imports
46 from ..schema import AggregateOp
47 valid_aggregates = AggregateOp().values
48 valid_typecodes = list(TYPECODE_MAP) + list(INV_TYPECODE_MAP)
49
50 # build regular expressions
51 units = dict(field='(?P<field>.*)',
52 type='(?P<type>{0})'.format('|'.join(valid_typecodes)),
53 aggregate='(?P<aggregate>{0})'.format('|'.join(valid_aggregates)))
54 patterns = [r'{field}',
55 r'{field}:{type}',
56 r'{aggregate}\({field}\)',
57 r'{aggregate}\({field}\):{type}']
58 regexps = (re.compile('\A' + p.format(**units) + '\Z', re.DOTALL)
59 for p in patterns[::-1])
60
61 # find matches depending on valid fields passed
62 match = next(exp.match(shorthand).groupdict() for exp in regexps
63 if exp.match(shorthand))
64
65 # Use short form of the type expression
66 typ = match.get('type', None)
67 if typ:
68 match['type'] = INV_TYPECODE_MAP.get(typ, typ)
69 return match
70
71
72 def construct_shorthand(field=None, aggregate=None, type=None):
73 """Construct a shorthand representation.
74
75 See also: parse_shorthand"""
76 if field is None:
77 return ''
78
79 sh = field
80
81 if aggregate is not None:
82 sh = '{0}({1})'.format(aggregate, sh)
83
84 if type is not None:
85 type = TYPECODE_MAP.get(type, type)
86 if type not in TYPE_ABBR:
87 raise ValueError('Unrecognized Type: {0}'.format(type))
88 sh = '{0}:{1}'.format(sh, type)
89
90 return sh
91
92
93 def infer_vegalite_type(data, field=None):
94 """
95 From an array-like input, infer the correct vega typecode
96 ('ordinal', 'nominal', 'quantitative', or 'temporal')
97
98 Parameters
99 ----------
100 data: Numpy array or Pandas Series
101 field: str column name
102 """
103 # See if we can read the type from the field
104 if field is not None:
105 parsed = parse_shorthand(field)
106 if parsed.get('type'):
107 return parsed['type']
108
109 # Otherwise, infer based on the dtype of the input
110 typ = pd.lib.infer_dtype(data)
111
112 # TODO: Once this returns 'O', please update test_select_x and test_select_y in test_api.py
113
114 if typ in ['floating', 'mixed-integer-float', 'integer',
115 'mixed-integer', 'complex']:
116 return 'quantitative'
117 elif typ in ['string', 'bytes', 'categorical', 'boolean', 'mixed', 'unicode']:
118 return 'nominal'
119 elif typ in ['datetime', 'datetime64', 'timedelta',
120 'timedelta64', 'date', 'time', 'period']:
121 return 'temporal'
122 else:
123 warnings.warn("I don't know how to infer vegalite type from '{0}'. "
124 "Defaulting to nominal.".format(typ))
125 return 'nominal'
126
127
128 def sanitize_dataframe(df):
129 """Sanitize a DataFrame to prepare it for serialization.
130
131 * Make a copy
132 * Raise ValueError if it has a hierarchical index.
133 * Convert categoricals to strings.
134 * Convert np.bool_ dtypes to Python bool objects
135 * Convert np.int dtypes to Python int objects
136 * Convert floats to objects and replace NaNs by None.
137 * Convert DateTime dtypes into appropriate string representations
138 """
139 df = df.copy()
140
141 if isinstance(df.index, pd.core.index.MultiIndex):
142 raise ValueError('Hierarchical indices not supported')
143 if isinstance(df.columns, pd.core.index.MultiIndex):
144 raise ValueError('Hierarchical indices not supported')
145
146 def to_list_if_array(val):
147 if isinstance(val, np.ndarray):
148 return val.tolist()
149 else:
150 return val
151
152 for col_name, dtype in df.dtypes.iteritems():
153 if str(dtype) == 'category':
154 # XXXX: work around bug in to_json for categorical types
155 # https://github.com/pydata/pandas/issues/10778
156 df[col_name] = df[col_name].astype(str)
157 elif str(dtype) == 'bool':
158 # convert numpy bools to objects; np.bool is not JSON serializable
159 df[col_name] = df[col_name].astype(object)
160 elif np.issubdtype(dtype, np.integer):
161 # convert integers to objects; np.int is not JSON serializable
162 df[col_name] = df[col_name].astype(object)
163 elif np.issubdtype(dtype, np.floating):
164 # For floats, convert nan->None: np.float is not JSON serializable
165 col = df[col_name].astype(object)
166 df[col_name] = col.where(col.notnull(), None)
167 elif str(dtype).startswith('datetime'):
168 # Convert datetimes to strings
169 # astype(str) will choose the appropriate resolution
170 df[col_name] = df[col_name].astype(str).replace('NaT', '')
171 elif dtype == object:
172 # Convert numpy arrays saved as objects to lists
173 # Arrays are not JSON serializable
174 col = df[col_name].apply(to_list_if_array, convert_dtype=False)
175 df[col_name] = col.where(col.notnull(), None)
176 return df
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/altair/utils/core.py b/altair/utils/core.py
--- a/altair/utils/core.py
+++ b/altair/utils/core.py
@@ -7,6 +7,10 @@
import pandas as pd
import numpy as np
+try:
+ from pandas.api.types import infer_dtype
+except ImportError: # Pandas before 0.20.0
+ from pandas.lib import infer_dtype
TYPECODE_MAP = {'ordinal': 'O',
'nominal': 'N',
@@ -107,7 +111,7 @@
return parsed['type']
# Otherwise, infer based on the dtype of the input
- typ = pd.lib.infer_dtype(data)
+ typ = infer_dtype(data)
# TODO: Once this returns 'O', please update test_select_x and test_select_y in test_api.py
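For reference, the compatibility import used in the diff above can be exercised on its own. This is a minimal sketch (the sample Series is illustrative), assuming pandas is installed:
```python
# Prefer the modern location of infer_dtype, falling back for pandas < 0.20.
try:
    from pandas.api.types import infer_dtype
except ImportError:  # pandas before 0.20.0
    from pandas.lib import infer_dtype

import pandas as pd

s = pd.Series([1.0, 2.5, 3.0])
print(infer_dtype(s))  # "floating", emitted without the FutureWarning on pandas >= 0.20
```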
|
{"golden_diff": "diff --git a/altair/utils/core.py b/altair/utils/core.py\n--- a/altair/utils/core.py\n+++ b/altair/utils/core.py\n@@ -7,6 +7,10 @@\n import pandas as pd\n import numpy as np\n \n+try:\n+ from pandas.api.types import infer_dtype\n+except ImportError: # Pandas before 0.20.0\n+ from pandas.lib import infer_dtype\n \n TYPECODE_MAP = {'ordinal': 'O',\n 'nominal': 'N',\n@@ -107,7 +111,7 @@\n return parsed['type']\n \n # Otherwise, infer based on the dtype of the input\n- typ = pd.lib.infer_dtype(data)\n+ typ = infer_dtype(data)\n \n # TODO: Once this returns 'O', please update test_select_x and test_select_y in test_api.py\n", "issue": "FutureWarning in Pandas 0.20.1\nSince upgrading to Pandas 0.20.1 I get this warning when first using altair in a notebook.\r\n\r\n```\r\nsite-packages\\altair\\utils\\core.py:110: FutureWarning: pandas.lib is deprecated and will be removed in a future version.\r\nYou can access infer_dtype as pandas.api.types.infer_dtype\r\n typ = pd.lib.infer_dtype(data)\r\n```\n", "before_files": [{"content": "\"\"\"\nUtility routines\n\"\"\"\nimport re\nimport warnings\n\nimport pandas as pd\nimport numpy as np\n\n\nTYPECODE_MAP = {'ordinal': 'O',\n 'nominal': 'N',\n 'quantitative': 'Q',\n 'temporal': 'T'}\n\nINV_TYPECODE_MAP = {v: k for k, v in TYPECODE_MAP.items()}\n\nTYPE_ABBR = TYPECODE_MAP.values()\n\n\ndef parse_shorthand(shorthand):\n \"\"\"\n Parse the shorthand expression for aggregation, field, and type.\n\n These are of the form:\n\n - \"col_name\"\n - \"col_name:O\"\n - \"average(col_name)\"\n - \"average(col_name):O\"\n\n Parameters\n ----------\n shorthand: str\n Shorthand string\n\n Returns\n -------\n D : dict\n Dictionary containing the field, aggregate, and typecode\n \"\"\"\n if not shorthand:\n return {}\n\n # Must import this here to avoid circular imports\n from ..schema import AggregateOp\n valid_aggregates = AggregateOp().values\n valid_typecodes = list(TYPECODE_MAP) + list(INV_TYPECODE_MAP)\n\n # build regular expressions\n units = dict(field='(?P<field>.*)',\n type='(?P<type>{0})'.format('|'.join(valid_typecodes)),\n aggregate='(?P<aggregate>{0})'.format('|'.join(valid_aggregates)))\n patterns = [r'{field}',\n r'{field}:{type}',\n r'{aggregate}\\({field}\\)',\n r'{aggregate}\\({field}\\):{type}']\n regexps = (re.compile('\\A' + p.format(**units) + '\\Z', re.DOTALL)\n for p in patterns[::-1])\n\n # find matches depending on valid fields passed\n match = next(exp.match(shorthand).groupdict() for exp in regexps\n if exp.match(shorthand))\n\n # Use short form of the type expression\n typ = match.get('type', None)\n if typ:\n match['type'] = INV_TYPECODE_MAP.get(typ, typ)\n return match\n\n\ndef construct_shorthand(field=None, aggregate=None, type=None):\n \"\"\"Construct a shorthand representation.\n\n See also: parse_shorthand\"\"\"\n if field is None:\n return ''\n\n sh = field\n\n if aggregate is not None:\n sh = '{0}({1})'.format(aggregate, sh)\n\n if type is not None:\n type = TYPECODE_MAP.get(type, type)\n if type not in TYPE_ABBR:\n raise ValueError('Unrecognized Type: {0}'.format(type))\n sh = '{0}:{1}'.format(sh, type)\n\n return sh\n\n\ndef infer_vegalite_type(data, field=None):\n \"\"\"\n From an array-like input, infer the correct vega typecode\n ('ordinal', 'nominal', 'quantitative', or 'temporal')\n\n Parameters\n ----------\n data: Numpy array or Pandas Series\n field: str column name\n \"\"\"\n # See if we can read the type from the field\n if field is not None:\n parsed = parse_shorthand(field)\n if parsed.get('type'):\n 
return parsed['type']\n\n # Otherwise, infer based on the dtype of the input\n typ = pd.lib.infer_dtype(data)\n\n # TODO: Once this returns 'O', please update test_select_x and test_select_y in test_api.py\n\n if typ in ['floating', 'mixed-integer-float', 'integer',\n 'mixed-integer', 'complex']:\n return 'quantitative'\n elif typ in ['string', 'bytes', 'categorical', 'boolean', 'mixed', 'unicode']:\n return 'nominal'\n elif typ in ['datetime', 'datetime64', 'timedelta',\n 'timedelta64', 'date', 'time', 'period']:\n return 'temporal'\n else:\n warnings.warn(\"I don't know how to infer vegalite type from '{0}'. \"\n \"Defaulting to nominal.\".format(typ))\n return 'nominal'\n\n\ndef sanitize_dataframe(df):\n \"\"\"Sanitize a DataFrame to prepare it for serialization.\n\n * Make a copy\n * Raise ValueError if it has a hierarchical index.\n * Convert categoricals to strings.\n * Convert np.bool_ dtypes to Python bool objects\n * Convert np.int dtypes to Python int objects\n * Convert floats to objects and replace NaNs by None.\n * Convert DateTime dtypes into appropriate string representations\n \"\"\"\n df = df.copy()\n\n if isinstance(df.index, pd.core.index.MultiIndex):\n raise ValueError('Hierarchical indices not supported')\n if isinstance(df.columns, pd.core.index.MultiIndex):\n raise ValueError('Hierarchical indices not supported')\n\n def to_list_if_array(val):\n if isinstance(val, np.ndarray):\n return val.tolist()\n else:\n return val\n\n for col_name, dtype in df.dtypes.iteritems():\n if str(dtype) == 'category':\n # XXXX: work around bug in to_json for categorical types\n # https://github.com/pydata/pandas/issues/10778\n df[col_name] = df[col_name].astype(str)\n elif str(dtype) == 'bool':\n # convert numpy bools to objects; np.bool is not JSON serializable\n df[col_name] = df[col_name].astype(object)\n elif np.issubdtype(dtype, np.integer):\n # convert integers to objects; np.int is not JSON serializable\n df[col_name] = df[col_name].astype(object)\n elif np.issubdtype(dtype, np.floating):\n # For floats, convert nan->None: np.float is not JSON serializable\n col = df[col_name].astype(object)\n df[col_name] = col.where(col.notnull(), None)\n elif str(dtype).startswith('datetime'):\n # Convert datetimes to strings\n # astype(str) will choose the appropriate resolution\n df[col_name] = df[col_name].astype(str).replace('NaT', '')\n elif dtype == object:\n # Convert numpy arrays saved as objects to lists\n # Arrays are not JSON serializable\n col = df[col_name].apply(to_list_if_array, convert_dtype=False)\n df[col_name] = col.where(col.notnull(), None)\n return df\n", "path": "altair/utils/core.py"}], "after_files": [{"content": "\"\"\"\nUtility routines\n\"\"\"\nimport re\nimport warnings\n\nimport pandas as pd\nimport numpy as np\n\ntry:\n from pandas.api.types import infer_dtype\nexcept ImportError: # Pandas before 0.20.0\n from pandas.lib import infer_dtype\n\nTYPECODE_MAP = {'ordinal': 'O',\n 'nominal': 'N',\n 'quantitative': 'Q',\n 'temporal': 'T'}\n\nINV_TYPECODE_MAP = {v: k for k, v in TYPECODE_MAP.items()}\n\nTYPE_ABBR = TYPECODE_MAP.values()\n\n\ndef parse_shorthand(shorthand):\n \"\"\"\n Parse the shorthand expression for aggregation, field, and type.\n\n These are of the form:\n\n - \"col_name\"\n - \"col_name:O\"\n - \"average(col_name)\"\n - \"average(col_name):O\"\n\n Parameters\n ----------\n shorthand: str\n Shorthand string\n\n Returns\n -------\n D : dict\n Dictionary containing the field, aggregate, and typecode\n \"\"\"\n if not shorthand:\n return {}\n\n # Must 
import this here to avoid circular imports\n from ..schema import AggregateOp\n valid_aggregates = AggregateOp().values\n valid_typecodes = list(TYPECODE_MAP) + list(INV_TYPECODE_MAP)\n\n # build regular expressions\n units = dict(field='(?P<field>.*)',\n type='(?P<type>{0})'.format('|'.join(valid_typecodes)),\n aggregate='(?P<aggregate>{0})'.format('|'.join(valid_aggregates)))\n patterns = [r'{field}',\n r'{field}:{type}',\n r'{aggregate}\\({field}\\)',\n r'{aggregate}\\({field}\\):{type}']\n regexps = (re.compile('\\A' + p.format(**units) + '\\Z', re.DOTALL)\n for p in patterns[::-1])\n\n # find matches depending on valid fields passed\n match = next(exp.match(shorthand).groupdict() for exp in regexps\n if exp.match(shorthand))\n\n # Use short form of the type expression\n typ = match.get('type', None)\n if typ:\n match['type'] = INV_TYPECODE_MAP.get(typ, typ)\n return match\n\n\ndef construct_shorthand(field=None, aggregate=None, type=None):\n \"\"\"Construct a shorthand representation.\n\n See also: parse_shorthand\"\"\"\n if field is None:\n return ''\n\n sh = field\n\n if aggregate is not None:\n sh = '{0}({1})'.format(aggregate, sh)\n\n if type is not None:\n type = TYPECODE_MAP.get(type, type)\n if type not in TYPE_ABBR:\n raise ValueError('Unrecognized Type: {0}'.format(type))\n sh = '{0}:{1}'.format(sh, type)\n\n return sh\n\n\ndef infer_vegalite_type(data, field=None):\n \"\"\"\n From an array-like input, infer the correct vega typecode\n ('ordinal', 'nominal', 'quantitative', or 'temporal')\n\n Parameters\n ----------\n data: Numpy array or Pandas Series\n field: str column name\n \"\"\"\n # See if we can read the type from the field\n if field is not None:\n parsed = parse_shorthand(field)\n if parsed.get('type'):\n return parsed['type']\n\n # Otherwise, infer based on the dtype of the input\n typ = infer_dtype(data)\n\n # TODO: Once this returns 'O', please update test_select_x and test_select_y in test_api.py\n\n if typ in ['floating', 'mixed-integer-float', 'integer',\n 'mixed-integer', 'complex']:\n return 'quantitative'\n elif typ in ['string', 'bytes', 'categorical', 'boolean', 'mixed', 'unicode']:\n return 'nominal'\n elif typ in ['datetime', 'datetime64', 'timedelta',\n 'timedelta64', 'date', 'time', 'period']:\n return 'temporal'\n else:\n warnings.warn(\"I don't know how to infer vegalite type from '{0}'. 
\"\n \"Defaulting to nominal.\".format(typ))\n return 'nominal'\n\n\ndef sanitize_dataframe(df):\n \"\"\"Sanitize a DataFrame to prepare it for serialization.\n\n * Make a copy\n * Raise ValueError if it has a hierarchical index.\n * Convert categoricals to strings.\n * Convert np.bool_ dtypes to Python bool objects\n * Convert np.int dtypes to Python int objects\n * Convert floats to objects and replace NaNs by None.\n * Convert DateTime dtypes into appropriate string representations\n \"\"\"\n df = df.copy()\n\n if isinstance(df.index, pd.core.index.MultiIndex):\n raise ValueError('Hierarchical indices not supported')\n if isinstance(df.columns, pd.core.index.MultiIndex):\n raise ValueError('Hierarchical indices not supported')\n\n def to_list_if_array(val):\n if isinstance(val, np.ndarray):\n return val.tolist()\n else:\n return val\n\n for col_name, dtype in df.dtypes.iteritems():\n if str(dtype) == 'category':\n # XXXX: work around bug in to_json for categorical types\n # https://github.com/pydata/pandas/issues/10778\n df[col_name] = df[col_name].astype(str)\n elif str(dtype) == 'bool':\n # convert numpy bools to objects; np.bool is not JSON serializable\n df[col_name] = df[col_name].astype(object)\n elif np.issubdtype(dtype, np.integer):\n # convert integers to objects; np.int is not JSON serializable\n df[col_name] = df[col_name].astype(object)\n elif np.issubdtype(dtype, np.floating):\n # For floats, convert nan->None: np.float is not JSON serializable\n col = df[col_name].astype(object)\n df[col_name] = col.where(col.notnull(), None)\n elif str(dtype).startswith('datetime'):\n # Convert datetimes to strings\n # astype(str) will choose the appropriate resolution\n df[col_name] = df[col_name].astype(str).replace('NaT', '')\n elif dtype == object:\n # Convert numpy arrays saved as objects to lists\n # Arrays are not JSON serializable\n col = df[col_name].apply(to_list_if_array, convert_dtype=False)\n df[col_name] = col.where(col.notnull(), None)\n return df\n", "path": "altair/utils/core.py"}]}
| 2,157 | 191 |
gh_patches_debug_34021
|
rasdani/github-patches
|
git_diff
|
google__TensorNetwork-820
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Einsum support?
Should we extend our API to support einsum equations? It could potentially make connecting nodes much less verbose. However, I question whether anyone would want to use `tn.einsum` over, say, `np.einsum`. Perhaps we could support only doing the left side of the equation?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensornetwork/__init__.py`
Content:
```
1 from tensornetwork.network_components import (AbstractNode, CopyNode, Edge,
2 Node, NodeCollection)
3 from tensornetwork.network_operations import (
4 check_connected, check_correct, contract_trace_edges, copy, get_all_edges,
5 get_all_nodes, get_neighbors, get_subgraph_dangling, reachable,
6 reduced_density, remove_node, replicate_nodes, split_node,
7 split_node_full_svd, split_node_qr, split_node_rq, switch_backend)
8
9 from tensornetwork.tensor import Tensor
10 from tensornetwork.linalg.initialization import (
11 eye,
12 ones,
13 randn,
14 random_uniform,
15 zeros
16 )
17
18 from tensornetwork.linalg.linalg import norm, qr, svd
19
20 #pylint: disable=redefined-builtin
21 from tensornetwork.linalg.operations import (
22 tensordot,
23 reshape,
24 transpose,
25 take_slice,
26 shape,
27 sqrt,
28 outer,
29 einsum,
30 conj,
31 hconj,
32 sin,
33 cos,
34 exp,
35 log,
36 diagonal,
37 diagflat,
38 trace,
39 sign,
40 abs,
41 kron,
42 pivot
43 )
44
45 from tensornetwork.backends.decorators import jit
46
47 from tensornetwork.network_components import (
48 contract, contract_between, contract_copy_node, contract_parallel,
49 flatten_all_edges, flatten_edges, flatten_edges_between,
50 get_all_nondangling, get_all_dangling, get_parallel_edges, get_shared_edges,
51 outer_product, outer_product_final_nodes, slice_edge, split_edge)
52 from tensornetwork.backends.abstract_backend import AbstractBackend
53 from tensornetwork.network_components import connect, disconnect
54 from tensornetwork.ncon_interface import ncon
55 from tensornetwork.version import __version__
56 from tensornetwork.visualization.graphviz import to_graphviz
57 from tensornetwork import contractors
58 from tensornetwork.utils import load_nodes, save_nodes
59 from tensornetwork.matrixproductstates.infinite_mps import InfiniteMPS
60 from tensornetwork.matrixproductstates.finite_mps import FiniteMPS
61 from tensornetwork.matrixproductstates.dmrg import FiniteDMRG
62 from tensornetwork.matrixproductstates.mpo import FiniteTFI, FiniteXXZ
63 from tensornetwork.backend_contextmanager import DefaultBackend
64 from tensornetwork.backend_contextmanager import set_default_backend
65 from tensornetwork import block_sparse
66 from tensornetwork.block_sparse.blocksparsetensor import BlockSparseTensor
67 from tensornetwork.block_sparse.blocksparsetensor import ChargeArray
68 from tensornetwork.block_sparse.index import Index
69 from tensornetwork.block_sparse.charge import U1Charge, BaseCharge, Z2Charge
70 from tensornetwork.block_sparse.charge import ZNCharge
71
```
Path: `tensornetwork/utils.py`
Content:
```
1 # Copyright 2019 The TensorNetwork Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import h5py
16 from tensornetwork.component_factory import get_component
17 from tensornetwork.network_components import Edge, AbstractNode
18 from tensornetwork.network_operations import reachable, get_all_edges
19 from typing import List, Union, BinaryIO
20 import numpy as np
21 string_type = h5py.special_dtype(vlen=str)
22
23
24 def save_nodes(nodes: List[AbstractNode], path: Union[str, BinaryIO]) -> None:
25 """Save an iterable of nodes into hdf5 format.
26
27 Args:
28 nodes: An iterable of connected nodes. All nodes have to connect within
29 `nodes`.
30 path: path to file where network is saved.
31 """
32 if reachable(nodes) > set(nodes):
33 raise ValueError(
34 "Some nodes in `nodes` are connected to nodes not contained in `nodes`."
35 " Saving not possible.")
36 if len(set(nodes)) < len(list(nodes)):
37 raise ValueError(
38 'Some nodes in `nodes` appear more than once. This is not supported')
39 #we need to iterate twice and order matters
40 edges = list(get_all_edges(nodes))
41 nodes = list(nodes)
42
43 old_edge_names = {n: edge.name for n, edge in enumerate(edges)}
44 old_node_names = {n: node.name for n, node in enumerate(nodes)}
45
46 #generate unique names for nodes and edges
47 #for saving them
48 for n, node in enumerate(nodes):
49 node.set_name('node{}'.format(n))
50
51 for e, edge in enumerate(edges):
52 edge.set_name('edge{}'.format(e))
53
54 with h5py.File(path, 'w') as net_file:
55 nodes_group = net_file.create_group('nodes')
56 node_names_group = net_file.create_group('node_names')
57 node_names_group.create_dataset(
58 'names',
59 dtype=string_type,
60 data=np.array(list(old_node_names.values()), dtype=object))
61
62 edges_group = net_file.create_group('edges')
63 edge_names_group = net_file.create_group('edge_names')
64 edge_names_group.create_dataset(
65 'names',
66 dtype=string_type,
67 data=np.array(list(old_edge_names.values()), dtype=object))
68
69 for n, node in enumerate(nodes):
70 node_group = nodes_group.create_group(node.name)
71 node._save_node(node_group)
72 for edge in node.edges:
73 if edge.node1 == node and edge in edges:
74 edge_group = edges_group.create_group(edge.name)
75 edge._save_edge(edge_group)
76 edges.remove(edge)
77
78 #name edges and nodes back to their original names
79 for n, node in enumerate(nodes):
80 nodes[n].set_name(old_node_names[n])
81
82 for n, edge in enumerate(edges):
83 edges[n].set_name(old_edge_names[n])
84
85
86 def load_nodes(path: str) -> List[AbstractNode]:
87 """Load a set of nodes from disk.
88
89 Args:
90 path: path to file where network is saved.
91 Returns:
92 An iterable of `Node` objects
93 """
94 nodes_list = []
95 edges_list = []
96 with h5py.File(path, 'r') as net_file:
97 nodes = list(net_file["nodes"].keys())
98 node_names = {
99 'node{}'.format(n): v
100 for n, v in enumerate(net_file["node_names"]['names'][()])
101 }
102
103 edge_names = {
104 'edge{}'.format(n): v
105 for n, v in enumerate(net_file["edge_names"]['names'][()])
106 }
107 edges = list(net_file["edges"].keys())
108 for node_name in nodes:
109 node_data = net_file["nodes/" + node_name]
110 node_type = get_component(node_data['type'][()])
111 nodes_list.append(node_type._load_node(node_data=node_data))
112 nodes_dict = {node.name: node for node in nodes_list}
113 for edge in edges:
114 edge_data = net_file["edges/" + edge]
115 edges_list.append(Edge._load_edge(edge_data, nodes_dict))
116
117 for edge in edges_list:
118 edge.set_name(edge_names[edge.name])
119 for node in nodes_list:
120 node.set_name(node_names[node.name])
121
122 return nodes_list
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tensornetwork/__init__.py b/tensornetwork/__init__.py
--- a/tensornetwork/__init__.py
+++ b/tensornetwork/__init__.py
@@ -55,7 +55,7 @@
from tensornetwork.version import __version__
from tensornetwork.visualization.graphviz import to_graphviz
from tensornetwork import contractors
-from tensornetwork.utils import load_nodes, save_nodes
+from tensornetwork.utils import load_nodes, save_nodes, from_topology
from tensornetwork.matrixproductstates.infinite_mps import InfiniteMPS
from tensornetwork.matrixproductstates.finite_mps import FiniteMPS
from tensornetwork.matrixproductstates.dmrg import FiniteDMRG
diff --git a/tensornetwork/utils.py b/tensornetwork/utils.py
--- a/tensornetwork/utils.py
+++ b/tensornetwork/utils.py
@@ -14,7 +14,7 @@
import h5py
from tensornetwork.component_factory import get_component
-from tensornetwork.network_components import Edge, AbstractNode
+from tensornetwork.network_components import Edge, AbstractNode, Node
from tensornetwork.network_operations import reachable, get_all_edges
from typing import List, Union, BinaryIO
import numpy as np
@@ -120,3 +120,37 @@
node.set_name(node_names[node.name])
return nodes_list
+
+def from_topology(topology, tensors, backend=None):
+ """Create and connect new `tn.Node`s by the given einsum-like topology.
+
+ Example:
+ ```
+ a, b, c = tn.from_topology("xy,yz,zx", [a, b, c])
+ ```
+ Args:
+ topology: A string that defines the topology. Should be like
+ the left side of an einsum expression.
+ tensors: The tensors needed to create the nodes.
+
+ Returns:
+ A list of Nodes.
+ """
+ edge_dict = {}
+ nodes = []
+ split_list = topology.split(",")
+ if len(split_list) != len(tensors):
+ raise ValueError("topology and number of tensors is mismatched")
+ for local_axes, tensor in zip(split_list, tensors):
+ local_axes_list = list(local_axes)
+ if len(local_axes_list) != len(tensor.shape):
+ raise ValueError(f"{local_axes} does not match shape {tensor.shape}")
+ new_node = Node(tensor, axis_names=local_axes_list, backend=backend)
+ for c in local_axes:
+ if c in edge_dict:
+ edge_dict[c] = edge_dict[c] ^ new_node[c]
+ else:
+ edge_dict[c] = new_node[c]
+ nodes.append(new_node)
+ return nodes
+
|
{"golden_diff": "diff --git a/tensornetwork/__init__.py b/tensornetwork/__init__.py\n--- a/tensornetwork/__init__.py\n+++ b/tensornetwork/__init__.py\n@@ -55,7 +55,7 @@\n from tensornetwork.version import __version__\n from tensornetwork.visualization.graphviz import to_graphviz\n from tensornetwork import contractors\n-from tensornetwork.utils import load_nodes, save_nodes\n+from tensornetwork.utils import load_nodes, save_nodes, from_topology\n from tensornetwork.matrixproductstates.infinite_mps import InfiniteMPS\n from tensornetwork.matrixproductstates.finite_mps import FiniteMPS\n from tensornetwork.matrixproductstates.dmrg import FiniteDMRG\ndiff --git a/tensornetwork/utils.py b/tensornetwork/utils.py\n--- a/tensornetwork/utils.py\n+++ b/tensornetwork/utils.py\n@@ -14,7 +14,7 @@\n \n import h5py\n from tensornetwork.component_factory import get_component\n-from tensornetwork.network_components import Edge, AbstractNode\n+from tensornetwork.network_components import Edge, AbstractNode, Node\n from tensornetwork.network_operations import reachable, get_all_edges\n from typing import List, Union, BinaryIO\n import numpy as np\n@@ -120,3 +120,37 @@\n node.set_name(node_names[node.name])\n \n return nodes_list\n+\n+def from_topology(topology, tensors, backend=None):\n+ \"\"\"Create and connect new `tn.Node`s by the given einsum-like topology.\n+ \n+ Example:\n+ ```\n+ a, b, c = tn.from_topology(\"xy,yz,zx\", [a, b, c])\n+ ```\n+ Args:\n+ topology: A string that defines the topology. Should be like\n+ the left side of an einsum expression.\n+ tensors: The tensors needed to create the nodes.\n+\n+ Returns:\n+ A list of Nodes.\n+ \"\"\"\n+ edge_dict = {}\n+ nodes = []\n+ split_list = topology.split(\",\")\n+ if len(split_list) != len(tensors):\n+ raise ValueError(\"topology and number of tensors is mismatched\")\n+ for local_axes, tensor in zip(split_list, tensors):\n+ local_axes_list = list(local_axes)\n+ if len(local_axes_list) != len(tensor.shape):\n+ raise ValueError(f\"{local_axes} does not match shape {tensor.shape}\")\n+ new_node = Node(tensor, axis_names=local_axes_list, backend=backend)\n+ for c in local_axes:\n+ if c in edge_dict:\n+ edge_dict[c] = edge_dict[c] ^ new_node[c]\n+ else:\n+ edge_dict[c] = new_node[c]\n+ nodes.append(new_node)\n+ return nodes \n+\n", "issue": "Einsum support?\nShould we extend our API to support einsum equations? It could potentially make connecting nodes much less verbose. However, I question whether anyone would want to use `tn.einsum` over say `np.einsum`. 
Perhaps we could support only doing the left side of the equation?\n", "before_files": [{"content": "from tensornetwork.network_components import (AbstractNode, CopyNode, Edge,\n Node, NodeCollection)\nfrom tensornetwork.network_operations import (\n check_connected, check_correct, contract_trace_edges, copy, get_all_edges,\n get_all_nodes, get_neighbors, get_subgraph_dangling, reachable,\n reduced_density, remove_node, replicate_nodes, split_node,\n split_node_full_svd, split_node_qr, split_node_rq, switch_backend)\n\nfrom tensornetwork.tensor import Tensor\nfrom tensornetwork.linalg.initialization import (\n eye,\n ones,\n randn,\n random_uniform,\n zeros\n )\n\nfrom tensornetwork.linalg.linalg import norm, qr, svd\n\n#pylint: disable=redefined-builtin\nfrom tensornetwork.linalg.operations import (\n tensordot,\n reshape,\n transpose,\n take_slice,\n shape,\n sqrt,\n outer,\n einsum,\n conj,\n hconj,\n sin,\n cos,\n exp,\n log,\n diagonal,\n diagflat,\n trace,\n sign,\n abs,\n kron,\n pivot\n )\n\nfrom tensornetwork.backends.decorators import jit\n\nfrom tensornetwork.network_components import (\n contract, contract_between, contract_copy_node, contract_parallel,\n flatten_all_edges, flatten_edges, flatten_edges_between,\n get_all_nondangling, get_all_dangling, get_parallel_edges, get_shared_edges,\n outer_product, outer_product_final_nodes, slice_edge, split_edge)\nfrom tensornetwork.backends.abstract_backend import AbstractBackend\nfrom tensornetwork.network_components import connect, disconnect\nfrom tensornetwork.ncon_interface import ncon\nfrom tensornetwork.version import __version__\nfrom tensornetwork.visualization.graphviz import to_graphviz\nfrom tensornetwork import contractors\nfrom tensornetwork.utils import load_nodes, save_nodes\nfrom tensornetwork.matrixproductstates.infinite_mps import InfiniteMPS\nfrom tensornetwork.matrixproductstates.finite_mps import FiniteMPS\nfrom tensornetwork.matrixproductstates.dmrg import FiniteDMRG\nfrom tensornetwork.matrixproductstates.mpo import FiniteTFI, FiniteXXZ\nfrom tensornetwork.backend_contextmanager import DefaultBackend\nfrom tensornetwork.backend_contextmanager import set_default_backend\nfrom tensornetwork import block_sparse\nfrom tensornetwork.block_sparse.blocksparsetensor import BlockSparseTensor\nfrom tensornetwork.block_sparse.blocksparsetensor import ChargeArray\nfrom tensornetwork.block_sparse.index import Index\nfrom tensornetwork.block_sparse.charge import U1Charge, BaseCharge, Z2Charge\nfrom tensornetwork.block_sparse.charge import ZNCharge\n", "path": "tensornetwork/__init__.py"}, {"content": "# Copyright 2019 The TensorNetwork Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport h5py\nfrom tensornetwork.component_factory import get_component\nfrom tensornetwork.network_components import Edge, AbstractNode\nfrom tensornetwork.network_operations import reachable, get_all_edges\nfrom typing import List, Union, BinaryIO\nimport numpy as np\nstring_type = h5py.special_dtype(vlen=str)\n\n\ndef 
save_nodes(nodes: List[AbstractNode], path: Union[str, BinaryIO]) -> None:\n \"\"\"Save an iterable of nodes into hdf5 format.\n\n Args:\n nodes: An iterable of connected nodes. All nodes have to connect within\n `nodes`.\n path: path to file where network is saved.\n \"\"\"\n if reachable(nodes) > set(nodes):\n raise ValueError(\n \"Some nodes in `nodes` are connected to nodes not contained in `nodes`.\"\n \" Saving not possible.\")\n if len(set(nodes)) < len(list(nodes)):\n raise ValueError(\n 'Some nodes in `nodes` appear more than once. This is not supported')\n #we need to iterate twice and order matters\n edges = list(get_all_edges(nodes))\n nodes = list(nodes)\n\n old_edge_names = {n: edge.name for n, edge in enumerate(edges)}\n old_node_names = {n: node.name for n, node in enumerate(nodes)}\n\n #generate unique names for nodes and edges\n #for saving them\n for n, node in enumerate(nodes):\n node.set_name('node{}'.format(n))\n\n for e, edge in enumerate(edges):\n edge.set_name('edge{}'.format(e))\n\n with h5py.File(path, 'w') as net_file:\n nodes_group = net_file.create_group('nodes')\n node_names_group = net_file.create_group('node_names')\n node_names_group.create_dataset(\n 'names',\n dtype=string_type,\n data=np.array(list(old_node_names.values()), dtype=object))\n\n edges_group = net_file.create_group('edges')\n edge_names_group = net_file.create_group('edge_names')\n edge_names_group.create_dataset(\n 'names',\n dtype=string_type,\n data=np.array(list(old_edge_names.values()), dtype=object))\n\n for n, node in enumerate(nodes):\n node_group = nodes_group.create_group(node.name)\n node._save_node(node_group)\n for edge in node.edges:\n if edge.node1 == node and edge in edges:\n edge_group = edges_group.create_group(edge.name)\n edge._save_edge(edge_group)\n edges.remove(edge)\n\n #name edges and nodes back to their original names\n for n, node in enumerate(nodes):\n nodes[n].set_name(old_node_names[n])\n\n for n, edge in enumerate(edges):\n edges[n].set_name(old_edge_names[n])\n\n\ndef load_nodes(path: str) -> List[AbstractNode]:\n \"\"\"Load a set of nodes from disk.\n\n Args:\n path: path to file where network is saved.\n Returns:\n An iterable of `Node` objects\n \"\"\"\n nodes_list = []\n edges_list = []\n with h5py.File(path, 'r') as net_file:\n nodes = list(net_file[\"nodes\"].keys())\n node_names = {\n 'node{}'.format(n): v\n for n, v in enumerate(net_file[\"node_names\"]['names'][()])\n }\n\n edge_names = {\n 'edge{}'.format(n): v\n for n, v in enumerate(net_file[\"edge_names\"]['names'][()])\n }\n edges = list(net_file[\"edges\"].keys())\n for node_name in nodes:\n node_data = net_file[\"nodes/\" + node_name]\n node_type = get_component(node_data['type'][()])\n nodes_list.append(node_type._load_node(node_data=node_data))\n nodes_dict = {node.name: node for node in nodes_list}\n for edge in edges:\n edge_data = net_file[\"edges/\" + edge]\n edges_list.append(Edge._load_edge(edge_data, nodes_dict))\n\n for edge in edges_list:\n edge.set_name(edge_names[edge.name])\n for node in nodes_list:\n node.set_name(node_names[node.name])\n\n return nodes_list\n", "path": "tensornetwork/utils.py"}], "after_files": [{"content": "from tensornetwork.network_components import (AbstractNode, CopyNode, Edge,\n Node, NodeCollection)\nfrom tensornetwork.network_operations import (\n check_connected, check_correct, contract_trace_edges, copy, get_all_edges,\n get_all_nodes, get_neighbors, get_subgraph_dangling, reachable,\n reduced_density, remove_node, replicate_nodes, split_node,\n 
split_node_full_svd, split_node_qr, split_node_rq, switch_backend)\n\nfrom tensornetwork.tensor import Tensor\nfrom tensornetwork.linalg.initialization import (\n eye,\n ones,\n randn,\n random_uniform,\n zeros\n )\n\nfrom tensornetwork.linalg.linalg import norm, qr, svd\n\n#pylint: disable=redefined-builtin\nfrom tensornetwork.linalg.operations import (\n tensordot,\n reshape,\n transpose,\n take_slice,\n shape,\n sqrt,\n outer,\n einsum,\n conj,\n hconj,\n sin,\n cos,\n exp,\n log,\n diagonal,\n diagflat,\n trace,\n sign,\n abs,\n kron,\n pivot\n )\n\nfrom tensornetwork.backends.decorators import jit\n\nfrom tensornetwork.network_components import (\n contract, contract_between, contract_copy_node, contract_parallel,\n flatten_all_edges, flatten_edges, flatten_edges_between,\n get_all_nondangling, get_all_dangling, get_parallel_edges, get_shared_edges,\n outer_product, outer_product_final_nodes, slice_edge, split_edge)\nfrom tensornetwork.backends.abstract_backend import AbstractBackend\nfrom tensornetwork.network_components import connect, disconnect\nfrom tensornetwork.ncon_interface import ncon\nfrom tensornetwork.version import __version__\nfrom tensornetwork.visualization.graphviz import to_graphviz\nfrom tensornetwork import contractors\nfrom tensornetwork.utils import load_nodes, save_nodes, from_topology\nfrom tensornetwork.matrixproductstates.infinite_mps import InfiniteMPS\nfrom tensornetwork.matrixproductstates.finite_mps import FiniteMPS\nfrom tensornetwork.matrixproductstates.dmrg import FiniteDMRG\nfrom tensornetwork.matrixproductstates.mpo import FiniteTFI, FiniteXXZ\nfrom tensornetwork.backend_contextmanager import DefaultBackend\nfrom tensornetwork.backend_contextmanager import set_default_backend\nfrom tensornetwork import block_sparse\nfrom tensornetwork.block_sparse.blocksparsetensor import BlockSparseTensor\nfrom tensornetwork.block_sparse.blocksparsetensor import ChargeArray\nfrom tensornetwork.block_sparse.index import Index\nfrom tensornetwork.block_sparse.charge import U1Charge, BaseCharge, Z2Charge\nfrom tensornetwork.block_sparse.charge import ZNCharge\n", "path": "tensornetwork/__init__.py"}, {"content": "# Copyright 2019 The TensorNetwork Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport h5py\nfrom tensornetwork.component_factory import get_component\nfrom tensornetwork.network_components import Edge, AbstractNode, Node\nfrom tensornetwork.network_operations import reachable, get_all_edges\nfrom typing import List, Union, BinaryIO\nimport numpy as np\nstring_type = h5py.special_dtype(vlen=str)\n\n\ndef save_nodes(nodes: List[AbstractNode], path: Union[str, BinaryIO]) -> None:\n \"\"\"Save an iterable of nodes into hdf5 format.\n\n Args:\n nodes: An iterable of connected nodes. 
All nodes have to connect within\n `nodes`.\n path: path to file where network is saved.\n \"\"\"\n if reachable(nodes) > set(nodes):\n raise ValueError(\n \"Some nodes in `nodes` are connected to nodes not contained in `nodes`.\"\n \" Saving not possible.\")\n if len(set(nodes)) < len(list(nodes)):\n raise ValueError(\n 'Some nodes in `nodes` appear more than once. This is not supported')\n #we need to iterate twice and order matters\n edges = list(get_all_edges(nodes))\n nodes = list(nodes)\n\n old_edge_names = {n: edge.name for n, edge in enumerate(edges)}\n old_node_names = {n: node.name for n, node in enumerate(nodes)}\n\n #generate unique names for nodes and edges\n #for saving them\n for n, node in enumerate(nodes):\n node.set_name('node{}'.format(n))\n\n for e, edge in enumerate(edges):\n edge.set_name('edge{}'.format(e))\n\n with h5py.File(path, 'w') as net_file:\n nodes_group = net_file.create_group('nodes')\n node_names_group = net_file.create_group('node_names')\n node_names_group.create_dataset(\n 'names',\n dtype=string_type,\n data=np.array(list(old_node_names.values()), dtype=object))\n\n edges_group = net_file.create_group('edges')\n edge_names_group = net_file.create_group('edge_names')\n edge_names_group.create_dataset(\n 'names',\n dtype=string_type,\n data=np.array(list(old_edge_names.values()), dtype=object))\n\n for n, node in enumerate(nodes):\n node_group = nodes_group.create_group(node.name)\n node._save_node(node_group)\n for edge in node.edges:\n if edge.node1 == node and edge in edges:\n edge_group = edges_group.create_group(edge.name)\n edge._save_edge(edge_group)\n edges.remove(edge)\n\n #name edges and nodes back to their original names\n for n, node in enumerate(nodes):\n nodes[n].set_name(old_node_names[n])\n\n for n, edge in enumerate(edges):\n edges[n].set_name(old_edge_names[n])\n\n\ndef load_nodes(path: str) -> List[AbstractNode]:\n \"\"\"Load a set of nodes from disk.\n\n Args:\n path: path to file where network is saved.\n Returns:\n An iterable of `Node` objects\n \"\"\"\n nodes_list = []\n edges_list = []\n with h5py.File(path, 'r') as net_file:\n nodes = list(net_file[\"nodes\"].keys())\n node_names = {\n 'node{}'.format(n): v\n for n, v in enumerate(net_file[\"node_names\"]['names'][()])\n }\n\n edge_names = {\n 'edge{}'.format(n): v\n for n, v in enumerate(net_file[\"edge_names\"]['names'][()])\n }\n edges = list(net_file[\"edges\"].keys())\n for node_name in nodes:\n node_data = net_file[\"nodes/\" + node_name]\n node_type = get_component(node_data['type'][()])\n nodes_list.append(node_type._load_node(node_data=node_data))\n nodes_dict = {node.name: node for node in nodes_list}\n for edge in edges:\n edge_data = net_file[\"edges/\" + edge]\n edges_list.append(Edge._load_edge(edge_data, nodes_dict))\n\n for edge in edges_list:\n edge.set_name(edge_names[edge.name])\n for node in nodes_list:\n node.set_name(node_names[node.name])\n\n return nodes_list\n\ndef from_topology(topology, tensors, backend=None):\n \"\"\"Create and connect new `tn.Node`s by the given einsum-like topology.\n \n Example:\n ```\n a, b, c = tn.from_topology(\"xy,yz,zx\", [a, b, c])\n ```\n Args:\n topology: A string that defines the topology. 
Should be like\n the left side of an einsum expression.\n tensors: The tensors needed to create the nodes.\n\n Returns:\n A list of Nodes.\n \"\"\"\n edge_dict = {}\n nodes = []\n split_list = topology.split(\",\")\n if len(split_list) != len(tensors):\n raise ValueError(\"topology and number of tensors is mismatched\")\n for local_axes, tensor in zip(split_list, tensors):\n local_axes_list = list(local_axes)\n if len(local_axes_list) != len(tensor.shape):\n raise ValueError(f\"{local_axes} does not match shape {tensor.shape}\")\n new_node = Node(tensor, axis_names=local_axes_list, backend=backend)\n for c in local_axes:\n if c in edge_dict:\n edge_dict[c] = edge_dict[c] ^ new_node[c]\n else:\n edge_dict[c] = new_node[c]\n nodes.append(new_node)\n return nodes \n \n", "path": "tensornetwork/utils.py"}]}
| 2,308 | 612 |
gh_patches_debug_56668
|
rasdani/github-patches
|
git_diff
|
magenta__magenta-841
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
execfile() was removed from Python 3
https://github.com/tensorflow/magenta/blob/master/magenta/tools/pip/setup.py#L23
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `magenta/tools/pip/setup.py`
Content:
```
1 # Copyright 2016 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """A setuptools based setup module for magenta."""
15
16 from setuptools import find_packages
17 from setuptools import setup
18
19 # Bit of a hack to parse the version string stored in version.py without
20 # executing __init__.py, which will end up requiring a bunch of dependencies to
21 # execute (e.g., tensorflow, pretty_midi, etc.).
22 # Makes the __version__ variable available.
23 execfile('magenta/version.py')
24
25
26 REQUIRED_PACKAGES = [
27 'IPython',
28 'Pillow >= 3.4.2',
29 'bokeh >= 0.12.0',
30 'futures',
31 'intervaltree >= 2.1.0',
32 'matplotlib >= 1.5.3',
33 'mido == 1.2.6',
34 'numpy >= 1.11.0',
35 'pandas >= 0.18.1',
36 'pretty_midi >= 0.2.6',
37 'python-rtmidi',
38 'scipy >= 0.18.1',
39 'tensorflow >= 1.1.0',
40 'wheel',
41 ]
42
43 CONSOLE_SCRIPTS = [
44 'magenta.interfaces.midi.magenta_midi',
45 'magenta.interfaces.midi.midi_clock',
46 'magenta.models.drums_rnn.drums_rnn_create_dataset',
47 'magenta.models.drums_rnn.drums_rnn_generate',
48 'magenta.models.drums_rnn.drums_rnn_train',
49 'magenta.models.image_stylization.image_stylization_create_dataset',
50 'magenta.models.image_stylization.image_stylization_evaluate',
51 'magenta.models.image_stylization.image_stylization_finetune',
52 'magenta.models.image_stylization.image_stylization_train',
53 'magenta.models.image_stylization.image_stylization_transform',
54 'magenta.models.improv_rnn.improv_rnn_create_dataset',
55 'magenta.models.improv_rnn.improv_rnn_generate',
56 'magenta.models.improv_rnn.improv_rnn_train',
57 'magenta.models.melody_rnn.melody_rnn_create_dataset',
58 'magenta.models.melody_rnn.melody_rnn_generate',
59 'magenta.models.melody_rnn.melody_rnn_train',
60 'magenta.models.nsynth.wavenet.nsynth_generate',
61 'magenta.models.nsynth.wavenet.nsynth_save_embeddings',
62 'magenta.models.performance_rnn.performance_rnn_create_dataset',
63 'magenta.models.performance_rnn.performance_rnn_generate',
64 'magenta.models.performance_rnn.performance_rnn_train',
65 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_create_dataset',
66 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_generate',
67 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_train',
68 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',
69 'magenta.models.polyphony_rnn.polyphony_rnn_generate',
70 'magenta.models.polyphony_rnn.polyphony_rnn_train',
71 'magenta.models.rl_tuner.rl_tuner_train',
72 'magenta.models.sketch_rnn.sketch_rnn_train',
73 'magenta.scripts.convert_dir_to_note_sequences',
74 ]
75
76 setup(
77 name='magenta',
78 version=__version__, # pylint: disable=undefined-variable
79 description='Use machine learning to create art and music',
80 long_description='',
81 url='https://magenta.tensorflow.org/',
82 author='Google Inc.',
83 author_email='[email protected]',
84 license='Apache 2',
85 # PyPI package information.
86 classifiers=[
87 'Development Status :: 4 - Beta',
88 'Intended Audience :: Developers',
89 'Intended Audience :: Education',
90 'Intended Audience :: Science/Research',
91 'License :: OSI Approved :: Apache Software License',
92 'Programming Language :: Python :: 2.7',
93 'Programming Language :: Python :: 3',
94 'Topic :: Scientific/Engineering :: Mathematics',
95 'Topic :: Software Development :: Libraries :: Python Modules',
96 'Topic :: Software Development :: Libraries',
97 ],
98 keywords='tensorflow machine learning magenta music art',
99
100 packages=find_packages(),
101 install_requires=REQUIRED_PACKAGES,
102 entry_points={
103 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in
104 ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],
105 },
106
107 include_package_data=True,
108 package_data={
109 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],
110 },
111 )
112
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/magenta/tools/pip/setup.py b/magenta/tools/pip/setup.py
--- a/magenta/tools/pip/setup.py
+++ b/magenta/tools/pip/setup.py
@@ -20,7 +20,8 @@
# executing __init__.py, which will end up requiring a bunch of dependencies to
# execute (e.g., tensorflow, pretty_midi, etc.).
# Makes the __version__ variable available.
-execfile('magenta/version.py')
+with open('magenta/version.py') as in_file:
+ exec(in_file.read())
REQUIRED_PACKAGES = [
|
{"golden_diff": "diff --git a/magenta/tools/pip/setup.py b/magenta/tools/pip/setup.py\n--- a/magenta/tools/pip/setup.py\n+++ b/magenta/tools/pip/setup.py\n@@ -20,7 +20,8 @@\n # executing __init__.py, which will end up requiring a bunch of dependencies to\n # execute (e.g., tensorflow, pretty_midi, etc.).\n # Makes the __version__ variable available.\n-execfile('magenta/version.py')\n+with open('magenta/version.py') as in_file:\n+ exec(in_file.read())\n \n \n REQUIRED_PACKAGES = [\n", "issue": "execfile() was removed from Python 3\nhttps://github.com/tensorflow/magenta/blob/master/magenta/tools/pip/setup.py#L23\n", "before_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A setuptools based setup module for magenta.\"\"\"\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Bit of a hack to parse the version string stored in version.py without\n# executing __init__.py, which will end up requiring a bunch of dependencies to\n# execute (e.g., tensorflow, pretty_midi, etc.).\n# Makes the __version__ variable available.\nexecfile('magenta/version.py')\n\n\nREQUIRED_PACKAGES = [\n 'IPython',\n 'Pillow >= 3.4.2',\n 'bokeh >= 0.12.0',\n 'futures',\n 'intervaltree >= 2.1.0',\n 'matplotlib >= 1.5.3',\n 'mido == 1.2.6',\n 'numpy >= 1.11.0',\n 'pandas >= 0.18.1',\n 'pretty_midi >= 0.2.6',\n 'python-rtmidi',\n 'scipy >= 0.18.1',\n 'tensorflow >= 1.1.0',\n 'wheel',\n]\n\nCONSOLE_SCRIPTS = [\n 'magenta.interfaces.midi.magenta_midi',\n 'magenta.interfaces.midi.midi_clock',\n 'magenta.models.drums_rnn.drums_rnn_create_dataset',\n 'magenta.models.drums_rnn.drums_rnn_generate',\n 'magenta.models.drums_rnn.drums_rnn_train',\n 'magenta.models.image_stylization.image_stylization_create_dataset',\n 'magenta.models.image_stylization.image_stylization_evaluate',\n 'magenta.models.image_stylization.image_stylization_finetune',\n 'magenta.models.image_stylization.image_stylization_train',\n 'magenta.models.image_stylization.image_stylization_transform',\n 'magenta.models.improv_rnn.improv_rnn_create_dataset',\n 'magenta.models.improv_rnn.improv_rnn_generate',\n 'magenta.models.improv_rnn.improv_rnn_train',\n 'magenta.models.melody_rnn.melody_rnn_create_dataset',\n 'magenta.models.melody_rnn.melody_rnn_generate',\n 'magenta.models.melody_rnn.melody_rnn_train',\n 'magenta.models.nsynth.wavenet.nsynth_generate',\n 'magenta.models.nsynth.wavenet.nsynth_save_embeddings',\n 'magenta.models.performance_rnn.performance_rnn_create_dataset',\n 'magenta.models.performance_rnn.performance_rnn_generate',\n 'magenta.models.performance_rnn.performance_rnn_train',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_create_dataset',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_generate',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_train',\n 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',\n 'magenta.models.polyphony_rnn.polyphony_rnn_generate',\n 
'magenta.models.polyphony_rnn.polyphony_rnn_train',\n 'magenta.models.rl_tuner.rl_tuner_train',\n 'magenta.models.sketch_rnn.sketch_rnn_train',\n 'magenta.scripts.convert_dir_to_note_sequences',\n]\n\nsetup(\n name='magenta',\n version=__version__, # pylint: disable=undefined-variable\n description='Use machine learning to create art and music',\n long_description='',\n url='https://magenta.tensorflow.org/',\n author='Google Inc.',\n author_email='[email protected]',\n license='Apache 2',\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='tensorflow machine learning magenta music art',\n\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n entry_points={\n 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in\n ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],\n },\n\n include_package_data=True,\n package_data={\n 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],\n },\n)\n", "path": "magenta/tools/pip/setup.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A setuptools based setup module for magenta.\"\"\"\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n# Bit of a hack to parse the version string stored in version.py without\n# executing __init__.py, which will end up requiring a bunch of dependencies to\n# execute (e.g., tensorflow, pretty_midi, etc.).\n# Makes the __version__ variable available.\nwith open('magenta/version.py') as in_file:\n exec(in_file.read())\n\n\nREQUIRED_PACKAGES = [\n 'IPython',\n 'Pillow >= 3.4.2',\n 'bokeh >= 0.12.0',\n 'futures',\n 'intervaltree >= 2.1.0',\n 'matplotlib >= 1.5.3',\n 'mido == 1.2.6',\n 'numpy >= 1.11.0',\n 'pandas >= 0.18.1',\n 'pretty_midi >= 0.2.6',\n 'python-rtmidi',\n 'scipy >= 0.18.1',\n 'tensorflow >= 1.1.0',\n 'wheel',\n]\n\nCONSOLE_SCRIPTS = [\n 'magenta.interfaces.midi.magenta_midi',\n 'magenta.interfaces.midi.midi_clock',\n 'magenta.models.drums_rnn.drums_rnn_create_dataset',\n 'magenta.models.drums_rnn.drums_rnn_generate',\n 'magenta.models.drums_rnn.drums_rnn_train',\n 'magenta.models.image_stylization.image_stylization_create_dataset',\n 'magenta.models.image_stylization.image_stylization_evaluate',\n 'magenta.models.image_stylization.image_stylization_finetune',\n 'magenta.models.image_stylization.image_stylization_train',\n 'magenta.models.image_stylization.image_stylization_transform',\n 'magenta.models.improv_rnn.improv_rnn_create_dataset',\n 'magenta.models.improv_rnn.improv_rnn_generate',\n 
'magenta.models.improv_rnn.improv_rnn_train',\n 'magenta.models.melody_rnn.melody_rnn_create_dataset',\n 'magenta.models.melody_rnn.melody_rnn_generate',\n 'magenta.models.melody_rnn.melody_rnn_train',\n 'magenta.models.nsynth.wavenet.nsynth_generate',\n 'magenta.models.nsynth.wavenet.nsynth_save_embeddings',\n 'magenta.models.performance_rnn.performance_rnn_create_dataset',\n 'magenta.models.performance_rnn.performance_rnn_generate',\n 'magenta.models.performance_rnn.performance_rnn_train',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_create_dataset',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_generate',\n 'magenta.models.pianoroll_rnn_nade.pianoroll_rnn_nade_train',\n 'magenta.models.polyphony_rnn.polyphony_rnn_create_dataset',\n 'magenta.models.polyphony_rnn.polyphony_rnn_generate',\n 'magenta.models.polyphony_rnn.polyphony_rnn_train',\n 'magenta.models.rl_tuner.rl_tuner_train',\n 'magenta.models.sketch_rnn.sketch_rnn_train',\n 'magenta.scripts.convert_dir_to_note_sequences',\n]\n\nsetup(\n name='magenta',\n version=__version__, # pylint: disable=undefined-variable\n description='Use machine learning to create art and music',\n long_description='',\n url='https://magenta.tensorflow.org/',\n author='Google Inc.',\n author_email='[email protected]',\n license='Apache 2',\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n keywords='tensorflow machine learning magenta music art',\n\n packages=find_packages(),\n install_requires=REQUIRED_PACKAGES,\n entry_points={\n 'console_scripts': ['%s = %s:console_entry_point' % (n, p) for n, p in\n ((s.split('.')[-1], s) for s in CONSOLE_SCRIPTS)],\n },\n\n include_package_data=True,\n package_data={\n 'magenta': ['models/image_stylization/evaluation_images/*.jpg'],\n },\n)\n", "path": "magenta/tools/pip/setup.py"}]}
| 1,635 | 128 |
gh_patches_debug_30100
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-3524
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mélange de contexte entre les contenus, les tutos et les articles
Serveur : Bêta
Version : v18-RC3/ffa18f8
Système : Mac OS X El Capitain
Navigateur : Safari Version 9.0.3 (11601.4.4)
Attention, ce n'est pas simple à comprendre et la portée du bug est peut-être plus important que le scénario décrit dans cette issue mais j'ai tenté de bien cerner le bug.
Scénario :
- Rendez-vous dans le menu déroulant des tutoriels.
- Cliquez sur une catégorie ou un tag, constatez que vous êtes dans un contexte de tutoriel (cf. le fil d'ariane).
- Sur cette page de catégorie et/ou de tag, constatez que vous pouvez utiliser les liens du fil d'ariane, les boutons du menu et que vous pouvez cliqué sur une catégorie dans le menu "Catégories des tutoriels" dans la sidebar. D'ailleurs, si vous cliquez sur l'une des catégories dans la sidebar, vous restez dans le contexte des tutoriels.
- Maintenant retournez dans le menu déroulant des tutoriels et cliquez sur "Tous les tags", puis rendez-vous dans l'une des catégories listée.
- Constatez que vous êtes dans un contexte "Contenu" et non plus tutoriel. Puis vous ne pouvez
- Plus utiliser les liens dans le fil d'ariane.
- Plus créer un contenu ni aider les auteurs.
- Plus utiliser les flux.
- Vous pouvez utilisé les catégories de la sidebar mais vous restez dans le contexte des menus. Vous gardez donc les mêmes bugs.
Note 1 : Tous ces bugs sont surement identiques avec les articles.
Note 2 : Vous pouvez aussi retrouver le bug en cliquant sur les catégories et les tags depuis un contenu publié.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/tutorialv2/urls/urls_contents.py`
Content:
```
1 # coding: utf-8
2
3 from django.conf.urls import url
4
5 from zds.tutorialv2.views.views_contents import DisplayContent, CreateContent, EditContent, \
6 DeleteContent, CreateContainer, DisplayContainer, EditContainer, CreateExtract, EditExtract, \
7 DeleteContainerOrExtract, ManageBetaContent, DisplayHistory, DisplayDiff, ActivateJSFiddleInContent, MoveChild, \
8 DownloadContent, UpdateContentWithArchive, CreateContentFromArchive, ContentsWithHelps, AddAuthorToContent, \
9 RemoveAuthorFromContent, WarnTypo, DisplayBetaContent, DisplayBetaContainer, ContentOfAuthor
10
11 from zds.tutorialv2.views.views_published import SendNoteFormView, UpdateNoteView, \
12 HideReaction, ShowReaction, SendNoteAlert, SolveNoteAlert, TagsListView, ListOnlineContents, FollowContent
13
14 urlpatterns = [
15 url(r'^tutoriels/(?P<pk>\d+)/$',
16 ContentOfAuthor.as_view(type='TUTORIAL', context_object_name='tutorials'),
17 name="find-tutorial"),
18 url(r'^articles/(?P<pk>\d+)/$',
19 ContentOfAuthor.as_view(type='ARTICLE', context_object_name='articles'),
20 name="find-article"),
21
22 url(r'^aides/$', ContentsWithHelps.as_view(), name='helps'),
23 url(r'^(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',
24 DisplayContainer.as_view(public_is_prioritary=False),
25 name='view-container'),
26 url(r'^(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
27 DisplayContainer.as_view(public_is_prioritary=False),
28 name='view-container'),
29
30 url(r'^(?P<pk>\d+)/(?P<slug>.+)/$', DisplayContent.as_view(public_is_prioritary=False),
31 name='view'),
32
33 url(r'^telecharger/(?P<pk>\d+)/(?P<slug>.+)/$', DownloadContent.as_view(),
34 name='download-zip'),
35
36 # beta:
37 url(r'^beta/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',
38 DisplayBetaContainer.as_view(public_is_prioritary=False),
39 name='beta-view-container'),
40 url(r'^beta/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
41 DisplayBetaContainer.as_view(public_is_prioritary=False),
42 name='beta-view-container'),
43
44 url(r'^beta/(?P<pk>\d+)/(?P<slug>.+)/$', DisplayBetaContent.as_view(), name='beta-view'),
45
46 # reactions:
47 url(r'^reactions/ajouter/$', SendNoteFormView.as_view(redirection_is_needed=False), name="add-reaction"),
48 url(r'^reactions/editer/$', UpdateNoteView.as_view(redirection_is_needed=False), name="update-reaction"),
49 url(r'^reactions/cacher/(?P<pk>\d+)/$', HideReaction.as_view(), name="hide-reaction"),
50 url(r'^reactions/afficher/(?P<pk>\d+)/$', ShowReaction.as_view(), name="show-reaction"),
51 url(r'^reactions/alerter/(?P<pk>\d+)/$', SendNoteAlert.as_view(), name="alert-reaction"),
52 url(r'^reactions/resoudre/$', SolveNoteAlert.as_view(), name="resolve-reaction"),
53
54 # follow:
55 url(r'^follow/(?P<pk>\d+)/$', FollowContent.as_view(), name="follow"),
56
57 # typo:
58 url(r'^reactions/typo/$', WarnTypo.as_view(), name="warn-typo"),
59
60 # create:
61 url(r'^nouveau-tutoriel/$',
62 CreateContent.as_view(created_content_type="TUTORIAL"), name='create-tutorial'),
63 url(r'^nouvel-article/$',
64 CreateContent.as_view(created_content_type="ARTICLE"), name='create-article'),
65 url(r'^nouveau-conteneur/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
66 CreateContainer.as_view(),
67 name='create-container'),
68 url(r'^nouveau-conteneur/(?P<pk>\d+)/(?P<slug>.+)/$',
69 CreateContainer.as_view(),
70 name='create-container'),
71
72
73 url(r'^nouvelle-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',
74 CreateExtract.as_view(),
75 name='create-extract'),
76 url(r'^nouvelle-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
77 CreateExtract.as_view(),
78 name='create-extract'),
79 url(r'^nouvelle-section/(?P<pk>\d+)/(?P<slug>.+)/$',
80 CreateExtract.as_view(),
81 name='create-extract'),
82
83 # edit:
84 url(r'^editer-conteneur/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'
85 r'(?P<container_slug>.+)/$',
86 EditContainer.as_view(),
87 name='edit-container'),
88 url(r'^editer-conteneur/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',
89 EditContainer.as_view(),
90 name='edit-container'),
91
92 url(r'^editer-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'
93 r'(?P<container_slug>.+)/(?P<extract_slug>.+)/$',
94 EditExtract.as_view(),
95 name='edit-extract'),
96 url(r'^editer-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<extract_slug>.+)/$',
97 EditExtract.as_view(),
98 name='edit-extract'),
99 url(r'^editer-section/(?P<pk>\d+)/(?P<slug>.+)/(?P<extract_slug>.+)/$',
100 EditExtract.as_view(),
101 name='edit-extract'),
102
103 url(r'^editer/(?P<pk>\d+)/(?P<slug>.+)/$', EditContent.as_view(), name='edit'),
104 url(r'^deplacer/$', MoveChild.as_view(), name='move-element'),
105
106 url(r'^historique/(?P<pk>\d+)/(?P<slug>.+)/$', DisplayHistory.as_view(), name="history"),
107 url(r'^comparaison/(?P<pk>\d+)/(?P<slug>.+)/$', DisplayDiff.as_view(), name="diff"),
108 url(r'^ajouter-auteur/(?P<pk>\d+)/$', AddAuthorToContent.as_view(), name="add-author"),
109 url(r'^enlever-auteur/(?P<pk>\d+)/$', RemoveAuthorFromContent.as_view(), name="remove-author"),
110 # beta:
111 url(r'^activer-beta/(?P<pk>\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='set'),
112 name="set-beta"),
113 url(r'^desactiver-beta/(?P<pk>\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='inactive'),
114 name="inactive-beta"),
115
116 # jsfiddle support:
117 url(r'activer-js/', ActivateJSFiddleInContent.as_view(), name="activate-jsfiddle"),
118
119 # delete:
120 url(r'^supprimer/(?P<pk>\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/'
121 r'(?P<object_slug>.+)/$',
122 DeleteContainerOrExtract.as_view(),
123 name='delete'),
124 url(r'^supprimer/(?P<pk>\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<object_slug>.+)/$',
125 DeleteContainerOrExtract.as_view(),
126 name='delete'),
127 url(r'^supprimer/(?P<pk>\d+)/(?P<slug>.+)/(?P<object_slug>.+)/$',
128 DeleteContainerOrExtract.as_view(),
129 name='delete'),
130
131 url(r'^supprimer/(?P<pk>\d+)/(?P<slug>.+)/$', DeleteContent.as_view(), name='delete'),
132
133 # markdown import
134 url(r'^importer/archive/nouveau/$', CreateContentFromArchive.as_view(), name="import-new"),
135 url(r'^importer/(?P<pk>\d+)/(?P<slug>.+)/$', UpdateContentWithArchive.as_view(), name="import"),
136
137 # tags
138 url(r'^tags/$', TagsListView.as_view(), name='tags'),
139
140 url(r'^$', ListOnlineContents.as_view(), name='list'),
141 ]
142
```
Path: `zds/tutorialv2/feeds.py`
Content:
```
1 # coding: utf-8
2
3 from django.contrib.syndication.views import Feed
4 from django.conf import settings
5
6 from django.utils.feedgenerator import Atom1Feed
7
8 from zds.tutorialv2.models.models_database import PublishedContent
9 from zds.settings import ZDS_APP
10
11
12 class LastContentFeedRSS(Feed):
13 """
14 RSS feed for any type of content.
15 """
16 title = u"Contenu sur {}".format(settings.ZDS_APP['site']['litteral_name'])
17 description = u"Les derniers contenus parus sur {}.".format(settings.ZDS_APP['site']['litteral_name'])
18 link = ""
19 content_type = None
20
21 def items(self):
22 """
23 :return: The last (typically 5) contents (sorted by publication date).
24 If `self.type` is not `None`, the contents will only be of this type.
25 """
26 contents = PublishedContent.objects\
27 .prefetch_related("content")\
28 .prefetch_related("content__authors")
29
30 if self.content_type is not None:
31 contents = contents.filter(content_type=self.content_type)
32
33 return contents.order_by('-publication_date')[:ZDS_APP['content']['feed_length']]
34
35 def item_title(self, item):
36 return item.content.title
37
38 def item_pubdate(self, item):
39 return item.publication_date
40
41 def item_description(self, item):
42 return item.content.description
43
44 def item_author_name(self, item):
45 authors_list = item.content.authors.all()
46 authors = []
47 for authors_obj in authors_list:
48 authors.append(authors_obj.username)
49 authors = ", ".join(authors)
50 return authors
51
52 def item_link(self, item):
53 return item.get_absolute_url_online()
54
55
56 class LastTutorialsFeedRSS(LastContentFeedRSS):
57 """
58 Redefinition of `LastContentFeedRSS` for tutorials only
59 """
60 content_type = "TUTORIAL"
61 link = "/tutoriels/"
62 title = u"Tutoriels sur {}".format(settings.ZDS_APP['site']['litteral_name'])
63 description = u"Les derniers tutoriels parus sur {}.".format(settings.ZDS_APP['site']['litteral_name'])
64
65
66 class LastTutorialsFeedATOM(LastTutorialsFeedRSS):
67 feed_type = Atom1Feed
68 subtitle = LastTutorialsFeedRSS.description
69
70
71 class LastArticlesFeedRSS(LastContentFeedRSS):
72 """
73 Redefinition of `LastContentFeedRSS` for articles only
74 """
75 content_type = "ARTICLE"
76 link = "/articles/"
77 title = u"Articles sur {}".format(settings.ZDS_APP['site']['litteral_name'])
78 description = u"Les derniers articles parus sur {}.".format(settings.ZDS_APP['site']['litteral_name'])
79
80
81 class LastArticlesFeedATOM(LastArticlesFeedRSS):
82 feed_type = Atom1Feed
83 subtitle = LastArticlesFeedRSS.description
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zds/tutorialv2/feeds.py b/zds/tutorialv2/feeds.py
--- a/zds/tutorialv2/feeds.py
+++ b/zds/tutorialv2/feeds.py
@@ -13,7 +13,7 @@
"""
RSS feed for any type of content.
"""
- title = u"Contenu sur {}".format(settings.ZDS_APP['site']['litteral_name'])
+ title = u"Contenus sur {}".format(settings.ZDS_APP['site']['litteral_name'])
description = u"Les derniers contenus parus sur {}.".format(settings.ZDS_APP['site']['litteral_name'])
link = ""
content_type = None
@@ -53,6 +53,11 @@
return item.get_absolute_url_online()
+class LastContentFeedATOM(LastContentFeedRSS):
+ feed_type = Atom1Feed
+ subtitle = LastContentFeedRSS.description
+
+
class LastTutorialsFeedRSS(LastContentFeedRSS):
"""
Redefinition of `LastContentFeedRSS` for tutorials only
diff --git a/zds/tutorialv2/urls/urls_contents.py b/zds/tutorialv2/urls/urls_contents.py
--- a/zds/tutorialv2/urls/urls_contents.py
+++ b/zds/tutorialv2/urls/urls_contents.py
@@ -11,7 +11,13 @@
from zds.tutorialv2.views.views_published import SendNoteFormView, UpdateNoteView, \
HideReaction, ShowReaction, SendNoteAlert, SolveNoteAlert, TagsListView, ListOnlineContents, FollowContent
+from zds.tutorialv2.feeds import LastContentFeedRSS, LastContentFeedATOM
+
urlpatterns = [
+ # Flux
+ url(r'^flux/rss/$', LastContentFeedRSS(), name='feed-rss'),
+ url(r'^flux/atom/$', LastContentFeedATOM(), name='feed-atom'),
+
url(r'^tutoriels/(?P<pk>\d+)/$',
ContentOfAuthor.as_view(type='TUTORIAL', context_object_name='tutorials'),
name="find-tutorial"),
|
{"golden_diff": "diff --git a/zds/tutorialv2/feeds.py b/zds/tutorialv2/feeds.py\n--- a/zds/tutorialv2/feeds.py\n+++ b/zds/tutorialv2/feeds.py\n@@ -13,7 +13,7 @@\n \"\"\"\n RSS feed for any type of content.\n \"\"\"\n- title = u\"Contenu sur {}\".format(settings.ZDS_APP['site']['litteral_name'])\n+ title = u\"Contenus sur {}\".format(settings.ZDS_APP['site']['litteral_name'])\n description = u\"Les derniers contenus parus sur {}.\".format(settings.ZDS_APP['site']['litteral_name'])\n link = \"\"\n content_type = None\n@@ -53,6 +53,11 @@\n return item.get_absolute_url_online()\n \n \n+class LastContentFeedATOM(LastContentFeedRSS):\n+ feed_type = Atom1Feed\n+ subtitle = LastContentFeedRSS.description\n+\n+\n class LastTutorialsFeedRSS(LastContentFeedRSS):\n \"\"\"\n Redefinition of `LastContentFeedRSS` for tutorials only\ndiff --git a/zds/tutorialv2/urls/urls_contents.py b/zds/tutorialv2/urls/urls_contents.py\n--- a/zds/tutorialv2/urls/urls_contents.py\n+++ b/zds/tutorialv2/urls/urls_contents.py\n@@ -11,7 +11,13 @@\n from zds.tutorialv2.views.views_published import SendNoteFormView, UpdateNoteView, \\\n HideReaction, ShowReaction, SendNoteAlert, SolveNoteAlert, TagsListView, ListOnlineContents, FollowContent\n \n+from zds.tutorialv2.feeds import LastContentFeedRSS, LastContentFeedATOM\n+\n urlpatterns = [\n+ # Flux\n+ url(r'^flux/rss/$', LastContentFeedRSS(), name='feed-rss'),\n+ url(r'^flux/atom/$', LastContentFeedATOM(), name='feed-atom'),\n+\n url(r'^tutoriels/(?P<pk>\\d+)/$',\n ContentOfAuthor.as_view(type='TUTORIAL', context_object_name='tutorials'),\n name=\"find-tutorial\"),\n", "issue": "M\u00e9lange de contexte entre les contenus, les tutos et les articles\nServeur : B\u00eata\nVersion : v18-RC3/ffa18f8\nSyst\u00e8me : Mac OS X El Capitain\nNavigateur : Safari Version 9.0.3 (11601.4.4)\n\nAttention, ce n'est pas simple \u00e0 comprendre et la port\u00e9e du bug est peut-\u00eatre plus important que le sc\u00e9nario d\u00e9crit dans cette issue mais j'ai tent\u00e9 de bien cerner le bug.\n\nSc\u00e9nario :\n- Rendez-vous dans le menu d\u00e9roulant des tutoriels.\n- Cliquez sur une cat\u00e9gorie ou un tag, constatez que vous \u00eates dans un contexte de tutoriel (cf. le fil d'ariane).\n- Sur cette page de cat\u00e9gorie et/ou de tag, constatez que vous pouvez utiliser les liens du fil d'ariane, les boutons du menu et que vous pouvez cliqu\u00e9 sur une cat\u00e9gorie dans le menu \"Cat\u00e9gories des tutoriels\" dans la sidebar. D'ailleurs, si vous cliquez sur l'une des cat\u00e9gories dans la sidebar, vous restez dans le contexte des tutoriels.\n- Maintenant retournez dans le menu d\u00e9roulant des tutoriels et cliquez sur \"Tous les tags\", puis rendez-vous dans l'une des cat\u00e9gories list\u00e9e.\n- Constatez que vous \u00eates dans un contexte \"Contenu\" et non plus tutoriel. Puis vous ne pouvez \n - Plus utiliser les liens dans le fil d'ariane.\n - Plus cr\u00e9er un contenu ni aider les auteurs.\n - Plus utiliser les flux.\n - Vous pouvez utilis\u00e9 les cat\u00e9gories de la sidebar mais vous restez dans le contexte des menus. 
Vous gardez donc les m\u00eames bugs.\n\nNote 1 : Tous ces bugs sont surement identiques avec les articles.\nNote 2 : Vous pouvez aussi retrouver le bug en cliquant sur les cat\u00e9gories et les tags depuis un contenu publi\u00e9.\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom django.conf.urls import url\n\nfrom zds.tutorialv2.views.views_contents import DisplayContent, CreateContent, EditContent, \\\n DeleteContent, CreateContainer, DisplayContainer, EditContainer, CreateExtract, EditExtract, \\\n DeleteContainerOrExtract, ManageBetaContent, DisplayHistory, DisplayDiff, ActivateJSFiddleInContent, MoveChild, \\\n DownloadContent, UpdateContentWithArchive, CreateContentFromArchive, ContentsWithHelps, AddAuthorToContent, \\\n RemoveAuthorFromContent, WarnTypo, DisplayBetaContent, DisplayBetaContainer, ContentOfAuthor\n\nfrom zds.tutorialv2.views.views_published import SendNoteFormView, UpdateNoteView, \\\n HideReaction, ShowReaction, SendNoteAlert, SolveNoteAlert, TagsListView, ListOnlineContents, FollowContent\n\nurlpatterns = [\n url(r'^tutoriels/(?P<pk>\\d+)/$',\n ContentOfAuthor.as_view(type='TUTORIAL', context_object_name='tutorials'),\n name=\"find-tutorial\"),\n url(r'^articles/(?P<pk>\\d+)/$',\n ContentOfAuthor.as_view(type='ARTICLE', context_object_name='articles'),\n name=\"find-article\"),\n\n url(r'^aides/$', ContentsWithHelps.as_view(), name='helps'),\n url(r'^(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n DisplayContainer.as_view(public_is_prioritary=False),\n name='view-container'),\n url(r'^(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n DisplayContainer.as_view(public_is_prioritary=False),\n name='view-container'),\n\n url(r'^(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayContent.as_view(public_is_prioritary=False),\n name='view'),\n\n url(r'^telecharger/(?P<pk>\\d+)/(?P<slug>.+)/$', DownloadContent.as_view(),\n name='download-zip'),\n\n # beta:\n url(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n DisplayBetaContainer.as_view(public_is_prioritary=False),\n name='beta-view-container'),\n url(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n DisplayBetaContainer.as_view(public_is_prioritary=False),\n name='beta-view-container'),\n\n url(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayBetaContent.as_view(), name='beta-view'),\n\n # reactions:\n url(r'^reactions/ajouter/$', SendNoteFormView.as_view(redirection_is_needed=False), name=\"add-reaction\"),\n url(r'^reactions/editer/$', UpdateNoteView.as_view(redirection_is_needed=False), name=\"update-reaction\"),\n url(r'^reactions/cacher/(?P<pk>\\d+)/$', HideReaction.as_view(), name=\"hide-reaction\"),\n url(r'^reactions/afficher/(?P<pk>\\d+)/$', ShowReaction.as_view(), name=\"show-reaction\"),\n url(r'^reactions/alerter/(?P<pk>\\d+)/$', SendNoteAlert.as_view(), name=\"alert-reaction\"),\n url(r'^reactions/resoudre/$', SolveNoteAlert.as_view(), name=\"resolve-reaction\"),\n\n # follow:\n url(r'^follow/(?P<pk>\\d+)/$', FollowContent.as_view(), name=\"follow\"),\n\n # typo:\n url(r'^reactions/typo/$', WarnTypo.as_view(), name=\"warn-typo\"),\n\n # create:\n url(r'^nouveau-tutoriel/$',\n CreateContent.as_view(created_content_type=\"TUTORIAL\"), name='create-tutorial'),\n url(r'^nouvel-article/$',\n CreateContent.as_view(created_content_type=\"ARTICLE\"), name='create-article'),\n url(r'^nouveau-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n CreateContainer.as_view(),\n name='create-container'),\n 
url(r'^nouveau-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/$',\n CreateContainer.as_view(),\n name='create-container'),\n\n\n url(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n url(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n url(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n\n # edit:\n url(r'^editer-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'\n r'(?P<container_slug>.+)/$',\n EditContainer.as_view(),\n name='edit-container'),\n url(r'^editer-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n EditContainer.as_view(),\n name='edit-container'),\n\n url(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'\n r'(?P<container_slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n url(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n url(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n\n url(r'^editer/(?P<pk>\\d+)/(?P<slug>.+)/$', EditContent.as_view(), name='edit'),\n url(r'^deplacer/$', MoveChild.as_view(), name='move-element'),\n\n url(r'^historique/(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayHistory.as_view(), name=\"history\"),\n url(r'^comparaison/(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayDiff.as_view(), name=\"diff\"),\n url(r'^ajouter-auteur/(?P<pk>\\d+)/$', AddAuthorToContent.as_view(), name=\"add-author\"),\n url(r'^enlever-auteur/(?P<pk>\\d+)/$', RemoveAuthorFromContent.as_view(), name=\"remove-author\"),\n # beta:\n url(r'^activer-beta/(?P<pk>\\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='set'),\n name=\"set-beta\"),\n url(r'^desactiver-beta/(?P<pk>\\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='inactive'),\n name=\"inactive-beta\"),\n\n # jsfiddle support:\n url(r'activer-js/', ActivateJSFiddleInContent.as_view(), name=\"activate-jsfiddle\"),\n\n # delete:\n url(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/'\n r'(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n url(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n url(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n\n url(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/$', DeleteContent.as_view(), name='delete'),\n\n # markdown import\n url(r'^importer/archive/nouveau/$', CreateContentFromArchive.as_view(), name=\"import-new\"),\n url(r'^importer/(?P<pk>\\d+)/(?P<slug>.+)/$', UpdateContentWithArchive.as_view(), name=\"import\"),\n\n # tags\n url(r'^tags/$', TagsListView.as_view(), name='tags'),\n\n url(r'^$', ListOnlineContents.as_view(), name='list'),\n]\n", "path": "zds/tutorialv2/urls/urls_contents.py"}, {"content": "# coding: utf-8\n\nfrom django.contrib.syndication.views import Feed\nfrom django.conf import settings\n\nfrom django.utils.feedgenerator import Atom1Feed\n\nfrom zds.tutorialv2.models.models_database import PublishedContent\nfrom zds.settings import ZDS_APP\n\n\nclass LastContentFeedRSS(Feed):\n \"\"\"\n RSS feed for any type of content.\n \"\"\"\n title = u\"Contenu sur 
{}\".format(settings.ZDS_APP['site']['litteral_name'])\n description = u\"Les derniers contenus parus sur {}.\".format(settings.ZDS_APP['site']['litteral_name'])\n link = \"\"\n content_type = None\n\n def items(self):\n \"\"\"\n :return: The last (typically 5) contents (sorted by publication date).\n If `self.type` is not `None`, the contents will only be of this type.\n \"\"\"\n contents = PublishedContent.objects\\\n .prefetch_related(\"content\")\\\n .prefetch_related(\"content__authors\")\n\n if self.content_type is not None:\n contents = contents.filter(content_type=self.content_type)\n\n return contents.order_by('-publication_date')[:ZDS_APP['content']['feed_length']]\n\n def item_title(self, item):\n return item.content.title\n\n def item_pubdate(self, item):\n return item.publication_date\n\n def item_description(self, item):\n return item.content.description\n\n def item_author_name(self, item):\n authors_list = item.content.authors.all()\n authors = []\n for authors_obj in authors_list:\n authors.append(authors_obj.username)\n authors = \", \".join(authors)\n return authors\n\n def item_link(self, item):\n return item.get_absolute_url_online()\n\n\nclass LastTutorialsFeedRSS(LastContentFeedRSS):\n \"\"\"\n Redefinition of `LastContentFeedRSS` for tutorials only\n \"\"\"\n content_type = \"TUTORIAL\"\n link = \"/tutoriels/\"\n title = u\"Tutoriels sur {}\".format(settings.ZDS_APP['site']['litteral_name'])\n description = u\"Les derniers tutoriels parus sur {}.\".format(settings.ZDS_APP['site']['litteral_name'])\n\n\nclass LastTutorialsFeedATOM(LastTutorialsFeedRSS):\n feed_type = Atom1Feed\n subtitle = LastTutorialsFeedRSS.description\n\n\nclass LastArticlesFeedRSS(LastContentFeedRSS):\n \"\"\"\n Redefinition of `LastContentFeedRSS` for articles only\n \"\"\"\n content_type = \"ARTICLE\"\n link = \"/articles/\"\n title = u\"Articles sur {}\".format(settings.ZDS_APP['site']['litteral_name'])\n description = u\"Les derniers articles parus sur {}.\".format(settings.ZDS_APP['site']['litteral_name'])\n\n\nclass LastArticlesFeedATOM(LastArticlesFeedRSS):\n feed_type = Atom1Feed\n subtitle = LastArticlesFeedRSS.description\n", "path": "zds/tutorialv2/feeds.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom django.conf.urls import url\n\nfrom zds.tutorialv2.views.views_contents import DisplayContent, CreateContent, EditContent, \\\n DeleteContent, CreateContainer, DisplayContainer, EditContainer, CreateExtract, EditExtract, \\\n DeleteContainerOrExtract, ManageBetaContent, DisplayHistory, DisplayDiff, ActivateJSFiddleInContent, MoveChild, \\\n DownloadContent, UpdateContentWithArchive, CreateContentFromArchive, ContentsWithHelps, AddAuthorToContent, \\\n RemoveAuthorFromContent, WarnTypo, DisplayBetaContent, DisplayBetaContainer, ContentOfAuthor\n\nfrom zds.tutorialv2.views.views_published import SendNoteFormView, UpdateNoteView, \\\n HideReaction, ShowReaction, SendNoteAlert, SolveNoteAlert, TagsListView, ListOnlineContents, FollowContent\n\nfrom zds.tutorialv2.feeds import LastContentFeedRSS, LastContentFeedATOM\n\nurlpatterns = [\n # Flux\n url(r'^flux/rss/$', LastContentFeedRSS(), name='feed-rss'),\n url(r'^flux/atom/$', LastContentFeedATOM(), name='feed-atom'),\n\n url(r'^tutoriels/(?P<pk>\\d+)/$',\n ContentOfAuthor.as_view(type='TUTORIAL', context_object_name='tutorials'),\n name=\"find-tutorial\"),\n url(r'^articles/(?P<pk>\\d+)/$',\n ContentOfAuthor.as_view(type='ARTICLE', context_object_name='articles'),\n name=\"find-article\"),\n\n url(r'^aides/$', 
ContentsWithHelps.as_view(), name='helps'),\n url(r'^(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n DisplayContainer.as_view(public_is_prioritary=False),\n name='view-container'),\n url(r'^(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n DisplayContainer.as_view(public_is_prioritary=False),\n name='view-container'),\n\n url(r'^(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayContent.as_view(public_is_prioritary=False),\n name='view'),\n\n url(r'^telecharger/(?P<pk>\\d+)/(?P<slug>.+)/$', DownloadContent.as_view(),\n name='download-zip'),\n\n # beta:\n url(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n DisplayBetaContainer.as_view(public_is_prioritary=False),\n name='beta-view-container'),\n url(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n DisplayBetaContainer.as_view(public_is_prioritary=False),\n name='beta-view-container'),\n\n url(r'^beta/(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayBetaContent.as_view(), name='beta-view'),\n\n # reactions:\n url(r'^reactions/ajouter/$', SendNoteFormView.as_view(redirection_is_needed=False), name=\"add-reaction\"),\n url(r'^reactions/editer/$', UpdateNoteView.as_view(redirection_is_needed=False), name=\"update-reaction\"),\n url(r'^reactions/cacher/(?P<pk>\\d+)/$', HideReaction.as_view(), name=\"hide-reaction\"),\n url(r'^reactions/afficher/(?P<pk>\\d+)/$', ShowReaction.as_view(), name=\"show-reaction\"),\n url(r'^reactions/alerter/(?P<pk>\\d+)/$', SendNoteAlert.as_view(), name=\"alert-reaction\"),\n url(r'^reactions/resoudre/$', SolveNoteAlert.as_view(), name=\"resolve-reaction\"),\n\n # follow:\n url(r'^follow/(?P<pk>\\d+)/$', FollowContent.as_view(), name=\"follow\"),\n\n # typo:\n url(r'^reactions/typo/$', WarnTypo.as_view(), name=\"warn-typo\"),\n\n # create:\n url(r'^nouveau-tutoriel/$',\n CreateContent.as_view(created_content_type=\"TUTORIAL\"), name='create-tutorial'),\n url(r'^nouvel-article/$',\n CreateContent.as_view(created_content_type=\"ARTICLE\"), name='create-article'),\n url(r'^nouveau-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n CreateContainer.as_view(),\n name='create-container'),\n url(r'^nouveau-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/$',\n CreateContainer.as_view(),\n name='create-container'),\n\n\n url(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n url(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n url(r'^nouvelle-section/(?P<pk>\\d+)/(?P<slug>.+)/$',\n CreateExtract.as_view(),\n name='create-extract'),\n\n # edit:\n url(r'^editer-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'\n r'(?P<container_slug>.+)/$',\n EditContainer.as_view(),\n name='edit-container'),\n url(r'^editer-conteneur/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/$',\n EditContainer.as_view(),\n name='edit-container'),\n\n url(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/'\n r'(?P<container_slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n url(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n url(r'^editer-section/(?P<pk>\\d+)/(?P<slug>.+)/(?P<extract_slug>.+)/$',\n EditExtract.as_view(),\n name='edit-extract'),\n\n url(r'^editer/(?P<pk>\\d+)/(?P<slug>.+)/$', EditContent.as_view(), 
name='edit'),\n url(r'^deplacer/$', MoveChild.as_view(), name='move-element'),\n\n url(r'^historique/(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayHistory.as_view(), name=\"history\"),\n url(r'^comparaison/(?P<pk>\\d+)/(?P<slug>.+)/$', DisplayDiff.as_view(), name=\"diff\"),\n url(r'^ajouter-auteur/(?P<pk>\\d+)/$', AddAuthorToContent.as_view(), name=\"add-author\"),\n url(r'^enlever-auteur/(?P<pk>\\d+)/$', RemoveAuthorFromContent.as_view(), name=\"remove-author\"),\n # beta:\n url(r'^activer-beta/(?P<pk>\\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='set'),\n name=\"set-beta\"),\n url(r'^desactiver-beta/(?P<pk>\\d+)/(?P<slug>.+)/$', ManageBetaContent.as_view(action='inactive'),\n name=\"inactive-beta\"),\n\n # jsfiddle support:\n url(r'activer-js/', ActivateJSFiddleInContent.as_view(), name=\"activate-jsfiddle\"),\n\n # delete:\n url(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<parent_container_slug>.+)/(?P<container_slug>.+)/'\n r'(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n url(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<container_slug>.+)/(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n url(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/(?P<object_slug>.+)/$',\n DeleteContainerOrExtract.as_view(),\n name='delete'),\n\n url(r'^supprimer/(?P<pk>\\d+)/(?P<slug>.+)/$', DeleteContent.as_view(), name='delete'),\n\n # markdown import\n url(r'^importer/archive/nouveau/$', CreateContentFromArchive.as_view(), name=\"import-new\"),\n url(r'^importer/(?P<pk>\\d+)/(?P<slug>.+)/$', UpdateContentWithArchive.as_view(), name=\"import\"),\n\n # tags\n url(r'^tags/$', TagsListView.as_view(), name='tags'),\n\n url(r'^$', ListOnlineContents.as_view(), name='list'),\n]\n", "path": "zds/tutorialv2/urls/urls_contents.py"}, {"content": "# coding: utf-8\n\nfrom django.contrib.syndication.views import Feed\nfrom django.conf import settings\n\nfrom django.utils.feedgenerator import Atom1Feed\n\nfrom zds.tutorialv2.models.models_database import PublishedContent\nfrom zds.settings import ZDS_APP\n\n\nclass LastContentFeedRSS(Feed):\n \"\"\"\n RSS feed for any type of content.\n \"\"\"\n title = u\"Contenus sur {}\".format(settings.ZDS_APP['site']['litteral_name'])\n description = u\"Les derniers contenus parus sur {}.\".format(settings.ZDS_APP['site']['litteral_name'])\n link = \"\"\n content_type = None\n\n def items(self):\n \"\"\"\n :return: The last (typically 5) contents (sorted by publication date).\n If `self.type` is not `None`, the contents will only be of this type.\n \"\"\"\n contents = PublishedContent.objects\\\n .prefetch_related(\"content\")\\\n .prefetch_related(\"content__authors\")\n\n if self.content_type is not None:\n contents = contents.filter(content_type=self.content_type)\n\n return contents.order_by('-publication_date')[:ZDS_APP['content']['feed_length']]\n\n def item_title(self, item):\n return item.content.title\n\n def item_pubdate(self, item):\n return item.publication_date\n\n def item_description(self, item):\n return item.content.description\n\n def item_author_name(self, item):\n authors_list = item.content.authors.all()\n authors = []\n for authors_obj in authors_list:\n authors.append(authors_obj.username)\n authors = \", \".join(authors)\n return authors\n\n def item_link(self, item):\n return item.get_absolute_url_online()\n\n\nclass LastContentFeedATOM(LastContentFeedRSS):\n feed_type = Atom1Feed\n subtitle = LastContentFeedRSS.description\n\n\nclass LastTutorialsFeedRSS(LastContentFeedRSS):\n \"\"\"\n Redefinition of 
`LastContentFeedRSS` for tutorials only\n \"\"\"\n content_type = \"TUTORIAL\"\n link = \"/tutoriels/\"\n title = u\"Tutoriels sur {}\".format(settings.ZDS_APP['site']['litteral_name'])\n description = u\"Les derniers tutoriels parus sur {}.\".format(settings.ZDS_APP['site']['litteral_name'])\n\n\nclass LastTutorialsFeedATOM(LastTutorialsFeedRSS):\n feed_type = Atom1Feed\n subtitle = LastTutorialsFeedRSS.description\n\n\nclass LastArticlesFeedRSS(LastContentFeedRSS):\n \"\"\"\n Redefinition of `LastContentFeedRSS` for articles only\n \"\"\"\n content_type = \"ARTICLE\"\n link = \"/articles/\"\n title = u\"Articles sur {}\".format(settings.ZDS_APP['site']['litteral_name'])\n description = u\"Les derniers articles parus sur {}.\".format(settings.ZDS_APP['site']['litteral_name'])\n\n\nclass LastArticlesFeedATOM(LastArticlesFeedRSS):\n feed_type = Atom1Feed\n subtitle = LastArticlesFeedRSS.description\n", "path": "zds/tutorialv2/feeds.py"}]}
| 3,765 | 465 |
gh_patches_debug_25197
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-4829
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Twilio SMS sending "U" instead of numerical MFA code
**Describe the bug**
When using the SMS authenticator stage with Twilio configured, users are being sent MFA text messages containing nothing other than the letter "U". I have confirmed in the Twilio console that the body of the message received from Authentik is indeed just the letter "U".
**To Reproduce**
Steps to reproduce the behavior:
1. Log in with a user that has an SMS device setup, or set up a new SMS device
2. See issue with received text message containing only the letter "U"
**Expected behavior**
Users should receive a text message with a numerical code for MFA.
**Version and Deployment (please complete the following information):**
- authentik version: 2023.2.2
- Deployment: docker-compose
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/stages/authenticator_sms/models.py`
Content:
```
1 """SMS Authenticator models"""
2 from hashlib import sha256
3 from typing import Optional
4
5 from django.contrib.auth import get_user_model
6 from django.db import models
7 from django.utils.translation import gettext_lazy as _
8 from django.views import View
9 from django_otp.models import SideChannelDevice
10 from requests.exceptions import RequestException
11 from rest_framework.exceptions import ValidationError
12 from rest_framework.serializers import BaseSerializer
13 from structlog.stdlib import get_logger
14 from twilio.base.exceptions import TwilioRestException
15 from twilio.rest import Client
16
17 from authentik.core.types import UserSettingSerializer
18 from authentik.events.models import Event, EventAction, NotificationWebhookMapping
19 from authentik.events.utils import sanitize_item
20 from authentik.flows.models import ConfigurableStage, Stage
21 from authentik.lib.models import SerializerModel
22 from authentik.lib.utils.errors import exception_to_string
23 from authentik.lib.utils.http import get_http_session
24
25 LOGGER = get_logger()
26
27
28 class SMSProviders(models.TextChoices):
29 """Supported SMS Providers"""
30
31 TWILIO = "twilio"
32 GENERIC = "generic"
33
34
35 class SMSAuthTypes(models.TextChoices):
36 """Supported SMS Auth Types"""
37
38 BASIC = "basic"
39 BEARER = "bearer"
40
41
42 class AuthenticatorSMSStage(ConfigurableStage, Stage):
43 """Use SMS-based TOTP instead of authenticator-based."""
44
45 provider = models.TextField(choices=SMSProviders.choices)
46
47 from_number = models.TextField()
48
49 account_sid = models.TextField()
50 auth = models.TextField()
51 auth_password = models.TextField(default="", blank=True)
52 auth_type = models.TextField(choices=SMSAuthTypes.choices, default=SMSAuthTypes.BASIC)
53
54 verify_only = models.BooleanField(
55 default=False,
56 help_text=_(
57 "When enabled, the Phone number is only used during enrollment to verify the "
58 "users authenticity. Only a hash of the phone number is saved to ensure it is "
59 "not re-used in the future."
60 ),
61 )
62
63 mapping = models.ForeignKey(
64 NotificationWebhookMapping,
65 null=True,
66 default=None,
67 on_delete=models.SET_NULL,
68 help_text=_("Optionally modify the payload being sent to custom providers."),
69 )
70
71 def send(self, token: str, device: "SMSDevice"):
72 """Send message via selected provider"""
73 if self.provider == SMSProviders.TWILIO:
74 return self.send_twilio(token, device)
75 if self.provider == SMSProviders.GENERIC:
76 return self.send_generic(token, device)
77 raise ValueError(f"invalid provider {self.provider}")
78
79 def get_message(self, token: str) -> str:
80 """Get SMS message"""
81 return _("Use this code to authenticate in authentik: %(token)s" % {"token": token})
82
83 def send_twilio(self, token: str, device: "SMSDevice"):
84 """send sms via twilio provider"""
85 client = Client(self.account_sid, self.auth)
86
87 try:
88 message = client.messages.create(
89 to=device.phone_number, from_=self.from_number, body=self.get_message(token)
90 )
91 LOGGER.debug("Sent SMS", to=device, message=message.sid)
92 except TwilioRestException as exc:
93 LOGGER.warning("Error sending token by Twilio SMS", exc=exc, msg=exc.msg)
94 raise ValidationError(exc.msg)
95
96 def send_generic(self, token: str, device: "SMSDevice"):
97 """Send SMS via outside API"""
98 payload = {
99 "From": self.from_number,
100 "To": device.phone_number,
101 "Body": token,
102 "Message": self.get_message(token),
103 }
104
105 if self.mapping:
106 payload = sanitize_item(
107 self.mapping.evaluate(
108 user=device.user,
109 request=None,
110 device=device,
111 token=token,
112 stage=self,
113 )
114 )
115
116 if self.auth_type == SMSAuthTypes.BEARER:
117 response = get_http_session().post(
118 f"{self.account_sid}",
119 json=payload,
120 headers={"Authorization": f"Bearer {self.auth}"},
121 )
122 elif self.auth_type == SMSAuthTypes.BASIC:
123 response = get_http_session().post(
124 f"{self.account_sid}",
125 json=payload,
126 auth=(self.auth, self.auth_password),
127 )
128 else:
129 raise ValueError(f"Invalid Auth type '{self.auth_type}'")
130
131 LOGGER.debug("Sent SMS", to=device.phone_number)
132 try:
133 response.raise_for_status()
134 except RequestException as exc:
135 LOGGER.warning(
136 "Error sending token by generic SMS",
137 exc=exc,
138 status=response.status_code,
139 body=response.text[:100],
140 )
141 Event.new(
142 EventAction.CONFIGURATION_ERROR,
143 message="Error sending SMS",
144 exc=exception_to_string(exc),
145 status_code=response.status_code,
146 body=response.text,
147 ).set_user(device.user).save()
148 if response.status_code >= 400:
149 raise ValidationError(response.text)
150 raise
151
152 @property
153 def serializer(self) -> type[BaseSerializer]:
154 from authentik.stages.authenticator_sms.api import AuthenticatorSMSStageSerializer
155
156 return AuthenticatorSMSStageSerializer
157
158 @property
159 def type(self) -> type[View]:
160 from authentik.stages.authenticator_sms.stage import AuthenticatorSMSStageView
161
162 return AuthenticatorSMSStageView
163
164 @property
165 def component(self) -> str:
166 return "ak-stage-authenticator-sms-form"
167
168 def ui_user_settings(self) -> Optional[UserSettingSerializer]:
169 return UserSettingSerializer(
170 data={
171 "title": str(self._meta.verbose_name),
172 "component": "ak-user-settings-authenticator-sms",
173 }
174 )
175
176 def __str__(self) -> str:
177 return f"SMS Authenticator Setup Stage {self.name}"
178
179 class Meta:
180 verbose_name = _("SMS Authenticator Setup Stage")
181 verbose_name_plural = _("SMS Authenticator Setup Stages")
182
183
184 def hash_phone_number(phone_number: str) -> str:
185 """Hash phone number with prefix"""
186 return "hash:" + sha256(phone_number.encode()).hexdigest()
187
188
189 class SMSDevice(SerializerModel, SideChannelDevice):
190 """SMS Device"""
191
192 user = models.ForeignKey(get_user_model(), on_delete=models.CASCADE)
193
194 # Connect to the stage to when validating access we know the API Credentials
195 stage = models.ForeignKey(AuthenticatorSMSStage, on_delete=models.CASCADE)
196
197 phone_number = models.TextField()
198
199 last_t = models.DateTimeField(auto_now=True)
200
201 def set_hashed_number(self):
202 """Set phone_number to hashed number"""
203 self.phone_number = hash_phone_number(self.phone_number)
204
205 @property
206 def is_hashed(self) -> bool:
207 """Check if the phone number is hashed"""
208 return self.phone_number.startswith("hash:")
209
210 @property
211 def serializer(self) -> type[BaseSerializer]:
212 from authentik.stages.authenticator_sms.api import SMSDeviceSerializer
213
214 return SMSDeviceSerializer
215
216 def verify_token(self, token):
217 valid = super().verify_token(token)
218 if valid:
219 self.save()
220 return valid
221
222 def __str__(self):
223 return str(self.name) or str(self.user)
224
225 class Meta:
226 verbose_name = _("SMS Device")
227 verbose_name_plural = _("SMS Devices")
228 unique_together = (("stage", "phone_number"),)
229
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/authentik/stages/authenticator_sms/models.py b/authentik/stages/authenticator_sms/models.py
--- a/authentik/stages/authenticator_sms/models.py
+++ b/authentik/stages/authenticator_sms/models.py
@@ -86,7 +86,7 @@
try:
message = client.messages.create(
- to=device.phone_number, from_=self.from_number, body=self.get_message(token)
+ to=device.phone_number, from_=self.from_number, body=str(self.get_message(token))
)
LOGGER.debug("Sent SMS", to=device, message=message.sid)
except TwilioRestException as exc:
@@ -115,13 +115,13 @@
if self.auth_type == SMSAuthTypes.BEARER:
response = get_http_session().post(
- f"{self.account_sid}",
+ self.account_sid,
json=payload,
headers={"Authorization": f"Bearer {self.auth}"},
)
elif self.auth_type == SMSAuthTypes.BASIC:
response = get_http_session().post(
- f"{self.account_sid}",
+ self.account_sid,
json=payload,
auth=(self.auth, self.auth_password),
)
|
{"golden_diff": "diff --git a/authentik/stages/authenticator_sms/models.py b/authentik/stages/authenticator_sms/models.py\n--- a/authentik/stages/authenticator_sms/models.py\n+++ b/authentik/stages/authenticator_sms/models.py\n@@ -86,7 +86,7 @@\n \n try:\n message = client.messages.create(\n- to=device.phone_number, from_=self.from_number, body=self.get_message(token)\n+ to=device.phone_number, from_=self.from_number, body=str(self.get_message(token))\n )\n LOGGER.debug(\"Sent SMS\", to=device, message=message.sid)\n except TwilioRestException as exc:\n@@ -115,13 +115,13 @@\n \n if self.auth_type == SMSAuthTypes.BEARER:\n response = get_http_session().post(\n- f\"{self.account_sid}\",\n+ self.account_sid,\n json=payload,\n headers={\"Authorization\": f\"Bearer {self.auth}\"},\n )\n elif self.auth_type == SMSAuthTypes.BASIC:\n response = get_http_session().post(\n- f\"{self.account_sid}\",\n+ self.account_sid,\n json=payload,\n auth=(self.auth, self.auth_password),\n )\n", "issue": "Twilio SMS sending \"U\" instead of numerical MFA code\n**Describe the bug**\r\nWhen using the SMS authenticator stage with Twilio configured, users are being sent MFA text messages containing nothing other than the letter \"U\". I have confirmed in the Twilio console that the body of the message received from Authentik is indeed just the letter \"U\".\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Log in with a user that has an SMS device setup, or set up a new SMS device\r\n2. See issue with received text message containing only the letter \"U\"\r\n\r\n**Expected behavior**\r\nUsers should receive a text message with a numerical code for MFA.\r\n\r\n**Version and Deployment (please complete the following information):**\r\n - authentik version: 2023.2.2\r\n - Deployment: docker-compose\r\n\n", "before_files": [{"content": "\"\"\"SMS Authenticator models\"\"\"\nfrom hashlib import sha256\nfrom typing import Optional\n\nfrom django.contrib.auth import get_user_model\nfrom django.db import models\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\nfrom django_otp.models import SideChannelDevice\nfrom requests.exceptions import RequestException\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.serializers import BaseSerializer\nfrom structlog.stdlib import get_logger\nfrom twilio.base.exceptions import TwilioRestException\nfrom twilio.rest import Client\n\nfrom authentik.core.types import UserSettingSerializer\nfrom authentik.events.models import Event, EventAction, NotificationWebhookMapping\nfrom authentik.events.utils import sanitize_item\nfrom authentik.flows.models import ConfigurableStage, Stage\nfrom authentik.lib.models import SerializerModel\nfrom authentik.lib.utils.errors import exception_to_string\nfrom authentik.lib.utils.http import get_http_session\n\nLOGGER = get_logger()\n\n\nclass SMSProviders(models.TextChoices):\n \"\"\"Supported SMS Providers\"\"\"\n\n TWILIO = \"twilio\"\n GENERIC = \"generic\"\n\n\nclass SMSAuthTypes(models.TextChoices):\n \"\"\"Supported SMS Auth Types\"\"\"\n\n BASIC = \"basic\"\n BEARER = \"bearer\"\n\n\nclass AuthenticatorSMSStage(ConfigurableStage, Stage):\n \"\"\"Use SMS-based TOTP instead of authenticator-based.\"\"\"\n\n provider = models.TextField(choices=SMSProviders.choices)\n\n from_number = models.TextField()\n\n account_sid = models.TextField()\n auth = models.TextField()\n auth_password = models.TextField(default=\"\", blank=True)\n auth_type = 
models.TextField(choices=SMSAuthTypes.choices, default=SMSAuthTypes.BASIC)\n\n verify_only = models.BooleanField(\n default=False,\n help_text=_(\n \"When enabled, the Phone number is only used during enrollment to verify the \"\n \"users authenticity. Only a hash of the phone number is saved to ensure it is \"\n \"not re-used in the future.\"\n ),\n )\n\n mapping = models.ForeignKey(\n NotificationWebhookMapping,\n null=True,\n default=None,\n on_delete=models.SET_NULL,\n help_text=_(\"Optionally modify the payload being sent to custom providers.\"),\n )\n\n def send(self, token: str, device: \"SMSDevice\"):\n \"\"\"Send message via selected provider\"\"\"\n if self.provider == SMSProviders.TWILIO:\n return self.send_twilio(token, device)\n if self.provider == SMSProviders.GENERIC:\n return self.send_generic(token, device)\n raise ValueError(f\"invalid provider {self.provider}\")\n\n def get_message(self, token: str) -> str:\n \"\"\"Get SMS message\"\"\"\n return _(\"Use this code to authenticate in authentik: %(token)s\" % {\"token\": token})\n\n def send_twilio(self, token: str, device: \"SMSDevice\"):\n \"\"\"send sms via twilio provider\"\"\"\n client = Client(self.account_sid, self.auth)\n\n try:\n message = client.messages.create(\n to=device.phone_number, from_=self.from_number, body=self.get_message(token)\n )\n LOGGER.debug(\"Sent SMS\", to=device, message=message.sid)\n except TwilioRestException as exc:\n LOGGER.warning(\"Error sending token by Twilio SMS\", exc=exc, msg=exc.msg)\n raise ValidationError(exc.msg)\n\n def send_generic(self, token: str, device: \"SMSDevice\"):\n \"\"\"Send SMS via outside API\"\"\"\n payload = {\n \"From\": self.from_number,\n \"To\": device.phone_number,\n \"Body\": token,\n \"Message\": self.get_message(token),\n }\n\n if self.mapping:\n payload = sanitize_item(\n self.mapping.evaluate(\n user=device.user,\n request=None,\n device=device,\n token=token,\n stage=self,\n )\n )\n\n if self.auth_type == SMSAuthTypes.BEARER:\n response = get_http_session().post(\n f\"{self.account_sid}\",\n json=payload,\n headers={\"Authorization\": f\"Bearer {self.auth}\"},\n )\n elif self.auth_type == SMSAuthTypes.BASIC:\n response = get_http_session().post(\n f\"{self.account_sid}\",\n json=payload,\n auth=(self.auth, self.auth_password),\n )\n else:\n raise ValueError(f\"Invalid Auth type '{self.auth_type}'\")\n\n LOGGER.debug(\"Sent SMS\", to=device.phone_number)\n try:\n response.raise_for_status()\n except RequestException as exc:\n LOGGER.warning(\n \"Error sending token by generic SMS\",\n exc=exc,\n status=response.status_code,\n body=response.text[:100],\n )\n Event.new(\n EventAction.CONFIGURATION_ERROR,\n message=\"Error sending SMS\",\n exc=exception_to_string(exc),\n status_code=response.status_code,\n body=response.text,\n ).set_user(device.user).save()\n if response.status_code >= 400:\n raise ValidationError(response.text)\n raise\n\n @property\n def serializer(self) -> type[BaseSerializer]:\n from authentik.stages.authenticator_sms.api import AuthenticatorSMSStageSerializer\n\n return AuthenticatorSMSStageSerializer\n\n @property\n def type(self) -> type[View]:\n from authentik.stages.authenticator_sms.stage import AuthenticatorSMSStageView\n\n return AuthenticatorSMSStageView\n\n @property\n def component(self) -> str:\n return \"ak-stage-authenticator-sms-form\"\n\n def ui_user_settings(self) -> Optional[UserSettingSerializer]:\n return UserSettingSerializer(\n data={\n \"title\": str(self._meta.verbose_name),\n \"component\": 
\"ak-user-settings-authenticator-sms\",\n }\n )\n\n def __str__(self) -> str:\n return f\"SMS Authenticator Setup Stage {self.name}\"\n\n class Meta:\n verbose_name = _(\"SMS Authenticator Setup Stage\")\n verbose_name_plural = _(\"SMS Authenticator Setup Stages\")\n\n\ndef hash_phone_number(phone_number: str) -> str:\n \"\"\"Hash phone number with prefix\"\"\"\n return \"hash:\" + sha256(phone_number.encode()).hexdigest()\n\n\nclass SMSDevice(SerializerModel, SideChannelDevice):\n \"\"\"SMS Device\"\"\"\n\n user = models.ForeignKey(get_user_model(), on_delete=models.CASCADE)\n\n # Connect to the stage to when validating access we know the API Credentials\n stage = models.ForeignKey(AuthenticatorSMSStage, on_delete=models.CASCADE)\n\n phone_number = models.TextField()\n\n last_t = models.DateTimeField(auto_now=True)\n\n def set_hashed_number(self):\n \"\"\"Set phone_number to hashed number\"\"\"\n self.phone_number = hash_phone_number(self.phone_number)\n\n @property\n def is_hashed(self) -> bool:\n \"\"\"Check if the phone number is hashed\"\"\"\n return self.phone_number.startswith(\"hash:\")\n\n @property\n def serializer(self) -> type[BaseSerializer]:\n from authentik.stages.authenticator_sms.api import SMSDeviceSerializer\n\n return SMSDeviceSerializer\n\n def verify_token(self, token):\n valid = super().verify_token(token)\n if valid:\n self.save()\n return valid\n\n def __str__(self):\n return str(self.name) or str(self.user)\n\n class Meta:\n verbose_name = _(\"SMS Device\")\n verbose_name_plural = _(\"SMS Devices\")\n unique_together = ((\"stage\", \"phone_number\"),)\n", "path": "authentik/stages/authenticator_sms/models.py"}], "after_files": [{"content": "\"\"\"SMS Authenticator models\"\"\"\nfrom hashlib import sha256\nfrom typing import Optional\n\nfrom django.contrib.auth import get_user_model\nfrom django.db import models\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\nfrom django_otp.models import SideChannelDevice\nfrom requests.exceptions import RequestException\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.serializers import BaseSerializer\nfrom structlog.stdlib import get_logger\nfrom twilio.base.exceptions import TwilioRestException\nfrom twilio.rest import Client\n\nfrom authentik.core.types import UserSettingSerializer\nfrom authentik.events.models import Event, EventAction, NotificationWebhookMapping\nfrom authentik.events.utils import sanitize_item\nfrom authentik.flows.models import ConfigurableStage, Stage\nfrom authentik.lib.models import SerializerModel\nfrom authentik.lib.utils.errors import exception_to_string\nfrom authentik.lib.utils.http import get_http_session\n\nLOGGER = get_logger()\n\n\nclass SMSProviders(models.TextChoices):\n \"\"\"Supported SMS Providers\"\"\"\n\n TWILIO = \"twilio\"\n GENERIC = \"generic\"\n\n\nclass SMSAuthTypes(models.TextChoices):\n \"\"\"Supported SMS Auth Types\"\"\"\n\n BASIC = \"basic\"\n BEARER = \"bearer\"\n\n\nclass AuthenticatorSMSStage(ConfigurableStage, Stage):\n \"\"\"Use SMS-based TOTP instead of authenticator-based.\"\"\"\n\n provider = models.TextField(choices=SMSProviders.choices)\n\n from_number = models.TextField()\n\n account_sid = models.TextField()\n auth = models.TextField()\n auth_password = models.TextField(default=\"\", blank=True)\n auth_type = models.TextField(choices=SMSAuthTypes.choices, default=SMSAuthTypes.BASIC)\n\n verify_only = models.BooleanField(\n default=False,\n help_text=_(\n \"When enabled, the Phone number is only used 
during enrollment to verify the \"\n \"users authenticity. Only a hash of the phone number is saved to ensure it is \"\n \"not re-used in the future.\"\n ),\n )\n\n mapping = models.ForeignKey(\n NotificationWebhookMapping,\n null=True,\n default=None,\n on_delete=models.SET_NULL,\n help_text=_(\"Optionally modify the payload being sent to custom providers.\"),\n )\n\n def send(self, token: str, device: \"SMSDevice\"):\n \"\"\"Send message via selected provider\"\"\"\n if self.provider == SMSProviders.TWILIO:\n return self.send_twilio(token, device)\n if self.provider == SMSProviders.GENERIC:\n return self.send_generic(token, device)\n raise ValueError(f\"invalid provider {self.provider}\")\n\n def get_message(self, token: str) -> str:\n \"\"\"Get SMS message\"\"\"\n return _(\"Use this code to authenticate in authentik: %(token)s\" % {\"token\": token})\n\n def send_twilio(self, token: str, device: \"SMSDevice\"):\n \"\"\"send sms via twilio provider\"\"\"\n client = Client(self.account_sid, self.auth)\n\n try:\n message = client.messages.create(\n to=device.phone_number, from_=self.from_number, body=str(self.get_message(token))\n )\n LOGGER.debug(\"Sent SMS\", to=device, message=message.sid)\n except TwilioRestException as exc:\n LOGGER.warning(\"Error sending token by Twilio SMS\", exc=exc, msg=exc.msg)\n raise ValidationError(exc.msg)\n\n def send_generic(self, token: str, device: \"SMSDevice\"):\n \"\"\"Send SMS via outside API\"\"\"\n payload = {\n \"From\": self.from_number,\n \"To\": device.phone_number,\n \"Body\": token,\n \"Message\": self.get_message(token),\n }\n\n if self.mapping:\n payload = sanitize_item(\n self.mapping.evaluate(\n user=device.user,\n request=None,\n device=device,\n token=token,\n stage=self,\n )\n )\n\n if self.auth_type == SMSAuthTypes.BEARER:\n response = get_http_session().post(\n self.account_sid,\n json=payload,\n headers={\"Authorization\": f\"Bearer {self.auth}\"},\n )\n elif self.auth_type == SMSAuthTypes.BASIC:\n response = get_http_session().post(\n self.account_sid,\n json=payload,\n auth=(self.auth, self.auth_password),\n )\n else:\n raise ValueError(f\"Invalid Auth type '{self.auth_type}'\")\n\n LOGGER.debug(\"Sent SMS\", to=device.phone_number)\n try:\n response.raise_for_status()\n except RequestException as exc:\n LOGGER.warning(\n \"Error sending token by generic SMS\",\n exc=exc,\n status=response.status_code,\n body=response.text[:100],\n )\n Event.new(\n EventAction.CONFIGURATION_ERROR,\n message=\"Error sending SMS\",\n exc=exception_to_string(exc),\n status_code=response.status_code,\n body=response.text,\n ).set_user(device.user).save()\n if response.status_code >= 400:\n raise ValidationError(response.text)\n raise\n\n @property\n def serializer(self) -> type[BaseSerializer]:\n from authentik.stages.authenticator_sms.api import AuthenticatorSMSStageSerializer\n\n return AuthenticatorSMSStageSerializer\n\n @property\n def type(self) -> type[View]:\n from authentik.stages.authenticator_sms.stage import AuthenticatorSMSStageView\n\n return AuthenticatorSMSStageView\n\n @property\n def component(self) -> str:\n return \"ak-stage-authenticator-sms-form\"\n\n def ui_user_settings(self) -> Optional[UserSettingSerializer]:\n return UserSettingSerializer(\n data={\n \"title\": str(self._meta.verbose_name),\n \"component\": \"ak-user-settings-authenticator-sms\",\n }\n )\n\n def __str__(self) -> str:\n return f\"SMS Authenticator Setup Stage {self.name}\"\n\n class Meta:\n verbose_name = _(\"SMS Authenticator Setup Stage\")\n 
verbose_name_plural = _(\"SMS Authenticator Setup Stages\")\n\n\ndef hash_phone_number(phone_number: str) -> str:\n \"\"\"Hash phone number with prefix\"\"\"\n return \"hash:\" + sha256(phone_number.encode()).hexdigest()\n\n\nclass SMSDevice(SerializerModel, SideChannelDevice):\n \"\"\"SMS Device\"\"\"\n\n user = models.ForeignKey(get_user_model(), on_delete=models.CASCADE)\n\n # Connect to the stage to when validating access we know the API Credentials\n stage = models.ForeignKey(AuthenticatorSMSStage, on_delete=models.CASCADE)\n\n phone_number = models.TextField()\n\n last_t = models.DateTimeField(auto_now=True)\n\n def set_hashed_number(self):\n \"\"\"Set phone_number to hashed number\"\"\"\n self.phone_number = hash_phone_number(self.phone_number)\n\n @property\n def is_hashed(self) -> bool:\n \"\"\"Check if the phone number is hashed\"\"\"\n return self.phone_number.startswith(\"hash:\")\n\n @property\n def serializer(self) -> type[BaseSerializer]:\n from authentik.stages.authenticator_sms.api import SMSDeviceSerializer\n\n return SMSDeviceSerializer\n\n def verify_token(self, token):\n valid = super().verify_token(token)\n if valid:\n self.save()\n return valid\n\n def __str__(self):\n return str(self.name) or str(self.user)\n\n class Meta:\n verbose_name = _(\"SMS Device\")\n verbose_name_plural = _(\"SMS Devices\")\n unique_together = ((\"stage\", \"phone_number\"),)\n", "path": "authentik/stages/authenticator_sms/models.py"}]}
| 2,599 | 264 |
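A short illustrative note on the row above: the golden diff only wraps the message body in `str()`, because `get_message()` returns a lazy translation proxy (Django's `gettext_lazy`) rather than a plain string, and the Twilio client serializes request parameters itself. The sketch below is not authentik's code — it uses a hand-written stand-in class instead of Django's proxy — but it shows, under that assumption, why forcing evaluation with `str()` before handing the body to a third-party client matters.

```python
# Illustrative sketch (not authentik code): coerce a lazy string proxy
# with str() before passing it to an SMS/HTTP client that serializes
# parameters itself.

class LazyMessage:
    """Stand-in for a lazy translation proxy (e.g. the object gettext_lazy returns)."""

    def __init__(self, template: str, token: str) -> None:
        self._template = template
        self._token = token

    def __str__(self) -> str:
        # Evaluation happens only when str() is called.
        return self._template % {"token": self._token}


def build_sms_body(token: str) -> str:
    lazy = LazyMessage("Use this code to authenticate: %(token)s", token)
    # A client library may not call str() on arbitrary objects while
    # building its request payload, so force evaluation here.
    return str(lazy)


if __name__ == "__main__":
    print(build_sms_body("123456"))  # Use this code to authenticate: 123456
```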
gh_patches_debug_30244
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-8738
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Running pytest locally fails due to no TESTING or API_KEY
### Repository commit
1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44
### Python version (python --version)
Python 3.11.3
### Dependencies version (pip freeze)
```
absl-py==1.4.0
astunparse==1.6.3
beautifulsoup4==4.12.2
cachetools==5.3.0
certifi==2023.5.7
cffi==1.15.1
cfgv==3.3.1
charset-normalizer==3.1.0
colorama==0.4.6
contourpy==1.0.7
cryptography==40.0.2
cycler==0.11.0
dill==0.3.6
distlib==0.3.6
fake-useragent==1.1.3
filelock==3.12.0
flatbuffers==23.5.9
fonttools==4.39.4
gast==0.4.0
google-auth==2.18.0
google-auth-oauthlib==1.0.0
google-pasta==0.2.0
grpcio==1.54.2
h5py==3.8.0
identify==2.5.24
idna==3.4
iniconfig==2.0.0
jax==0.4.10
joblib==1.2.0
keras==2.12.0
kiwisolver==1.4.4
libclang==16.0.0
lxml==4.9.2
Markdown==3.4.3
markdown-it-py==2.2.0
MarkupSafe==2.1.2
matplotlib==3.7.1
mdurl==0.1.2
ml-dtypes==0.1.0
mpmath==1.3.0
networkx==3.1
nodeenv==1.8.0
ntlm-auth==1.5.0
numpy==1.23.5
oauthlib==3.2.2
opencv-python==4.7.0.72
opt-einsum==3.3.0
packaging==23.1
pandas==2.0.1
patsy==0.5.3
pbr==5.11.1
Pillow==9.5.0
pip==22.3.1
platformdirs==3.5.1
pluggy==1.0.0
ply==3.11
pre-commit==3.3.1
projectq==0.8.0
protobuf==4.23.0
psutil==5.9.5
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycparser==2.21
Pygments==2.15.1
pyparsing==3.0.9
pytest==7.3.1
python-dateutil==2.8.2
pytz==2023.3
PyYAML==6.0
qiskit==0.43.0
qiskit-aer==0.12.0
qiskit-ibmq-provider==0.20.2
qiskit-terra==0.24.0
requests==2.30.0
requests-ntlm==1.1.0
requests-oauthlib==1.3.1
rich==13.3.5
rsa==4.9
ruff==0.0.267
rustworkx==0.12.1
scikit-fuzzy==0.4.2
scikit-learn==1.2.2
scipy==1.10.1
setuptools==65.5.0
six==1.16.0
soupsieve==2.4.1
statsmodels==0.14.0
stevedore==5.0.0
sympy==1.12
tensorboard==2.12.3
tensorboard-data-server==0.7.0
tensorflow==2.12.0
tensorflow-estimator==2.12.0
tensorflow-intel==2.12.0
tensorflow-io-gcs-filesystem==0.31.0
termcolor==2.3.0
texttable==1.6.7
threadpoolctl==3.1.0
tweepy==4.14.0
typing_extensions==4.5.0
tzdata==2023.3
urllib3==1.26.15
virtualenv==20.23.0
websocket-client==1.5.1
websockets==11.0.3
Werkzeug==2.3.4
wheel==0.40.0
wrapt==1.14.1
xgboost==1.7.5
yulewalker==0.1.1
```
### Expected behavior
Every test running successfully
### Actual behavior
```
ERROR web_programming/currency_converter.py - KeyError: "API key must be provided in the 'AMDOREN_API_KEY' environment variable."
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `web_programming/currency_converter.py`
Content:
```
1 """
2 This is used to convert the currency using the Amdoren Currency API
3 https://www.amdoren.com
4 """
5
6 import os
7
8 import requests
9
10 URL_BASE = "https://www.amdoren.com/api/currency.php"
11 TESTING = os.getenv("CI", "")
12 API_KEY = os.getenv("AMDOREN_API_KEY", "")
13
14 if not API_KEY and not TESTING:
15 raise KeyError(
16 "API key must be provided in the 'AMDOREN_API_KEY' environment variable."
17 )
18
19 # Currency and their description
20 list_of_currencies = """
21 AED United Arab Emirates Dirham
22 AFN Afghan Afghani
23 ALL Albanian Lek
24 AMD Armenian Dram
25 ANG Netherlands Antillean Guilder
26 AOA Angolan Kwanza
27 ARS Argentine Peso
28 AUD Australian Dollar
29 AWG Aruban Florin
30 AZN Azerbaijani Manat
31 BAM Bosnia & Herzegovina Convertible Mark
32 BBD Barbadian Dollar
33 BDT Bangladeshi Taka
34 BGN Bulgarian Lev
35 BHD Bahraini Dinar
36 BIF Burundian Franc
37 BMD Bermudian Dollar
38 BND Brunei Dollar
39 BOB Bolivian Boliviano
40 BRL Brazilian Real
41 BSD Bahamian Dollar
42 BTN Bhutanese Ngultrum
43 BWP Botswana Pula
44 BYN Belarus Ruble
45 BZD Belize Dollar
46 CAD Canadian Dollar
47 CDF Congolese Franc
48 CHF Swiss Franc
49 CLP Chilean Peso
50 CNY Chinese Yuan
51 COP Colombian Peso
52 CRC Costa Rican Colon
53 CUC Cuban Convertible Peso
54 CVE Cape Verdean Escudo
55 CZK Czech Republic Koruna
56 DJF Djiboutian Franc
57 DKK Danish Krone
58 DOP Dominican Peso
59 DZD Algerian Dinar
60 EGP Egyptian Pound
61 ERN Eritrean Nakfa
62 ETB Ethiopian Birr
63 EUR Euro
64 FJD Fiji Dollar
65 GBP British Pound Sterling
66 GEL Georgian Lari
67 GHS Ghanaian Cedi
68 GIP Gibraltar Pound
69 GMD Gambian Dalasi
70 GNF Guinea Franc
71 GTQ Guatemalan Quetzal
72 GYD Guyanaese Dollar
73 HKD Hong Kong Dollar
74 HNL Honduran Lempira
75 HRK Croatian Kuna
76 HTG Haiti Gourde
77 HUF Hungarian Forint
78 IDR Indonesian Rupiah
79 ILS Israeli Shekel
80 INR Indian Rupee
81 IQD Iraqi Dinar
82 IRR Iranian Rial
83 ISK Icelandic Krona
84 JMD Jamaican Dollar
85 JOD Jordanian Dinar
86 JPY Japanese Yen
87 KES Kenyan Shilling
88 KGS Kyrgystani Som
89 KHR Cambodian Riel
90 KMF Comorian Franc
91 KPW North Korean Won
92 KRW South Korean Won
93 KWD Kuwaiti Dinar
94 KYD Cayman Islands Dollar
95 KZT Kazakhstan Tenge
96 LAK Laotian Kip
97 LBP Lebanese Pound
98 LKR Sri Lankan Rupee
99 LRD Liberian Dollar
100 LSL Lesotho Loti
101 LYD Libyan Dinar
102 MAD Moroccan Dirham
103 MDL Moldovan Leu
104 MGA Malagasy Ariary
105 MKD Macedonian Denar
106 MMK Myanma Kyat
107 MNT Mongolian Tugrik
108 MOP Macau Pataca
109 MRO Mauritanian Ouguiya
110 MUR Mauritian Rupee
111 MVR Maldivian Rufiyaa
112 MWK Malawi Kwacha
113 MXN Mexican Peso
114 MYR Malaysian Ringgit
115 MZN Mozambican Metical
116 NAD Namibian Dollar
117 NGN Nigerian Naira
118 NIO Nicaragua Cordoba
119 NOK Norwegian Krone
120 NPR Nepalese Rupee
121 NZD New Zealand Dollar
122 OMR Omani Rial
123 PAB Panamanian Balboa
124 PEN Peruvian Nuevo Sol
125 PGK Papua New Guinean Kina
126 PHP Philippine Peso
127 PKR Pakistani Rupee
128 PLN Polish Zloty
129 PYG Paraguayan Guarani
130 QAR Qatari Riyal
131 RON Romanian Leu
132 RSD Serbian Dinar
133 RUB Russian Ruble
134 RWF Rwanda Franc
135 SAR Saudi Riyal
136 SBD Solomon Islands Dollar
137 SCR Seychellois Rupee
138 SDG Sudanese Pound
139 SEK Swedish Krona
140 SGD Singapore Dollar
141 SHP Saint Helena Pound
142 SLL Sierra Leonean Leone
143 SOS Somali Shilling
144 SRD Surinamese Dollar
145 SSP South Sudanese Pound
146 STD Sao Tome and Principe Dobra
147 SYP Syrian Pound
148 SZL Swazi Lilangeni
149 THB Thai Baht
150 TJS Tajikistan Somoni
151 TMT Turkmenistani Manat
152 TND Tunisian Dinar
153 TOP Tonga Paanga
154 TRY Turkish Lira
155 TTD Trinidad and Tobago Dollar
156 TWD New Taiwan Dollar
157 TZS Tanzanian Shilling
158 UAH Ukrainian Hryvnia
159 UGX Ugandan Shilling
160 USD United States Dollar
161 UYU Uruguayan Peso
162 UZS Uzbekistan Som
163 VEF Venezuelan Bolivar
164 VND Vietnamese Dong
165 VUV Vanuatu Vatu
166 WST Samoan Tala
167 XAF Central African CFA franc
168 XCD East Caribbean Dollar
169 XOF West African CFA franc
170 XPF CFP Franc
171 YER Yemeni Rial
172 ZAR South African Rand
173 ZMW Zambian Kwacha
174 """
175
176
177 def convert_currency(
178 from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = API_KEY
179 ) -> str:
180 """https://www.amdoren.com/currency-api/"""
181 params = locals()
182 params["from"] = params.pop("from_")
183 res = requests.get(URL_BASE, params=params).json()
184 return str(res["amount"]) if res["error"] == 0 else res["error_message"]
185
186
187 if __name__ == "__main__":
188 print(
189 convert_currency(
190 input("Enter from currency: ").strip(),
191 input("Enter to currency: ").strip(),
192 float(input("Enter the amount: ").strip()),
193 )
194 )
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/web_programming/currency_converter.py b/web_programming/currency_converter.py
--- a/web_programming/currency_converter.py
+++ b/web_programming/currency_converter.py
@@ -8,13 +8,7 @@
import requests
URL_BASE = "https://www.amdoren.com/api/currency.php"
-TESTING = os.getenv("CI", "")
-API_KEY = os.getenv("AMDOREN_API_KEY", "")
-if not API_KEY and not TESTING:
- raise KeyError(
- "API key must be provided in the 'AMDOREN_API_KEY' environment variable."
- )
# Currency and their description
list_of_currencies = """
@@ -175,20 +169,31 @@
def convert_currency(
- from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = API_KEY
+ from_: str = "USD", to: str = "INR", amount: float = 1.0, api_key: str = ""
) -> str:
"""https://www.amdoren.com/currency-api/"""
+ # Instead of manually generating parameters
params = locals()
+ # from is a reserved keyword
params["from"] = params.pop("from_")
res = requests.get(URL_BASE, params=params).json()
return str(res["amount"]) if res["error"] == 0 else res["error_message"]
if __name__ == "__main__":
+ TESTING = os.getenv("CI", "")
+ API_KEY = os.getenv("AMDOREN_API_KEY", "")
+
+ if not API_KEY and not TESTING:
+ raise KeyError(
+ "API key must be provided in the 'AMDOREN_API_KEY' environment variable."
+ )
+
print(
convert_currency(
input("Enter from currency: ").strip(),
input("Enter to currency: ").strip(),
float(input("Enter the amount: ").strip()),
+ API_KEY,
)
)
|
{"golden_diff": "diff --git a/web_programming/currency_converter.py b/web_programming/currency_converter.py\n--- a/web_programming/currency_converter.py\n+++ b/web_programming/currency_converter.py\n@@ -8,13 +8,7 @@\n import requests\n \n URL_BASE = \"https://www.amdoren.com/api/currency.php\"\n-TESTING = os.getenv(\"CI\", \"\")\n-API_KEY = os.getenv(\"AMDOREN_API_KEY\", \"\")\n \n-if not API_KEY and not TESTING:\n- raise KeyError(\n- \"API key must be provided in the 'AMDOREN_API_KEY' environment variable.\"\n- )\n \n # Currency and their description\n list_of_currencies = \"\"\"\n@@ -175,20 +169,31 @@\n \n \n def convert_currency(\n- from_: str = \"USD\", to: str = \"INR\", amount: float = 1.0, api_key: str = API_KEY\n+ from_: str = \"USD\", to: str = \"INR\", amount: float = 1.0, api_key: str = \"\"\n ) -> str:\n \"\"\"https://www.amdoren.com/currency-api/\"\"\"\n+ # Instead of manually generating parameters\n params = locals()\n+ # from is a reserved keyword\n params[\"from\"] = params.pop(\"from_\")\n res = requests.get(URL_BASE, params=params).json()\n return str(res[\"amount\"]) if res[\"error\"] == 0 else res[\"error_message\"]\n \n \n if __name__ == \"__main__\":\n+ TESTING = os.getenv(\"CI\", \"\")\n+ API_KEY = os.getenv(\"AMDOREN_API_KEY\", \"\")\n+\n+ if not API_KEY and not TESTING:\n+ raise KeyError(\n+ \"API key must be provided in the 'AMDOREN_API_KEY' environment variable.\"\n+ )\n+\n print(\n convert_currency(\n input(\"Enter from currency: \").strip(),\n input(\"Enter to currency: \").strip(),\n float(input(\"Enter the amount: \").strip()),\n+ API_KEY,\n )\n )\n", "issue": "Running pytest locally fails due to no TESTING or API_KEY\n### Repository commit\n\n1faf10b5c2dff8cef3f5d59f60a126bd19bb1c44\n\n### Python version (python --version)\n\nPython 3.11.3\n\n### Dependencies version (pip 
freeze)\n\n```\r\nabsl-py==1.4.0\r\nastunparse==1.6.3\r\nbeautifulsoup4==4.12.2\r\ncachetools==5.3.0\r\ncertifi==2023.5.7\r\ncffi==1.15.1\r\ncfgv==3.3.1\r\ncharset-normalizer==3.1.0\r\ncolorama==0.4.6\r\ncontourpy==1.0.7\r\ncryptography==40.0.2\r\ncycler==0.11.0\r\ndill==0.3.6\r\ndistlib==0.3.6\r\nfake-useragent==1.1.3\r\nfilelock==3.12.0\r\nflatbuffers==23.5.9\r\nfonttools==4.39.4\r\ngast==0.4.0\r\ngoogle-auth==2.18.0\r\ngoogle-auth-oauthlib==1.0.0\r\ngoogle-pasta==0.2.0\r\ngrpcio==1.54.2\r\nh5py==3.8.0\r\nidentify==2.5.24\r\nidna==3.4\r\niniconfig==2.0.0\r\njax==0.4.10\r\njoblib==1.2.0\r\nkeras==2.12.0\r\nkiwisolver==1.4.4\r\nlibclang==16.0.0\r\nlxml==4.9.2\r\nMarkdown==3.4.3\r\nmarkdown-it-py==2.2.0\r\nMarkupSafe==2.1.2\r\nmatplotlib==3.7.1\r\nmdurl==0.1.2\r\nml-dtypes==0.1.0\r\nmpmath==1.3.0\r\nnetworkx==3.1\r\nnodeenv==1.8.0\r\nntlm-auth==1.5.0\r\nnumpy==1.23.5\r\noauthlib==3.2.2\r\nopencv-python==4.7.0.72\r\nopt-einsum==3.3.0\r\npackaging==23.1\r\npandas==2.0.1\r\npatsy==0.5.3\r\npbr==5.11.1\r\nPillow==9.5.0\r\npip==22.3.1\r\nplatformdirs==3.5.1\r\npluggy==1.0.0\r\nply==3.11\r\npre-commit==3.3.1\r\nprojectq==0.8.0\r\nprotobuf==4.23.0\r\npsutil==5.9.5\r\npyasn1==0.5.0\r\npyasn1-modules==0.3.0\r\npycparser==2.21\r\nPygments==2.15.1\r\npyparsing==3.0.9\r\npytest==7.3.1\r\npython-dateutil==2.8.2\r\npytz==2023.3\r\nPyYAML==6.0\r\nqiskit==0.43.0\r\nqiskit-aer==0.12.0\r\nqiskit-ibmq-provider==0.20.2\r\nqiskit-terra==0.24.0\r\nrequests==2.30.0\r\nrequests-ntlm==1.1.0\r\nrequests-oauthlib==1.3.1\r\nrich==13.3.5\r\nrsa==4.9\r\nruff==0.0.267\r\nrustworkx==0.12.1\r\nscikit-fuzzy==0.4.2\r\nscikit-learn==1.2.2\r\nscipy==1.10.1\r\nsetuptools==65.5.0\r\nsix==1.16.0\r\nsoupsieve==2.4.1\r\nstatsmodels==0.14.0\r\nstevedore==5.0.0\r\nsympy==1.12\r\ntensorboard==2.12.3\r\ntensorboard-data-server==0.7.0\r\ntensorflow==2.12.0\r\ntensorflow-estimator==2.12.0\r\ntensorflow-intel==2.12.0\r\ntensorflow-io-gcs-filesystem==0.31.0\r\ntermcolor==2.3.0\r\ntexttable==1.6.7\r\nthreadpoolctl==3.1.0\r\ntweepy==4.14.0\r\ntyping_extensions==4.5.0\r\ntzdata==2023.3\r\nurllib3==1.26.15\r\nvirtualenv==20.23.0\r\nwebsocket-client==1.5.1\r\nwebsockets==11.0.3\r\nWerkzeug==2.3.4\r\nwheel==0.40.0\r\nwrapt==1.14.1\r\nxgboost==1.7.5\r\nyulewalker==0.1.1\r\n```\n\n### Expected behavior\n\nEvery test running successfully\n\n### Actual behavior\n\n```\r\nERROR web_programming/currency_converter.py - KeyError: \"API key must be provided in the 'AMDOREN_API_KEY' environment variable.\"\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nThis is used to convert the currency using the Amdoren Currency API\nhttps://www.amdoren.com\n\"\"\"\n\nimport os\n\nimport requests\n\nURL_BASE = \"https://www.amdoren.com/api/currency.php\"\nTESTING = os.getenv(\"CI\", \"\")\nAPI_KEY = os.getenv(\"AMDOREN_API_KEY\", \"\")\n\nif not API_KEY and not TESTING:\n raise KeyError(\n \"API key must be provided in the 'AMDOREN_API_KEY' environment variable.\"\n )\n\n# Currency and their description\nlist_of_currencies = \"\"\"\nAED\tUnited Arab Emirates Dirham\nAFN\tAfghan Afghani\nALL\tAlbanian Lek\nAMD\tArmenian Dram\nANG\tNetherlands Antillean Guilder\nAOA\tAngolan Kwanza\nARS\tArgentine Peso\nAUD\tAustralian Dollar\nAWG\tAruban Florin\nAZN\tAzerbaijani Manat\nBAM\tBosnia & Herzegovina Convertible Mark\nBBD\tBarbadian Dollar\nBDT\tBangladeshi Taka\nBGN\tBulgarian Lev\nBHD\tBahraini Dinar\nBIF\tBurundian Franc\nBMD\tBermudian Dollar\nBND\tBrunei Dollar\nBOB\tBolivian Boliviano\nBRL\tBrazilian Real\nBSD\tBahamian Dollar\nBTN\tBhutanese 
Ngultrum\nBWP\tBotswana Pula\nBYN\tBelarus Ruble\nBZD\tBelize Dollar\nCAD\tCanadian Dollar\nCDF\tCongolese Franc\nCHF\tSwiss Franc\nCLP\tChilean Peso\nCNY\tChinese Yuan\nCOP\tColombian Peso\nCRC\tCosta Rican Colon\nCUC\tCuban Convertible Peso\nCVE\tCape Verdean Escudo\nCZK\tCzech Republic Koruna\nDJF\tDjiboutian Franc\nDKK\tDanish Krone\nDOP\tDominican Peso\nDZD\tAlgerian Dinar\nEGP\tEgyptian Pound\nERN\tEritrean Nakfa\nETB\tEthiopian Birr\nEUR\tEuro\nFJD\tFiji Dollar\nGBP\tBritish Pound Sterling\nGEL\tGeorgian Lari\nGHS\tGhanaian Cedi\nGIP\tGibraltar Pound\nGMD\tGambian Dalasi\nGNF\tGuinea Franc\nGTQ\tGuatemalan Quetzal\nGYD\tGuyanaese Dollar\nHKD\tHong Kong Dollar\nHNL\tHonduran Lempira\nHRK\tCroatian Kuna\nHTG\tHaiti Gourde\nHUF\tHungarian Forint\nIDR\tIndonesian Rupiah\nILS\tIsraeli Shekel\nINR\tIndian Rupee\nIQD\tIraqi Dinar\nIRR\tIranian Rial\nISK\tIcelandic Krona\nJMD\tJamaican Dollar\nJOD\tJordanian Dinar\nJPY\tJapanese Yen\nKES\tKenyan Shilling\nKGS\tKyrgystani Som\nKHR\tCambodian Riel\nKMF\tComorian Franc\nKPW\tNorth Korean Won\nKRW\tSouth Korean Won\nKWD\tKuwaiti Dinar\nKYD\tCayman Islands Dollar\nKZT\tKazakhstan Tenge\nLAK\tLaotian Kip\nLBP\tLebanese Pound\nLKR\tSri Lankan Rupee\nLRD\tLiberian Dollar\nLSL\tLesotho Loti\nLYD\tLibyan Dinar\nMAD\tMoroccan Dirham\nMDL\tMoldovan Leu\nMGA\tMalagasy Ariary\nMKD\tMacedonian Denar\nMMK\tMyanma Kyat\nMNT\tMongolian Tugrik\nMOP\tMacau Pataca\nMRO\tMauritanian Ouguiya\nMUR\tMauritian Rupee\nMVR\tMaldivian Rufiyaa\nMWK\tMalawi Kwacha\nMXN\tMexican Peso\nMYR\tMalaysian Ringgit\nMZN\tMozambican Metical\nNAD\tNamibian Dollar\nNGN\tNigerian Naira\nNIO\tNicaragua Cordoba\nNOK\tNorwegian Krone\nNPR\tNepalese Rupee\nNZD\tNew Zealand Dollar\nOMR\tOmani Rial\nPAB\tPanamanian Balboa\nPEN\tPeruvian Nuevo Sol\nPGK\tPapua New Guinean Kina\nPHP\tPhilippine Peso\nPKR\tPakistani Rupee\nPLN\tPolish Zloty\nPYG\tParaguayan Guarani\nQAR\tQatari Riyal\nRON\tRomanian Leu\nRSD\tSerbian Dinar\nRUB\tRussian Ruble\nRWF\tRwanda Franc\nSAR\tSaudi Riyal\nSBD\tSolomon Islands Dollar\nSCR\tSeychellois Rupee\nSDG\tSudanese Pound\nSEK\tSwedish Krona\nSGD\tSingapore Dollar\nSHP\tSaint Helena Pound\nSLL\tSierra Leonean Leone\nSOS\tSomali Shilling\nSRD\tSurinamese Dollar\nSSP\tSouth Sudanese Pound\nSTD\tSao Tome and Principe Dobra\nSYP\tSyrian Pound\nSZL\tSwazi Lilangeni\nTHB\tThai Baht\nTJS\tTajikistan Somoni\nTMT\tTurkmenistani Manat\nTND\tTunisian Dinar\nTOP\tTonga Paanga\nTRY\tTurkish Lira\nTTD\tTrinidad and Tobago Dollar\nTWD\tNew Taiwan Dollar\nTZS\tTanzanian Shilling\nUAH\tUkrainian Hryvnia\nUGX\tUgandan Shilling\nUSD\tUnited States Dollar\nUYU\tUruguayan Peso\nUZS\tUzbekistan Som\nVEF\tVenezuelan Bolivar\nVND\tVietnamese Dong\nVUV\tVanuatu Vatu\nWST\tSamoan Tala\nXAF\tCentral African CFA franc\nXCD\tEast Caribbean Dollar\nXOF\tWest African CFA franc\nXPF\tCFP Franc\nYER\tYemeni Rial\nZAR\tSouth African Rand\nZMW\tZambian Kwacha\n\"\"\"\n\n\ndef convert_currency(\n from_: str = \"USD\", to: str = \"INR\", amount: float = 1.0, api_key: str = API_KEY\n) -> str:\n \"\"\"https://www.amdoren.com/currency-api/\"\"\"\n params = locals()\n params[\"from\"] = params.pop(\"from_\")\n res = requests.get(URL_BASE, params=params).json()\n return str(res[\"amount\"]) if res[\"error\"] == 0 else res[\"error_message\"]\n\n\nif __name__ == \"__main__\":\n print(\n convert_currency(\n input(\"Enter from currency: \").strip(),\n input(\"Enter to currency: \").strip(),\n float(input(\"Enter the amount: \").strip()),\n )\n )\n", "path": "web_programming/currency_converter.py"}], 
"after_files": [{"content": "\"\"\"\nThis is used to convert the currency using the Amdoren Currency API\nhttps://www.amdoren.com\n\"\"\"\n\nimport os\n\nimport requests\n\nURL_BASE = \"https://www.amdoren.com/api/currency.php\"\n\n\n# Currency and their description\nlist_of_currencies = \"\"\"\nAED\tUnited Arab Emirates Dirham\nAFN\tAfghan Afghani\nALL\tAlbanian Lek\nAMD\tArmenian Dram\nANG\tNetherlands Antillean Guilder\nAOA\tAngolan Kwanza\nARS\tArgentine Peso\nAUD\tAustralian Dollar\nAWG\tAruban Florin\nAZN\tAzerbaijani Manat\nBAM\tBosnia & Herzegovina Convertible Mark\nBBD\tBarbadian Dollar\nBDT\tBangladeshi Taka\nBGN\tBulgarian Lev\nBHD\tBahraini Dinar\nBIF\tBurundian Franc\nBMD\tBermudian Dollar\nBND\tBrunei Dollar\nBOB\tBolivian Boliviano\nBRL\tBrazilian Real\nBSD\tBahamian Dollar\nBTN\tBhutanese Ngultrum\nBWP\tBotswana Pula\nBYN\tBelarus Ruble\nBZD\tBelize Dollar\nCAD\tCanadian Dollar\nCDF\tCongolese Franc\nCHF\tSwiss Franc\nCLP\tChilean Peso\nCNY\tChinese Yuan\nCOP\tColombian Peso\nCRC\tCosta Rican Colon\nCUC\tCuban Convertible Peso\nCVE\tCape Verdean Escudo\nCZK\tCzech Republic Koruna\nDJF\tDjiboutian Franc\nDKK\tDanish Krone\nDOP\tDominican Peso\nDZD\tAlgerian Dinar\nEGP\tEgyptian Pound\nERN\tEritrean Nakfa\nETB\tEthiopian Birr\nEUR\tEuro\nFJD\tFiji Dollar\nGBP\tBritish Pound Sterling\nGEL\tGeorgian Lari\nGHS\tGhanaian Cedi\nGIP\tGibraltar Pound\nGMD\tGambian Dalasi\nGNF\tGuinea Franc\nGTQ\tGuatemalan Quetzal\nGYD\tGuyanaese Dollar\nHKD\tHong Kong Dollar\nHNL\tHonduran Lempira\nHRK\tCroatian Kuna\nHTG\tHaiti Gourde\nHUF\tHungarian Forint\nIDR\tIndonesian Rupiah\nILS\tIsraeli Shekel\nINR\tIndian Rupee\nIQD\tIraqi Dinar\nIRR\tIranian Rial\nISK\tIcelandic Krona\nJMD\tJamaican Dollar\nJOD\tJordanian Dinar\nJPY\tJapanese Yen\nKES\tKenyan Shilling\nKGS\tKyrgystani Som\nKHR\tCambodian Riel\nKMF\tComorian Franc\nKPW\tNorth Korean Won\nKRW\tSouth Korean Won\nKWD\tKuwaiti Dinar\nKYD\tCayman Islands Dollar\nKZT\tKazakhstan Tenge\nLAK\tLaotian Kip\nLBP\tLebanese Pound\nLKR\tSri Lankan Rupee\nLRD\tLiberian Dollar\nLSL\tLesotho Loti\nLYD\tLibyan Dinar\nMAD\tMoroccan Dirham\nMDL\tMoldovan Leu\nMGA\tMalagasy Ariary\nMKD\tMacedonian Denar\nMMK\tMyanma Kyat\nMNT\tMongolian Tugrik\nMOP\tMacau Pataca\nMRO\tMauritanian Ouguiya\nMUR\tMauritian Rupee\nMVR\tMaldivian Rufiyaa\nMWK\tMalawi Kwacha\nMXN\tMexican Peso\nMYR\tMalaysian Ringgit\nMZN\tMozambican Metical\nNAD\tNamibian Dollar\nNGN\tNigerian Naira\nNIO\tNicaragua Cordoba\nNOK\tNorwegian Krone\nNPR\tNepalese Rupee\nNZD\tNew Zealand Dollar\nOMR\tOmani Rial\nPAB\tPanamanian Balboa\nPEN\tPeruvian Nuevo Sol\nPGK\tPapua New Guinean Kina\nPHP\tPhilippine Peso\nPKR\tPakistani Rupee\nPLN\tPolish Zloty\nPYG\tParaguayan Guarani\nQAR\tQatari Riyal\nRON\tRomanian Leu\nRSD\tSerbian Dinar\nRUB\tRussian Ruble\nRWF\tRwanda Franc\nSAR\tSaudi Riyal\nSBD\tSolomon Islands Dollar\nSCR\tSeychellois Rupee\nSDG\tSudanese Pound\nSEK\tSwedish Krona\nSGD\tSingapore Dollar\nSHP\tSaint Helena Pound\nSLL\tSierra Leonean Leone\nSOS\tSomali Shilling\nSRD\tSurinamese Dollar\nSSP\tSouth Sudanese Pound\nSTD\tSao Tome and Principe Dobra\nSYP\tSyrian Pound\nSZL\tSwazi Lilangeni\nTHB\tThai Baht\nTJS\tTajikistan Somoni\nTMT\tTurkmenistani Manat\nTND\tTunisian Dinar\nTOP\tTonga Paanga\nTRY\tTurkish Lira\nTTD\tTrinidad and Tobago Dollar\nTWD\tNew Taiwan Dollar\nTZS\tTanzanian Shilling\nUAH\tUkrainian Hryvnia\nUGX\tUgandan Shilling\nUSD\tUnited States Dollar\nUYU\tUruguayan Peso\nUZS\tUzbekistan Som\nVEF\tVenezuelan Bolivar\nVND\tVietnamese Dong\nVUV\tVanuatu Vatu\nWST\tSamoan 
Tala\nXAF\tCentral African CFA franc\nXCD\tEast Caribbean Dollar\nXOF\tWest African CFA franc\nXPF\tCFP Franc\nYER\tYemeni Rial\nZAR\tSouth African Rand\nZMW\tZambian Kwacha\n\"\"\"\n\n\ndef convert_currency(\n from_: str = \"USD\", to: str = \"INR\", amount: float = 1.0, api_key: str = \"\"\n) -> str:\n \"\"\"https://www.amdoren.com/currency-api/\"\"\"\n # Instead of manually generating parameters\n params = locals()\n # from is a reserved keyword\n params[\"from\"] = params.pop(\"from_\")\n res = requests.get(URL_BASE, params=params).json()\n return str(res[\"amount\"]) if res[\"error\"] == 0 else res[\"error_message\"]\n\n\nif __name__ == \"__main__\":\n TESTING = os.getenv(\"CI\", \"\")\n API_KEY = os.getenv(\"AMDOREN_API_KEY\", \"\")\n\n if not API_KEY and not TESTING:\n raise KeyError(\n \"API key must be provided in the 'AMDOREN_API_KEY' environment variable.\"\n )\n\n print(\n convert_currency(\n input(\"Enter from currency: \").strip(),\n input(\"Enter to currency: \").strip(),\n float(input(\"Enter the amount: \").strip()),\n API_KEY,\n )\n )\n", "path": "web_programming/currency_converter.py"}]}
| 3,466 | 438 |
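The fix in the row above follows a common pattern: the module raised at import time when `AMDOREN_API_KEY` was unset, which broke pytest collection, so the diff defers the environment check to the `__main__` block and threads the key through as a parameter. The sketch below restates that pattern in isolation; `fetch_rate` is a hypothetical stand-in for the real helper, which calls `requests.get` against the Amdoren API.

```python
# Minimal sketch of the pattern in the diff above: validate environment
# variables at run time, not import time, so importing (or collecting
# tests from) the module never raises.
import os


def fetch_rate(from_: str, to: str, api_key: str = "") -> str:
    """Hypothetical request helper; the real module performs an HTTP call."""
    return f"would query rates {from_}->{to} using key {api_key!r}"


if __name__ == "__main__":
    API_KEY = os.getenv("AMDOREN_API_KEY", "")
    if not API_KEY and not os.getenv("CI", ""):
        raise KeyError(
            "API key must be provided in the 'AMDOREN_API_KEY' environment variable."
        )
    print(fetch_rate("USD", "INR", API_KEY))
```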
gh_patches_debug_19061
|
rasdani/github-patches
|
git_diff
|
ycm-core__ycmd-623
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Processing .ycm_extra_conf.py creates __pycache__ directory
I'm not sure if this is the intended behaviour. When YCM reads configuration creates a compiled version in `__pycache__`. I know that this behaviour can be disabled passing to `python` the `-B` argument or setting `PYTHONDONTWRITEBYTECODE=1` environmental variable. I don't want to disable global bytecode generation but I want to disable for `.ycm_extra_conf.py` because I feel it pollutes my project directory.
Is there a easy/reliable way to disable it in the YCM config?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ycmd/extra_conf_store.py`
Content:
```
1 # Copyright (C) 2011, 2012 Google Inc.
2 #
3 # This file is part of ycmd.
4 #
5 # ycmd is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # ycmd is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with ycmd. If not, see <http://www.gnu.org/licenses/>.
17
18 # NOTE: This module is used as a Singleton
19
20 from __future__ import unicode_literals
21 from __future__ import print_function
22 from __future__ import division
23 from __future__ import absolute_import
24 from future import standard_library
25 standard_library.install_aliases()
26 from builtins import * # noqa
27
28 import os
29 import random
30 import string
31 import sys
32 import logging
33 from threading import Lock
34 from ycmd import user_options_store
35 from ycmd.responses import UnknownExtraConf, YCM_EXTRA_CONF_FILENAME
36 from ycmd.utils import LoadPythonSource, PathsToAllParentFolders
37 from fnmatch import fnmatch
38
39
40 # Singleton variables
41 _module_for_module_file = {}
42 _module_for_module_file_lock = Lock()
43 _module_file_for_source_file = {}
44 _module_file_for_source_file_lock = Lock()
45
46
47 def Reset():
48 global _module_for_module_file, _module_file_for_source_file
49 _module_for_module_file = {}
50 _module_file_for_source_file = {}
51
52
53 def ModuleForSourceFile( filename ):
54 return Load( ModuleFileForSourceFile( filename ) )
55
56
57 def ModuleFileForSourceFile( filename ):
58 """This will try all files returned by _ExtraConfModuleSourceFilesForFile in
59 order and return the filename of the first module that was allowed to load.
60 If no module was found or allowed to load, None is returned."""
61
62 with _module_file_for_source_file_lock:
63 if filename not in _module_file_for_source_file:
64 for module_file in _ExtraConfModuleSourceFilesForFile( filename ):
65 if Load( module_file ):
66 _module_file_for_source_file[ filename ] = module_file
67 break
68
69 return _module_file_for_source_file.setdefault( filename )
70
71
72 def CallGlobalExtraConfYcmCorePreloadIfExists():
73 _CallGlobalExtraConfMethod( 'YcmCorePreload' )
74
75
76 def Shutdown():
77 # VimClose is for the sake of backwards compatibility; it's a no-op when it
78 # doesn't exist.
79 _CallGlobalExtraConfMethod( 'VimClose' )
80 _CallGlobalExtraConfMethod( 'Shutdown' )
81
82
83 def _CallGlobalExtraConfMethod( function_name ):
84 logger = _Logger()
85 global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()
86 if not ( global_ycm_extra_conf and
87 os.path.exists( global_ycm_extra_conf ) ):
88 logger.debug( 'No global extra conf, not calling method ' + function_name )
89 return
90
91 module = Load( global_ycm_extra_conf, force = True )
92 if not module or not hasattr( module, function_name ):
93 logger.debug( 'Global extra conf not loaded or no function ' +
94 function_name )
95 return
96
97 logger.info( 'Calling global extra conf method {0} on conf file {1}'.format(
98 function_name, global_ycm_extra_conf ) )
99 getattr( module, function_name )()
100
101
102 def Disable( module_file ):
103 """Disables the loading of a module for the current session."""
104 with _module_for_module_file_lock:
105 _module_for_module_file[ module_file ] = None
106
107
108 def _ShouldLoad( module_file ):
109 """Checks if a module is safe to be loaded. By default this will try to
110 decide using a white-/blacklist and ask the user for confirmation as a
111 fallback."""
112
113 if ( module_file == _GlobalYcmExtraConfFileLocation() or
114 not user_options_store.Value( 'confirm_extra_conf' ) ):
115 return True
116
117 globlist = user_options_store.Value( 'extra_conf_globlist' )
118 for glob in globlist:
119 is_blacklisted = glob[0] == '!'
120 if _MatchesGlobPattern( module_file, glob.lstrip('!') ):
121 return not is_blacklisted
122
123 raise UnknownExtraConf( module_file )
124
125
126 def Load( module_file, force = False ):
127 """Load and return the module contained in a file.
128 Using force = True the module will be loaded regardless
129 of the criteria in _ShouldLoad.
130 This will return None if the module was not allowed to be loaded."""
131
132 if not module_file:
133 return None
134
135 if not force:
136 with _module_for_module_file_lock:
137 if module_file in _module_for_module_file:
138 return _module_for_module_file[ module_file ]
139
140 if not _ShouldLoad( module_file ):
141 Disable( module_file )
142 return None
143
144 # This has to be here because a long time ago, the ycm_extra_conf.py files
145 # used to import clang_helpers.py from the cpp folder. This is not needed
146 # anymore, but there are a lot of old ycm_extra_conf.py files that we don't
147 # want to break.
148 sys.path.insert( 0, _PathToCppCompleterFolder() )
149 module = LoadPythonSource( _RandomName(), module_file )
150 del sys.path[ 0 ]
151
152 with _module_for_module_file_lock:
153 _module_for_module_file[ module_file ] = module
154 return module
155
156
157 def _MatchesGlobPattern( filename, glob ):
158 """Returns true if a filename matches a given pattern. A '~' in glob will be
159 expanded to the home directory and checking will be performed using absolute
160 paths. See the documentation of fnmatch for the supported patterns."""
161
162 abspath = os.path.abspath( filename )
163 return fnmatch( abspath, os.path.abspath( os.path.expanduser( glob ) ) )
164
165
166 def _ExtraConfModuleSourceFilesForFile( filename ):
167 """For a given filename, search all parent folders for YCM_EXTRA_CONF_FILENAME
168 files that will compute the flags necessary to compile the file.
169 If _GlobalYcmExtraConfFileLocation() exists it is returned as a fallback."""
170
171 for folder in PathsToAllParentFolders( filename ):
172 candidate = os.path.join( folder, YCM_EXTRA_CONF_FILENAME )
173 if os.path.exists( candidate ):
174 yield candidate
175 global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()
176 if ( global_ycm_extra_conf
177 and os.path.exists( global_ycm_extra_conf ) ):
178 yield global_ycm_extra_conf
179
180
181 def _PathToCppCompleterFolder():
182 """Returns the path to the 'cpp' completer folder. This is necessary
183 because ycm_extra_conf files need it on the path."""
184 return os.path.join( _DirectoryOfThisScript(), 'completers', 'cpp' )
185
186
187 def _DirectoryOfThisScript():
188 return os.path.dirname( os.path.abspath( __file__ ) )
189
190
191 def _RandomName():
192 """Generates a random module name."""
193 return ''.join( random.choice( string.ascii_lowercase ) for x in range( 15 ) )
194
195
196 def _GlobalYcmExtraConfFileLocation():
197 return os.path.expanduser(
198 user_options_store.Value( 'global_ycm_extra_conf' ) )
199
200
201 def _Logger():
202 return logging.getLogger( __name__ )
203
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ycmd/extra_conf_store.py b/ycmd/extra_conf_store.py
--- a/ycmd/extra_conf_store.py
+++ b/ycmd/extra_conf_store.py
@@ -146,7 +146,21 @@
# anymore, but there are a lot of old ycm_extra_conf.py files that we don't
# want to break.
sys.path.insert( 0, _PathToCppCompleterFolder() )
- module = LoadPythonSource( _RandomName(), module_file )
+
+ # By default, the Python interpreter compiles source files into bytecode to
+ # load them faster next time they are run. These *.pyc files are generated
+ # along the source files prior to Python 3.2 or in a __pycache__ folder for
+ # newer versions. We disable the generation of these files when loading
+ # ycm_extra_conf.py files as users do not want them inside their projects.
+ # The drawback is negligible since ycm_extra_conf.py files are generally small
+ # files thus really fast to compile and only loaded once by editing session.
+ old_dont_write_bytecode = sys.dont_write_bytecode
+ sys.dont_write_bytecode = True
+ try:
+ module = LoadPythonSource( _RandomName(), module_file )
+ finally:
+ sys.dont_write_bytecode = old_dont_write_bytecode
+
del sys.path[ 0 ]
with _module_for_module_file_lock:
|
{"golden_diff": "diff --git a/ycmd/extra_conf_store.py b/ycmd/extra_conf_store.py\n--- a/ycmd/extra_conf_store.py\n+++ b/ycmd/extra_conf_store.py\n@@ -146,7 +146,21 @@\n # anymore, but there are a lot of old ycm_extra_conf.py files that we don't\n # want to break.\n sys.path.insert( 0, _PathToCppCompleterFolder() )\n- module = LoadPythonSource( _RandomName(), module_file )\n+\n+ # By default, the Python interpreter compiles source files into bytecode to\n+ # load them faster next time they are run. These *.pyc files are generated\n+ # along the source files prior to Python 3.2 or in a __pycache__ folder for\n+ # newer versions. We disable the generation of these files when loading\n+ # ycm_extra_conf.py files as users do not want them inside their projects.\n+ # The drawback is negligible since ycm_extra_conf.py files are generally small\n+ # files thus really fast to compile and only loaded once by editing session.\n+ old_dont_write_bytecode = sys.dont_write_bytecode\n+ sys.dont_write_bytecode = True\n+ try:\n+ module = LoadPythonSource( _RandomName(), module_file )\n+ finally:\n+ sys.dont_write_bytecode = old_dont_write_bytecode\n+\n del sys.path[ 0 ]\n \n with _module_for_module_file_lock:\n", "issue": "Processing .ycm_extra_conf.py creates __pycache__ directory\nI'm not sure if this is the intended behaviour. When YCM reads configuration creates a compiled version in `__pycache__`. I know that this behaviour can be disabled passing to `python` the `-B` argument or setting `PYTHONDONTWRITEBYTECODE=1` environmental variable. I don't want to disable global bytecode generation but I want to disable for `.ycm_extra_conf.py` because I feel it pollutes my project directory. \n\nIs there a easy/reliable way to disable it in the YCM config?\n\n", "before_files": [{"content": "# Copyright (C) 2011, 2012 Google Inc.\n#\n# This file is part of ycmd.\n#\n# ycmd is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# ycmd is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with ycmd. 
If not, see <http://www.gnu.org/licenses/>.\n\n# NOTE: This module is used as a Singleton\n\nfrom __future__ import unicode_literals\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\nfrom future import standard_library\nstandard_library.install_aliases()\nfrom builtins import * # noqa\n\nimport os\nimport random\nimport string\nimport sys\nimport logging\nfrom threading import Lock\nfrom ycmd import user_options_store\nfrom ycmd.responses import UnknownExtraConf, YCM_EXTRA_CONF_FILENAME\nfrom ycmd.utils import LoadPythonSource, PathsToAllParentFolders\nfrom fnmatch import fnmatch\n\n\n# Singleton variables\n_module_for_module_file = {}\n_module_for_module_file_lock = Lock()\n_module_file_for_source_file = {}\n_module_file_for_source_file_lock = Lock()\n\n\ndef Reset():\n global _module_for_module_file, _module_file_for_source_file\n _module_for_module_file = {}\n _module_file_for_source_file = {}\n\n\ndef ModuleForSourceFile( filename ):\n return Load( ModuleFileForSourceFile( filename ) )\n\n\ndef ModuleFileForSourceFile( filename ):\n \"\"\"This will try all files returned by _ExtraConfModuleSourceFilesForFile in\n order and return the filename of the first module that was allowed to load.\n If no module was found or allowed to load, None is returned.\"\"\"\n\n with _module_file_for_source_file_lock:\n if filename not in _module_file_for_source_file:\n for module_file in _ExtraConfModuleSourceFilesForFile( filename ):\n if Load( module_file ):\n _module_file_for_source_file[ filename ] = module_file\n break\n\n return _module_file_for_source_file.setdefault( filename )\n\n\ndef CallGlobalExtraConfYcmCorePreloadIfExists():\n _CallGlobalExtraConfMethod( 'YcmCorePreload' )\n\n\ndef Shutdown():\n # VimClose is for the sake of backwards compatibility; it's a no-op when it\n # doesn't exist.\n _CallGlobalExtraConfMethod( 'VimClose' )\n _CallGlobalExtraConfMethod( 'Shutdown' )\n\n\ndef _CallGlobalExtraConfMethod( function_name ):\n logger = _Logger()\n global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()\n if not ( global_ycm_extra_conf and\n os.path.exists( global_ycm_extra_conf ) ):\n logger.debug( 'No global extra conf, not calling method ' + function_name )\n return\n\n module = Load( global_ycm_extra_conf, force = True )\n if not module or not hasattr( module, function_name ):\n logger.debug( 'Global extra conf not loaded or no function ' +\n function_name )\n return\n\n logger.info( 'Calling global extra conf method {0} on conf file {1}'.format(\n function_name, global_ycm_extra_conf ) )\n getattr( module, function_name )()\n\n\ndef Disable( module_file ):\n \"\"\"Disables the loading of a module for the current session.\"\"\"\n with _module_for_module_file_lock:\n _module_for_module_file[ module_file ] = None\n\n\ndef _ShouldLoad( module_file ):\n \"\"\"Checks if a module is safe to be loaded. 
By default this will try to\n decide using a white-/blacklist and ask the user for confirmation as a\n fallback.\"\"\"\n\n if ( module_file == _GlobalYcmExtraConfFileLocation() or\n not user_options_store.Value( 'confirm_extra_conf' ) ):\n return True\n\n globlist = user_options_store.Value( 'extra_conf_globlist' )\n for glob in globlist:\n is_blacklisted = glob[0] == '!'\n if _MatchesGlobPattern( module_file, glob.lstrip('!') ):\n return not is_blacklisted\n\n raise UnknownExtraConf( module_file )\n\n\ndef Load( module_file, force = False ):\n \"\"\"Load and return the module contained in a file.\n Using force = True the module will be loaded regardless\n of the criteria in _ShouldLoad.\n This will return None if the module was not allowed to be loaded.\"\"\"\n\n if not module_file:\n return None\n\n if not force:\n with _module_for_module_file_lock:\n if module_file in _module_for_module_file:\n return _module_for_module_file[ module_file ]\n\n if not _ShouldLoad( module_file ):\n Disable( module_file )\n return None\n\n # This has to be here because a long time ago, the ycm_extra_conf.py files\n # used to import clang_helpers.py from the cpp folder. This is not needed\n # anymore, but there are a lot of old ycm_extra_conf.py files that we don't\n # want to break.\n sys.path.insert( 0, _PathToCppCompleterFolder() )\n module = LoadPythonSource( _RandomName(), module_file )\n del sys.path[ 0 ]\n\n with _module_for_module_file_lock:\n _module_for_module_file[ module_file ] = module\n return module\n\n\ndef _MatchesGlobPattern( filename, glob ):\n \"\"\"Returns true if a filename matches a given pattern. A '~' in glob will be\n expanded to the home directory and checking will be performed using absolute\n paths. See the documentation of fnmatch for the supported patterns.\"\"\"\n\n abspath = os.path.abspath( filename )\n return fnmatch( abspath, os.path.abspath( os.path.expanduser( glob ) ) )\n\n\ndef _ExtraConfModuleSourceFilesForFile( filename ):\n \"\"\"For a given filename, search all parent folders for YCM_EXTRA_CONF_FILENAME\n files that will compute the flags necessary to compile the file.\n If _GlobalYcmExtraConfFileLocation() exists it is returned as a fallback.\"\"\"\n\n for folder in PathsToAllParentFolders( filename ):\n candidate = os.path.join( folder, YCM_EXTRA_CONF_FILENAME )\n if os.path.exists( candidate ):\n yield candidate\n global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()\n if ( global_ycm_extra_conf\n and os.path.exists( global_ycm_extra_conf ) ):\n yield global_ycm_extra_conf\n\n\ndef _PathToCppCompleterFolder():\n \"\"\"Returns the path to the 'cpp' completer folder. 
This is necessary\n because ycm_extra_conf files need it on the path.\"\"\"\n return os.path.join( _DirectoryOfThisScript(), 'completers', 'cpp' )\n\n\ndef _DirectoryOfThisScript():\n return os.path.dirname( os.path.abspath( __file__ ) )\n\n\ndef _RandomName():\n \"\"\"Generates a random module name.\"\"\"\n return ''.join( random.choice( string.ascii_lowercase ) for x in range( 15 ) )\n\n\ndef _GlobalYcmExtraConfFileLocation():\n return os.path.expanduser(\n user_options_store.Value( 'global_ycm_extra_conf' ) )\n\n\ndef _Logger():\n return logging.getLogger( __name__ )\n", "path": "ycmd/extra_conf_store.py"}], "after_files": [{"content": "# Copyright (C) 2011, 2012 Google Inc.\n#\n# This file is part of ycmd.\n#\n# ycmd is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# ycmd is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with ycmd. If not, see <http://www.gnu.org/licenses/>.\n\n# NOTE: This module is used as a Singleton\n\nfrom __future__ import unicode_literals\nfrom __future__ import print_function\nfrom __future__ import division\nfrom __future__ import absolute_import\nfrom future import standard_library\nstandard_library.install_aliases()\nfrom builtins import * # noqa\n\nimport os\nimport random\nimport string\nimport sys\nimport logging\nfrom threading import Lock\nfrom ycmd import user_options_store\nfrom ycmd.responses import UnknownExtraConf, YCM_EXTRA_CONF_FILENAME\nfrom ycmd.utils import LoadPythonSource, PathsToAllParentFolders\nfrom fnmatch import fnmatch\n\n\n# Singleton variables\n_module_for_module_file = {}\n_module_for_module_file_lock = Lock()\n_module_file_for_source_file = {}\n_module_file_for_source_file_lock = Lock()\n\n\ndef Reset():\n global _module_for_module_file, _module_file_for_source_file\n _module_for_module_file = {}\n _module_file_for_source_file = {}\n\n\ndef ModuleForSourceFile( filename ):\n return Load( ModuleFileForSourceFile( filename ) )\n\n\ndef ModuleFileForSourceFile( filename ):\n \"\"\"This will try all files returned by _ExtraConfModuleSourceFilesForFile in\n order and return the filename of the first module that was allowed to load.\n If no module was found or allowed to load, None is returned.\"\"\"\n\n with _module_file_for_source_file_lock:\n if filename not in _module_file_for_source_file:\n for module_file in _ExtraConfModuleSourceFilesForFile( filename ):\n if Load( module_file ):\n _module_file_for_source_file[ filename ] = module_file\n break\n\n return _module_file_for_source_file.setdefault( filename )\n\n\ndef CallGlobalExtraConfYcmCorePreloadIfExists():\n _CallGlobalExtraConfMethod( 'YcmCorePreload' )\n\n\ndef Shutdown():\n # VimClose is for the sake of backwards compatibility; it's a no-op when it\n # doesn't exist.\n _CallGlobalExtraConfMethod( 'VimClose' )\n _CallGlobalExtraConfMethod( 'Shutdown' )\n\n\ndef _CallGlobalExtraConfMethod( function_name ):\n logger = _Logger()\n global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()\n if not ( global_ycm_extra_conf and\n os.path.exists( global_ycm_extra_conf ) ):\n logger.debug( 'No global extra conf, not calling method ' + 
function_name )\n return\n\n module = Load( global_ycm_extra_conf, force = True )\n if not module or not hasattr( module, function_name ):\n logger.debug( 'Global extra conf not loaded or no function ' +\n function_name )\n return\n\n logger.info( 'Calling global extra conf method {0} on conf file {1}'.format(\n function_name, global_ycm_extra_conf ) )\n getattr( module, function_name )()\n\n\ndef Disable( module_file ):\n \"\"\"Disables the loading of a module for the current session.\"\"\"\n with _module_for_module_file_lock:\n _module_for_module_file[ module_file ] = None\n\n\ndef _ShouldLoad( module_file ):\n \"\"\"Checks if a module is safe to be loaded. By default this will try to\n decide using a white-/blacklist and ask the user for confirmation as a\n fallback.\"\"\"\n\n if ( module_file == _GlobalYcmExtraConfFileLocation() or\n not user_options_store.Value( 'confirm_extra_conf' ) ):\n return True\n\n globlist = user_options_store.Value( 'extra_conf_globlist' )\n for glob in globlist:\n is_blacklisted = glob[0] == '!'\n if _MatchesGlobPattern( module_file, glob.lstrip('!') ):\n return not is_blacklisted\n\n raise UnknownExtraConf( module_file )\n\n\ndef Load( module_file, force = False ):\n \"\"\"Load and return the module contained in a file.\n Using force = True the module will be loaded regardless\n of the criteria in _ShouldLoad.\n This will return None if the module was not allowed to be loaded.\"\"\"\n\n if not module_file:\n return None\n\n if not force:\n with _module_for_module_file_lock:\n if module_file in _module_for_module_file:\n return _module_for_module_file[ module_file ]\n\n if not _ShouldLoad( module_file ):\n Disable( module_file )\n return None\n\n # This has to be here because a long time ago, the ycm_extra_conf.py files\n # used to import clang_helpers.py from the cpp folder. This is not needed\n # anymore, but there are a lot of old ycm_extra_conf.py files that we don't\n # want to break.\n sys.path.insert( 0, _PathToCppCompleterFolder() )\n\n # By default, the Python interpreter compiles source files into bytecode to\n # load them faster next time they are run. These *.pyc files are generated\n # along the source files prior to Python 3.2 or in a __pycache__ folder for\n # newer versions. We disable the generation of these files when loading\n # ycm_extra_conf.py files as users do not want them inside their projects.\n # The drawback is negligible since ycm_extra_conf.py files are generally small\n # files thus really fast to compile and only loaded once by editing session.\n old_dont_write_bytecode = sys.dont_write_bytecode\n sys.dont_write_bytecode = True\n try:\n module = LoadPythonSource( _RandomName(), module_file )\n finally:\n sys.dont_write_bytecode = old_dont_write_bytecode\n\n del sys.path[ 0 ]\n\n with _module_for_module_file_lock:\n _module_for_module_file[ module_file ] = module\n return module\n\n\ndef _MatchesGlobPattern( filename, glob ):\n \"\"\"Returns true if a filename matches a given pattern. A '~' in glob will be\n expanded to the home directory and checking will be performed using absolute\n paths. 
See the documentation of fnmatch for the supported patterns.\"\"\"\n\n abspath = os.path.abspath( filename )\n return fnmatch( abspath, os.path.abspath( os.path.expanduser( glob ) ) )\n\n\ndef _ExtraConfModuleSourceFilesForFile( filename ):\n \"\"\"For a given filename, search all parent folders for YCM_EXTRA_CONF_FILENAME\n files that will compute the flags necessary to compile the file.\n If _GlobalYcmExtraConfFileLocation() exists it is returned as a fallback.\"\"\"\n\n for folder in PathsToAllParentFolders( filename ):\n candidate = os.path.join( folder, YCM_EXTRA_CONF_FILENAME )\n if os.path.exists( candidate ):\n yield candidate\n global_ycm_extra_conf = _GlobalYcmExtraConfFileLocation()\n if ( global_ycm_extra_conf\n and os.path.exists( global_ycm_extra_conf ) ):\n yield global_ycm_extra_conf\n\n\ndef _PathToCppCompleterFolder():\n \"\"\"Returns the path to the 'cpp' completer folder. This is necessary\n because ycm_extra_conf files need it on the path.\"\"\"\n return os.path.join( _DirectoryOfThisScript(), 'completers', 'cpp' )\n\n\ndef _DirectoryOfThisScript():\n return os.path.dirname( os.path.abspath( __file__ ) )\n\n\ndef _RandomName():\n \"\"\"Generates a random module name.\"\"\"\n return ''.join( random.choice( string.ascii_lowercase ) for x in range( 15 ) )\n\n\ndef _GlobalYcmExtraConfFileLocation():\n return os.path.expanduser(\n user_options_store.Value( 'global_ycm_extra_conf' ) )\n\n\ndef _Logger():\n return logging.getLogger( __name__ )\n", "path": "ycmd/extra_conf_store.py"}]}
| 2,572 | 336 |
gh_patches_debug_18735
|
rasdani/github-patches
|
git_diff
|
openfun__marsha-1060
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create XAPI statements for live video
## Feature Request
**Is your feature request related to a problem or unsupported use case? Please describe.**
When a video is a live stream, all the existing XAPI statements are sent just as for regular videos. Some events should not be sent, and some data cannot be computed.
**Describe the solution you'd like**
Change the activity-type to `http://id.tincanapi.com/activitytype/webinar`
Send statements for these events:
- initialized
- play
- pause
- interacted
Also, do not send the video length info, since it is not available for a live stream. The completion threshold cannot be computed either.
--- END ISSUE ---
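For illustration, a minimal sketch of the activity-type switch requested above, assuming the video model exposes a `live_state` attribute that is `None` for regular videos (which is what the patch for this record relies on):

```python
# Hypothetical helper mirroring the requested behaviour; `video.live_state`
# is assumed to be None for regular (non-live) videos.
VIDEO_TYPE = "https://w3id.org/xapi/video/activity-type/video"
WEBINAR_TYPE = "http://id.tincanapi.com/activitytype/webinar"


def xapi_activity_type(video):
    """Return the xAPI activity type, switching to webinar for live videos."""
    if video.live_state is not None:
        return WEBINAR_TYPE
    return VIDEO_TYPE
```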
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/backend/marsha/core/xapi.py`
Content:
```
1 """XAPI module."""
2 import re
3 import uuid
4
5 from django.conf import settings
6 from django.utils import timezone
7 from django.utils.translation import to_locale
8
9 import requests
10
11
12 class XAPIStatement:
13 """Object to work on a XAPI Statement."""
14
15 statement = None
16
17 def __init__(self, video, statement, lti_user):
18 """Compute a valid xapi satement.
19
20 Parameters
21 ----------
22 video : Type[.models/videos]
23 The video object used in the xAPI statement
24
25 statement : dictionary
26 Statement containing base information to send to the LRS
27 An example of expected statement:
28 {
29 "verb": {
30 "id": "http://adlnet.gov/expapi/verbs/initialized",
31 "display": {
32 "en-US": "initialized"
33 }
34 },
35 "context": {
36 "extensions": {
37 "https://w3id.org/xapi/video/extensions/volume": 1,
38 "https://w3id.org/xapi/video/extensions/video-playback-size": "640x264",
39 }
40 }
41 }
42
43 lti_user : Type[lti.LTIUser]
44 Object representing data stored in the JWT Token and related to the user authenticated
45 with LTI
46
47 """
48 try:
49 user_id = lti_user.user.get("id")
50 except AttributeError:
51 user_id = lti_user.session_id
52
53 homepage = video.playlist.consumer_site.domain
54
55 if re.match(r"^http(s?):\/\/.*", homepage) is None:
56 homepage = f"http://{homepage}"
57
58 if "id" not in statement:
59 statement["id"] = str(uuid.uuid4())
60
61 statement["timestamp"] = timezone.now().isoformat()
62 statement["context"].update(
63 {"contextActivities": {"category": [{"id": "https://w3id.org/xapi/video"}]}}
64 )
65
66 statement["actor"] = {
67 "objectType": "Agent",
68 "account": {"name": user_id, "homePage": homepage},
69 }
70
71 statement["object"] = {
72 "definition": {
73 "type": "https://w3id.org/xapi/video/activity-type/video",
74 "name": {
75 to_locale(settings.LANGUAGE_CODE).replace("_", "-"): video.title
76 },
77 },
78 "id": "uuid://{id}".format(id=str(video.id)),
79 "objectType": "Activity",
80 }
81
82 object_extensions = {}
83 if lti_user.course.get("school_name") is not None:
84 object_extensions[
85 "https://w3id.org/xapi/acrossx/extensions/school"
86 ] = lti_user.course["school_name"]
87
88 if lti_user.course.get("course_name") is not None:
89 object_extensions[
90 "http://adlnet.gov/expapi/activities/course"
91 ] = lti_user.course["course_name"]
92
93 if lti_user.course.get("course_run") is not None:
94 object_extensions[
95 "http://adlnet.gov/expapi/activities/module"
96 ] = lti_user.course["course_run"]
97
98 if object_extensions:
99 statement["object"]["definition"]["extensions"] = object_extensions
100
101 self.statement = statement
102
103 def get_statement(self):
104 """Return the enriched statement."""
105 return self.statement
106
107
108 class XAPI:
109 """The XAPI object compute statements and send them to a LRS."""
110
111 def __init__(self, url, auth_token, xapi_version="1.0.3"):
112 """Initialize the XAPI module.
113
114 Parameters
115 ----------
116 url: string
117 The LRS endpoint to fetch
118
119 auth_token: string
120 The basic_auth token used to authenticate on the LRS
121
122 xapi_version: string
123 The xAPI version used.
124
125 """
126 self.url = url
127 self.auth_token = auth_token
128 self.xapi_version = xapi_version
129
130 def send(self, xapi_statement):
131 """Send the statement to a LRS.
132
133 Parameters
134 ----------
135 statement : Type[.XAPIStatement]
136
137 """
138 headers = {
139 "Authorization": self.auth_token,
140 "Content-Type": "application/json",
141 "X-Experience-API-Version": self.xapi_version,
142 }
143
144 response = requests.post(
145 self.url, json=xapi_statement.get_statement(), headers=headers
146 )
147
148 response.raise_for_status()
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/backend/marsha/core/xapi.py b/src/backend/marsha/core/xapi.py
--- a/src/backend/marsha/core/xapi.py
+++ b/src/backend/marsha/core/xapi.py
@@ -52,6 +52,12 @@
homepage = video.playlist.consumer_site.domain
+ activity_type = "https://w3id.org/xapi/video/activity-type/video"
+
+ # When the video is a live we change the activity to webinar
+ if video.live_state is not None:
+ activity_type = "http://id.tincanapi.com/activitytype/webinar"
+
if re.match(r"^http(s?):\/\/.*", homepage) is None:
homepage = f"http://{homepage}"
@@ -70,7 +76,7 @@
statement["object"] = {
"definition": {
- "type": "https://w3id.org/xapi/video/activity-type/video",
+ "type": activity_type,
"name": {
to_locale(settings.LANGUAGE_CODE).replace("_", "-"): video.title
},
|
{"golden_diff": "diff --git a/src/backend/marsha/core/xapi.py b/src/backend/marsha/core/xapi.py\n--- a/src/backend/marsha/core/xapi.py\n+++ b/src/backend/marsha/core/xapi.py\n@@ -52,6 +52,12 @@\n \n homepage = video.playlist.consumer_site.domain\n \n+ activity_type = \"https://w3id.org/xapi/video/activity-type/video\"\n+\n+ # When the video is a live we change the activity to webinar\n+ if video.live_state is not None:\n+ activity_type = \"http://id.tincanapi.com/activitytype/webinar\"\n+\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n \n@@ -70,7 +76,7 @@\n \n statement[\"object\"] = {\n \"definition\": {\n- \"type\": \"https://w3id.org/xapi/video/activity-type/video\",\n+ \"type\": activity_type,\n \"name\": {\n to_locale(settings.LANGUAGE_CODE).replace(\"_\", \"-\"): video.title\n },\n", "issue": "Create XAPI statements for live video\n## Feature Request\r\n\r\n**Is your feature request related to a problem or unsupported use case? Please describe.**\r\n\r\nWhen a video is a live all the existing XAPI statement are sent like a regular videos. Some events should not be sent and some data can't be computed\r\n\r\n**Describe the solution you'd like**\r\n\r\nChange the activity-type to `http://id.tincanapi.com/activitytype/webinar`\r\nSend statement for those events : \r\n- initialized\r\n- play\r\n- pause\r\n- interacted\r\n\r\nAlso, do not send video length info, we can't have it. The completion threshold can not be computed too.\r\n\n", "before_files": [{"content": "\"\"\"XAPI module.\"\"\"\nimport re\nimport uuid\n\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom django.utils.translation import to_locale\n\nimport requests\n\n\nclass XAPIStatement:\n \"\"\"Object to work on a XAPI Statement.\"\"\"\n\n statement = None\n\n def __init__(self, video, statement, lti_user):\n \"\"\"Compute a valid xapi satement.\n\n Parameters\n ----------\n video : Type[.models/videos]\n The video object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n \"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n \"context\": {\n \"extensions\": {\n \"https://w3id.org/xapi/video/extensions/volume\": 1,\n \"https://w3id.org/xapi/video/extensions/video-playback-size\": \"640x264\",\n }\n }\n }\n\n lti_user : Type[lti.LTIUser]\n Object representing data stored in the JWT Token and related to the user authenticated\n with LTI\n\n \"\"\"\n try:\n user_id = lti_user.user.get(\"id\")\n except AttributeError:\n user_id = lti_user.session_id\n\n homepage = video.playlist.consumer_site.domain\n\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n\n if \"id\" not in statement:\n statement[\"id\"] = str(uuid.uuid4())\n\n statement[\"timestamp\"] = timezone.now().isoformat()\n statement[\"context\"].update(\n {\"contextActivities\": {\"category\": [{\"id\": \"https://w3id.org/xapi/video\"}]}}\n )\n\n statement[\"actor\"] = {\n \"objectType\": \"Agent\",\n \"account\": {\"name\": user_id, \"homePage\": homepage},\n }\n\n statement[\"object\"] = {\n \"definition\": {\n \"type\": \"https://w3id.org/xapi/video/activity-type/video\",\n \"name\": {\n to_locale(settings.LANGUAGE_CODE).replace(\"_\", \"-\"): video.title\n },\n },\n \"id\": \"uuid://{id}\".format(id=str(video.id)),\n \"objectType\": \"Activity\",\n }\n\n object_extensions = {}\n 
if lti_user.course.get(\"school_name\") is not None:\n object_extensions[\n \"https://w3id.org/xapi/acrossx/extensions/school\"\n ] = lti_user.course[\"school_name\"]\n\n if lti_user.course.get(\"course_name\") is not None:\n object_extensions[\n \"http://adlnet.gov/expapi/activities/course\"\n ] = lti_user.course[\"course_name\"]\n\n if lti_user.course.get(\"course_run\") is not None:\n object_extensions[\n \"http://adlnet.gov/expapi/activities/module\"\n ] = lti_user.course[\"course_run\"]\n\n if object_extensions:\n statement[\"object\"][\"definition\"][\"extensions\"] = object_extensions\n\n self.statement = statement\n\n def get_statement(self):\n \"\"\"Return the enriched statement.\"\"\"\n return self.statement\n\n\nclass XAPI:\n \"\"\"The XAPI object compute statements and send them to a LRS.\"\"\"\n\n def __init__(self, url, auth_token, xapi_version=\"1.0.3\"):\n \"\"\"Initialize the XAPI module.\n\n Parameters\n ----------\n url: string\n The LRS endpoint to fetch\n\n auth_token: string\n The basic_auth token used to authenticate on the LRS\n\n xapi_version: string\n The xAPI version used.\n\n \"\"\"\n self.url = url\n self.auth_token = auth_token\n self.xapi_version = xapi_version\n\n def send(self, xapi_statement):\n \"\"\"Send the statement to a LRS.\n\n Parameters\n ----------\n statement : Type[.XAPIStatement]\n\n \"\"\"\n headers = {\n \"Authorization\": self.auth_token,\n \"Content-Type\": \"application/json\",\n \"X-Experience-API-Version\": self.xapi_version,\n }\n\n response = requests.post(\n self.url, json=xapi_statement.get_statement(), headers=headers\n )\n\n response.raise_for_status()\n", "path": "src/backend/marsha/core/xapi.py"}], "after_files": [{"content": "\"\"\"XAPI module.\"\"\"\nimport re\nimport uuid\n\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom django.utils.translation import to_locale\n\nimport requests\n\n\nclass XAPIStatement:\n \"\"\"Object to work on a XAPI Statement.\"\"\"\n\n statement = None\n\n def __init__(self, video, statement, lti_user):\n \"\"\"Compute a valid xapi satement.\n\n Parameters\n ----------\n video : Type[.models/videos]\n The video object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n \"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n \"context\": {\n \"extensions\": {\n \"https://w3id.org/xapi/video/extensions/volume\": 1,\n \"https://w3id.org/xapi/video/extensions/video-playback-size\": \"640x264\",\n }\n }\n }\n\n lti_user : Type[lti.LTIUser]\n Object representing data stored in the JWT Token and related to the user authenticated\n with LTI\n\n \"\"\"\n try:\n user_id = lti_user.user.get(\"id\")\n except AttributeError:\n user_id = lti_user.session_id\n\n homepage = video.playlist.consumer_site.domain\n\n activity_type = \"https://w3id.org/xapi/video/activity-type/video\"\n\n # When the video is a live we change the activity to webinar\n if video.live_state is not None:\n activity_type = \"http://id.tincanapi.com/activitytype/webinar\"\n\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n\n if \"id\" not in statement:\n statement[\"id\"] = str(uuid.uuid4())\n\n statement[\"timestamp\"] = timezone.now().isoformat()\n statement[\"context\"].update(\n {\"contextActivities\": {\"category\": [{\"id\": \"https://w3id.org/xapi/video\"}]}}\n )\n\n statement[\"actor\"] = {\n 
\"objectType\": \"Agent\",\n \"account\": {\"name\": user_id, \"homePage\": homepage},\n }\n\n statement[\"object\"] = {\n \"definition\": {\n \"type\": activity_type,\n \"name\": {\n to_locale(settings.LANGUAGE_CODE).replace(\"_\", \"-\"): video.title\n },\n },\n \"id\": \"uuid://{id}\".format(id=str(video.id)),\n \"objectType\": \"Activity\",\n }\n\n object_extensions = {}\n if lti_user.course.get(\"school_name\") is not None:\n object_extensions[\n \"https://w3id.org/xapi/acrossx/extensions/school\"\n ] = lti_user.course[\"school_name\"]\n\n if lti_user.course.get(\"course_name\") is not None:\n object_extensions[\n \"http://adlnet.gov/expapi/activities/course\"\n ] = lti_user.course[\"course_name\"]\n\n if lti_user.course.get(\"course_run\") is not None:\n object_extensions[\n \"http://adlnet.gov/expapi/activities/module\"\n ] = lti_user.course[\"course_run\"]\n\n if object_extensions:\n statement[\"object\"][\"definition\"][\"extensions\"] = object_extensions\n\n self.statement = statement\n\n def get_statement(self):\n \"\"\"Return the enriched statement.\"\"\"\n return self.statement\n\n\nclass XAPI:\n \"\"\"The XAPI object compute statements and send them to a LRS.\"\"\"\n\n def __init__(self, url, auth_token, xapi_version=\"1.0.3\"):\n \"\"\"Initialize the XAPI module.\n\n Parameters\n ----------\n url: string\n The LRS endpoint to fetch\n\n auth_token: string\n The basic_auth token used to authenticate on the LRS\n\n xapi_version: string\n The xAPI version used.\n\n \"\"\"\n self.url = url\n self.auth_token = auth_token\n self.xapi_version = xapi_version\n\n def send(self, xapi_statement):\n \"\"\"Send the statement to a LRS.\n\n Parameters\n ----------\n statement : Type[.XAPIStatement]\n\n \"\"\"\n headers = {\n \"Authorization\": self.auth_token,\n \"Content-Type\": \"application/json\",\n \"X-Experience-API-Version\": self.xapi_version,\n }\n\n response = requests.post(\n self.url, json=xapi_statement.get_statement(), headers=headers\n )\n\n response.raise_for_status()\n", "path": "src/backend/marsha/core/xapi.py"}]}
| 1,693 | 238 |
gh_patches_debug_34117
|
rasdani/github-patches
|
git_diff
|
ESMCI__cime-1090
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
case.lt_archive
The lt_archive script has several problems that prevent it from functioning.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/python/CIME/case_lt_archive.py`
Content:
```
1 from CIME.XML.standard_module_setup import *
2 from CIME.utils import expect, does_file_have_string, append_status
3 from CIME.XML.lt_archive import LTArchive
4
5 import time
6
7 logger = logging.getLogger(__name__)
8
9 ###############################################################################
10 def case_lt_archive(case):
11 ###############################################################################
12 caseroot = case.get_value("CASEROOT")
13
14 # max number of threads needed by scripts
15 os.environ["maxthrds"] = 1
16
17 # document start
18 append_status("lt_archive starting",caseroot=caseroot,sfile="CaseStatus")
19
20 # determine status of run and short term archiving
21 runComplete = does_file_have_string(os.path.join(caseroot, "CaseStatus"),
22 "run SUCCESSFUL")
23 staComplete = does_file_have_string(os.path.join(caseroot, "stArchiveStatus"),
24 "st_archive_complete")
25
26 # set up envrionment vars and call the lt_archive.sh script
27 if runComplete and staComplete:
28 os.environ["DOUT_S_ROOT"] = case.get_value("DOUT_S_ROOT")
29 os.environ["DOUT_L_MSROOT"] = case.get_value("DOUT_L_MSROOT")
30 os.environ["DOUT_L_HPSS_ACCNT"] = case.get_value("DOUT_L_HPSS_ACCNT")
31
32 lid = time.strftime("%y%m%d-%H%M%S")
33 lt_archive = LTArchive(case.get_value("MACH"))
34 lt_archive_args = lt_archive.get_lt_archive_args()
35 cmd = os.path.join(caseroot, "Tools/lt_archive.sh") \
36 + lt_archive_args + "ltArchiveStatus." + lid + " 2>&1"
37 run_cmd_no_fail(cmd, from_dir=caseroot)
38 else:
39 expect(False,
40 "lt_archive: run or st_archive is not yet complete or was not successful."
41 "Unable to perform long term archive...")
42
43 # document completion
44 append_status("lt_archive completed" ,caseroot=caseroot, sfile="CaseStatus")
45
46 return True
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/utils/python/CIME/case_lt_archive.py b/utils/python/CIME/case_lt_archive.py
--- a/utils/python/CIME/case_lt_archive.py
+++ b/utils/python/CIME/case_lt_archive.py
@@ -12,17 +12,16 @@
caseroot = case.get_value("CASEROOT")
# max number of threads needed by scripts
- os.environ["maxthrds"] = 1
+ os.environ["maxthrds"] = "1"
# document start
append_status("lt_archive starting",caseroot=caseroot,sfile="CaseStatus")
# determine status of run and short term archiving
runComplete = does_file_have_string(os.path.join(caseroot, "CaseStatus"),
- "run SUCCESSFUL")
- staComplete = does_file_have_string(os.path.join(caseroot, "stArchiveStatus"),
- "st_archive_complete")
-
+ "Run SUCCESSFUL")
+ staComplete = does_file_have_string(os.path.join(caseroot, "CaseStatus"),
+ "st_archiving completed")
# set up envrionment vars and call the lt_archive.sh script
if runComplete and staComplete:
os.environ["DOUT_S_ROOT"] = case.get_value("DOUT_S_ROOT")
@@ -32,10 +31,13 @@
lid = time.strftime("%y%m%d-%H%M%S")
lt_archive = LTArchive(case.get_value("MACH"))
lt_archive_args = lt_archive.get_lt_archive_args()
- cmd = os.path.join(caseroot, "Tools/lt_archive.sh") \
+ if lt_archive_args is None:
+ lt_archive_args = " "
+ cmd = os.path.join(caseroot, "Tools", "lt_archive.sh") \
+ lt_archive_args + "ltArchiveStatus." + lid + " 2>&1"
run_cmd_no_fail(cmd, from_dir=caseroot)
else:
+ logger.warn("runComplete %s staComplete %s"%(runComplete, staComplete))
expect(False,
"lt_archive: run or st_archive is not yet complete or was not successful."
"Unable to perform long term archive...")
|
{"golden_diff": "diff --git a/utils/python/CIME/case_lt_archive.py b/utils/python/CIME/case_lt_archive.py\n--- a/utils/python/CIME/case_lt_archive.py\n+++ b/utils/python/CIME/case_lt_archive.py\n@@ -12,17 +12,16 @@\n caseroot = case.get_value(\"CASEROOT\")\n \n # max number of threads needed by scripts\n- os.environ[\"maxthrds\"] = 1\n+ os.environ[\"maxthrds\"] = \"1\"\n \n # document start\n append_status(\"lt_archive starting\",caseroot=caseroot,sfile=\"CaseStatus\")\n \n # determine status of run and short term archiving\n runComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n- \"run SUCCESSFUL\")\n- staComplete = does_file_have_string(os.path.join(caseroot, \"stArchiveStatus\"),\n- \"st_archive_complete\")\n-\n+ \"Run SUCCESSFUL\")\n+ staComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n+ \"st_archiving completed\")\n # set up envrionment vars and call the lt_archive.sh script\n if runComplete and staComplete:\n os.environ[\"DOUT_S_ROOT\"] = case.get_value(\"DOUT_S_ROOT\")\n@@ -32,10 +31,13 @@\n lid = time.strftime(\"%y%m%d-%H%M%S\")\n lt_archive = LTArchive(case.get_value(\"MACH\"))\n lt_archive_args = lt_archive.get_lt_archive_args()\n- cmd = os.path.join(caseroot, \"Tools/lt_archive.sh\") \\\n+ if lt_archive_args is None:\n+ lt_archive_args = \" \"\n+ cmd = os.path.join(caseroot, \"Tools\", \"lt_archive.sh\") \\\n + lt_archive_args + \"ltArchiveStatus.\" + lid + \" 2>&1\"\n run_cmd_no_fail(cmd, from_dir=caseroot)\n else:\n+ logger.warn(\"runComplete %s staComplete %s\"%(runComplete, staComplete))\n expect(False,\n \"lt_archive: run or st_archive is not yet complete or was not successful.\"\n \"Unable to perform long term archive...\")\n", "issue": "case.lt_archive\nlt_archive script has several problems preventing functionality. 
\n", "before_files": [{"content": "from CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, does_file_have_string, append_status\nfrom CIME.XML.lt_archive import LTArchive\n\nimport time\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef case_lt_archive(case):\n###############################################################################\n caseroot = case.get_value(\"CASEROOT\")\n\n # max number of threads needed by scripts\n os.environ[\"maxthrds\"] = 1\n\n # document start\n append_status(\"lt_archive starting\",caseroot=caseroot,sfile=\"CaseStatus\")\n\n # determine status of run and short term archiving\n runComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n \"run SUCCESSFUL\")\n staComplete = does_file_have_string(os.path.join(caseroot, \"stArchiveStatus\"),\n \"st_archive_complete\")\n\n # set up envrionment vars and call the lt_archive.sh script\n if runComplete and staComplete:\n os.environ[\"DOUT_S_ROOT\"] = case.get_value(\"DOUT_S_ROOT\")\n os.environ[\"DOUT_L_MSROOT\"] = case.get_value(\"DOUT_L_MSROOT\")\n os.environ[\"DOUT_L_HPSS_ACCNT\"] = case.get_value(\"DOUT_L_HPSS_ACCNT\")\n\n lid = time.strftime(\"%y%m%d-%H%M%S\")\n lt_archive = LTArchive(case.get_value(\"MACH\"))\n lt_archive_args = lt_archive.get_lt_archive_args()\n cmd = os.path.join(caseroot, \"Tools/lt_archive.sh\") \\\n + lt_archive_args + \"ltArchiveStatus.\" + lid + \" 2>&1\"\n run_cmd_no_fail(cmd, from_dir=caseroot)\n else:\n expect(False,\n \"lt_archive: run or st_archive is not yet complete or was not successful.\"\n \"Unable to perform long term archive...\")\n\n # document completion\n append_status(\"lt_archive completed\" ,caseroot=caseroot, sfile=\"CaseStatus\")\n\n return True\n", "path": "utils/python/CIME/case_lt_archive.py"}], "after_files": [{"content": "from CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, does_file_have_string, append_status\nfrom CIME.XML.lt_archive import LTArchive\n\nimport time\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef case_lt_archive(case):\n###############################################################################\n caseroot = case.get_value(\"CASEROOT\")\n\n # max number of threads needed by scripts\n os.environ[\"maxthrds\"] = \"1\"\n\n # document start\n append_status(\"lt_archive starting\",caseroot=caseroot,sfile=\"CaseStatus\")\n\n # determine status of run and short term archiving\n runComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n \"Run SUCCESSFUL\")\n staComplete = does_file_have_string(os.path.join(caseroot, \"CaseStatus\"),\n \"st_archiving completed\")\n # set up envrionment vars and call the lt_archive.sh script\n if runComplete and staComplete:\n os.environ[\"DOUT_S_ROOT\"] = case.get_value(\"DOUT_S_ROOT\")\n os.environ[\"DOUT_L_MSROOT\"] = case.get_value(\"DOUT_L_MSROOT\")\n os.environ[\"DOUT_L_HPSS_ACCNT\"] = case.get_value(\"DOUT_L_HPSS_ACCNT\")\n\n lid = time.strftime(\"%y%m%d-%H%M%S\")\n lt_archive = LTArchive(case.get_value(\"MACH\"))\n lt_archive_args = lt_archive.get_lt_archive_args()\n if lt_archive_args is None:\n lt_archive_args = \" \"\n cmd = os.path.join(caseroot, \"Tools\", \"lt_archive.sh\") \\\n + lt_archive_args + \"ltArchiveStatus.\" + lid + \" 2>&1\"\n run_cmd_no_fail(cmd, from_dir=caseroot)\n else:\n logger.warn(\"runComplete %s staComplete %s\"%(runComplete, staComplete))\n 
expect(False,\n \"lt_archive: run or st_archive is not yet complete or was not successful.\"\n \"Unable to perform long term archive...\")\n\n # document completion\n append_status(\"lt_archive completed\" ,caseroot=caseroot, sfile=\"CaseStatus\")\n\n return True\n", "path": "utils/python/CIME/case_lt_archive.py"}]}
| 796 | 481 |
gh_patches_debug_14320
|
rasdani/github-patches
|
git_diff
|
dynaconf__dynaconf-875
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug][Documentation] Exporting: write() got an unexpected keyword argument 'merge'
**Describe the bug**
Following the example in the documentation for [exporting](https://www.dynaconf.com/advanced/#exporting) Dynaconf data to a file raises an exception because of the `merge` argument.
**To Reproduce**
~~~Python
loaders.write("/a/b/c", DynaBox(config).to_dict(), merge=False)
~~~
**Expected behavior**
The file should have been written
**Actual Behavior**
`TypeError: write() got an unexpected keyword argument 'merge'`
Just a quick documentation fix,
thanks!
--- END ISSUE ---
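For illustration, a minimal sketch of the signature change that makes the documented call work, mirroring the patch at the end of this record; the imports are taken from `dynaconf/loaders/__init__.py` as shown below:

```python
# Sketch only: adds a pass-through `merge` keyword so the documented
# loaders.write(..., merge=False) call no longer raises TypeError.
from dynaconf.loaders import (
    ini_loader, json_loader, py_loader, toml_loader, yaml_loader,
)
from dynaconf.utils.boxing import DynaBox


def write(filename, data, env=None, merge=False):
    """Write `data` to `filename`, inferring the format from the extension."""
    loader_name = f"{filename.rpartition('.')[-1]}_loader"
    loader = globals().get(loader_name)
    if not loader:
        raise OSError(f"{loader_name} cannot be found.")
    data = DynaBox(data, box_settings={}).to_dict()
    if loader is not py_loader and env and env not in data:
        data = {env: data}
    # Forward the caller's merge preference instead of hard-coding False.
    loader.write(filename, data, merge=merge)
```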
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dynaconf/loaders/__init__.py`
Content:
```
1 from __future__ import annotations
2
3 import importlib
4 import os
5
6 from dynaconf import constants as ct
7 from dynaconf import default_settings
8 from dynaconf.loaders import ini_loader
9 from dynaconf.loaders import json_loader
10 from dynaconf.loaders import py_loader
11 from dynaconf.loaders import toml_loader
12 from dynaconf.loaders import yaml_loader
13 from dynaconf.utils import deduplicate
14 from dynaconf.utils import ensure_a_list
15 from dynaconf.utils.boxing import DynaBox
16 from dynaconf.utils.files import get_local_filename
17 from dynaconf.utils.parse_conf import false_values
18
19
20 def default_loader(obj, defaults=None):
21 """Loads default settings and check if there are overridings
22 exported as environment variables"""
23 defaults = defaults or {}
24 default_settings_values = {
25 key: value
26 for key, value in default_settings.__dict__.items() # noqa
27 if key.isupper()
28 }
29
30 all_keys = deduplicate(
31 list(defaults.keys()) + list(default_settings_values.keys())
32 )
33
34 for key in all_keys:
35 if not obj.exists(key):
36 value = defaults.get(key, default_settings_values.get(key))
37 obj.set(key, value)
38
39 # start dotenv to get default env vars from there
40 # check overrides in env vars
41 if obj.get("load_dotenv") is True:
42 default_settings.start_dotenv(obj)
43
44 # Deal with cases where a custom ENV_SWITCHER_IS_PROVIDED
45 # Example: Flask and Django Extensions
46 env_switcher = defaults.get(
47 "ENV_SWITCHER_FOR_DYNACONF", "ENV_FOR_DYNACONF"
48 )
49
50 for key in all_keys:
51 if key not in default_settings_values.keys():
52 continue
53
54 env_value = obj.get_environ(
55 env_switcher if key == "ENV_FOR_DYNACONF" else key,
56 default="_not_found",
57 )
58
59 if env_value != "_not_found":
60 obj.set(key, env_value, tomlfy=True)
61
62
63 def _run_hook_module(hook, hook_module, obj, key=None):
64 """Run the hook function from the settings obj.
65
66 given a hook name, a hook_module and a settings object
67 load the function and execute if found.
68 """
69 if hook in obj._loaded_hooks.get(hook_module.__file__, {}):
70 # already loaded
71 return
72
73 if hook_module and getattr(hook_module, "_error", False):
74 if not isinstance(hook_module._error, FileNotFoundError):
75 raise hook_module._error
76
77 hook_func = getattr(hook_module, hook, None)
78 if hook_func:
79 hook_dict = hook_func(obj.dynaconf.clone())
80 if hook_dict:
81 merge = hook_dict.pop(
82 "dynaconf_merge", hook_dict.pop("DYNACONF_MERGE", False)
83 )
84 if key and key in hook_dict:
85 obj.set(key, hook_dict[key], tomlfy=False, merge=merge)
86 elif not key:
87 obj.update(hook_dict, tomlfy=False, merge=merge)
88 obj._loaded_hooks[hook_module.__file__][hook] = hook_dict
89
90
91 def execute_hooks(
92 hook, obj, env=None, silent=True, key=None, modules=None, files=None
93 ):
94 """Execute dynaconf_hooks from module or filepath."""
95 if hook not in ["post"]:
96 raise ValueError(f"hook {hook} not supported yet.")
97
98 # try to load hooks using python module __name__
99 modules = modules or obj._loaded_py_modules
100 for loaded_module in modules:
101 hook_module_name = ".".join(
102 loaded_module.split(".")[:-1] + ["dynaconf_hooks"]
103 )
104 try:
105 hook_module = importlib.import_module(hook_module_name)
106 except (ImportError, TypeError):
107 # There was no hook on the same path as a python module
108 continue
109 else:
110 _run_hook_module(
111 hook=hook,
112 hook_module=hook_module,
113 obj=obj,
114 key=key,
115 )
116
117 # Try to load from python filename path
118 files = files or obj._loaded_files
119 for loaded_file in files:
120 hook_file = os.path.join(
121 os.path.dirname(loaded_file), "dynaconf_hooks.py"
122 )
123 hook_module = py_loader.import_from_filename(
124 obj, hook_file, silent=silent
125 )
126 if not hook_module:
127 # There was no hook on the same path as a python file
128 continue
129 _run_hook_module(
130 hook=hook,
131 hook_module=hook_module,
132 obj=obj,
133 key=key,
134 )
135
136
137 def settings_loader(
138 obj, settings_module=None, env=None, silent=True, key=None, filename=None
139 ):
140 """Loads from defined settings module
141
142 :param obj: A dynaconf instance
143 :param settings_module: A path or a list of paths e.g settings.toml
144 :param env: Env to look for data defaults: development
145 :param silent: Boolean to raise loading errors
146 :param key: Load a single key if provided
147 :param filename: optional filename to override the settings_module
148 """
149 if filename is None:
150 settings_module = settings_module or obj.settings_module
151 if not settings_module: # pragma: no cover
152 return
153 files = ensure_a_list(settings_module)
154 else:
155 files = ensure_a_list(filename)
156
157 files.extend(ensure_a_list(obj.get("SECRETS_FOR_DYNACONF", None)))
158
159 found_files = []
160 modules_names = []
161 for item in files:
162 item = str(item) # Ensure str in case of LocalPath/Path is passed.
163 if item.endswith(ct.ALL_EXTENSIONS + (".py",)):
164 p_root = obj._root_path or (
165 os.path.dirname(found_files[0]) if found_files else None
166 )
167 found = obj.find_file(item, project_root=p_root)
168 if found:
169 found_files.append(found)
170 else:
171 # a bare python module name w/o extension
172 modules_names.append(item)
173
174 enabled_core_loaders = [
175 item.upper() for item in obj.get("CORE_LOADERS_FOR_DYNACONF") or []
176 ]
177
178 # add `.local.` to found_files list to search for local files.
179 found_files.extend(
180 [
181 get_local_filename(item)
182 for item in found_files
183 if ".local." not in str(item)
184 ]
185 )
186
187 for mod_file in modules_names + found_files:
188 # can be set to multiple files settings.py,settings.yaml,...
189
190 # Cascade all loaders
191 loaders = [
192 {"ext": ct.YAML_EXTENSIONS, "name": "YAML", "loader": yaml_loader},
193 {"ext": ct.TOML_EXTENSIONS, "name": "TOML", "loader": toml_loader},
194 {"ext": ct.INI_EXTENSIONS, "name": "INI", "loader": ini_loader},
195 {"ext": ct.JSON_EXTENSIONS, "name": "JSON", "loader": json_loader},
196 ]
197
198 for loader in loaders:
199 if loader["name"] not in enabled_core_loaders:
200 continue
201
202 if mod_file.endswith(loader["ext"]):
203 loader["loader"].load(
204 obj, filename=mod_file, env=env, silent=silent, key=key
205 )
206 continue
207
208 if mod_file.endswith(ct.ALL_EXTENSIONS):
209 continue
210
211 if "PY" not in enabled_core_loaders:
212 # pyloader is disabled
213 continue
214
215 # must be Python file or module
216 # load from default defined module settings.py or .secrets.py if exists
217 py_loader.load(obj, mod_file, key=key)
218
219 # load from the current env e.g: development_settings.py
220 env = env or obj.current_env
221 if mod_file.endswith(".py"):
222 if ".secrets.py" == mod_file:
223 tmpl = ".{0}_{1}{2}"
224 mod_file = "secrets.py"
225 else:
226 tmpl = "{0}_{1}{2}"
227
228 dirname = os.path.dirname(mod_file)
229 filename, extension = os.path.splitext(os.path.basename(mod_file))
230 new_filename = tmpl.format(env.lower(), filename, extension)
231 env_mod_file = os.path.join(dirname, new_filename)
232 global_filename = tmpl.format("global", filename, extension)
233 global_mod_file = os.path.join(dirname, global_filename)
234 else:
235 env_mod_file = f"{env.lower()}_{mod_file}"
236 global_mod_file = f"global_{mod_file}"
237
238 py_loader.load(
239 obj,
240 env_mod_file,
241 identifier=f"py_{env.upper()}",
242 silent=True,
243 key=key,
244 )
245
246 # load from global_settings.py
247 py_loader.load(
248 obj, global_mod_file, identifier="py_global", silent=True, key=key
249 )
250
251
252 def enable_external_loaders(obj):
253 """Enable external service loaders like `VAULT_` and `REDIS_`
254 looks forenv variables like `REDIS_ENABLED_FOR_DYNACONF`
255 """
256 for name, loader in ct.EXTERNAL_LOADERS.items():
257 enabled = getattr(obj, f"{name.upper()}_ENABLED_FOR_DYNACONF", False)
258 if (
259 enabled
260 and enabled not in false_values
261 and loader not in obj.LOADERS_FOR_DYNACONF
262 ): # noqa
263 obj.LOADERS_FOR_DYNACONF.insert(0, loader)
264
265
266 def write(filename, data, env=None):
267 """Writes `data` to `filename` infers format by file extension."""
268 loader_name = f"{filename.rpartition('.')[-1]}_loader"
269 loader = globals().get(loader_name)
270 if not loader:
271 raise OSError(f"{loader_name} cannot be found.")
272
273 data = DynaBox(data, box_settings={}).to_dict()
274 if loader is not py_loader and env and env not in data:
275 data = {env: data}
276
277 loader.write(filename, data, merge=False)
278
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dynaconf/loaders/__init__.py b/dynaconf/loaders/__init__.py
--- a/dynaconf/loaders/__init__.py
+++ b/dynaconf/loaders/__init__.py
@@ -263,7 +263,7 @@
obj.LOADERS_FOR_DYNACONF.insert(0, loader)
-def write(filename, data, env=None):
+def write(filename, data, env=None, merge=False):
"""Writes `data` to `filename` infers format by file extension."""
loader_name = f"{filename.rpartition('.')[-1]}_loader"
loader = globals().get(loader_name)
@@ -274,4 +274,4 @@
if loader is not py_loader and env and env not in data:
data = {env: data}
- loader.write(filename, data, merge=False)
+ loader.write(filename, data, merge=merge)
|
{"golden_diff": "diff --git a/dynaconf/loaders/__init__.py b/dynaconf/loaders/__init__.py\n--- a/dynaconf/loaders/__init__.py\n+++ b/dynaconf/loaders/__init__.py\n@@ -263,7 +263,7 @@\n obj.LOADERS_FOR_DYNACONF.insert(0, loader)\n \n \n-def write(filename, data, env=None):\n+def write(filename, data, env=None, merge=False):\n \"\"\"Writes `data` to `filename` infers format by file extension.\"\"\"\n loader_name = f\"{filename.rpartition('.')[-1]}_loader\"\n loader = globals().get(loader_name)\n@@ -274,4 +274,4 @@\n if loader is not py_loader and env and env not in data:\n data = {env: data}\n \n- loader.write(filename, data, merge=False)\n+ loader.write(filename, data, merge=merge)\n", "issue": "[bug][Documentation] Exporting: write() got an unexpected keyword argument 'merge'\n**Describe the bug**\r\nFollowing the example on the documentation to [export](https://www.dynaconf.com/advanced/#exporting) Dynaconf data to a file raises an exception with the `merge` argument\r\n\r\n**To Reproduce**\r\n~~~Python\r\nloaders.write(\"/a/b/c\", DynaBox(config).to_dict(), merge=False)\r\n~~~\r\n\r\n**Expected behavior**\r\nThe file should have been written\r\n\r\n**Actual Behavior**\r\n`TypeError: write() got an unexpected keyword argument 'merge'`\r\n\r\nJust a quick documentation fix,\r\nthanks !\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport importlib\nimport os\n\nfrom dynaconf import constants as ct\nfrom dynaconf import default_settings\nfrom dynaconf.loaders import ini_loader\nfrom dynaconf.loaders import json_loader\nfrom dynaconf.loaders import py_loader\nfrom dynaconf.loaders import toml_loader\nfrom dynaconf.loaders import yaml_loader\nfrom dynaconf.utils import deduplicate\nfrom dynaconf.utils import ensure_a_list\nfrom dynaconf.utils.boxing import DynaBox\nfrom dynaconf.utils.files import get_local_filename\nfrom dynaconf.utils.parse_conf import false_values\n\n\ndef default_loader(obj, defaults=None):\n \"\"\"Loads default settings and check if there are overridings\n exported as environment variables\"\"\"\n defaults = defaults or {}\n default_settings_values = {\n key: value\n for key, value in default_settings.__dict__.items() # noqa\n if key.isupper()\n }\n\n all_keys = deduplicate(\n list(defaults.keys()) + list(default_settings_values.keys())\n )\n\n for key in all_keys:\n if not obj.exists(key):\n value = defaults.get(key, default_settings_values.get(key))\n obj.set(key, value)\n\n # start dotenv to get default env vars from there\n # check overrides in env vars\n if obj.get(\"load_dotenv\") is True:\n default_settings.start_dotenv(obj)\n\n # Deal with cases where a custom ENV_SWITCHER_IS_PROVIDED\n # Example: Flask and Django Extensions\n env_switcher = defaults.get(\n \"ENV_SWITCHER_FOR_DYNACONF\", \"ENV_FOR_DYNACONF\"\n )\n\n for key in all_keys:\n if key not in default_settings_values.keys():\n continue\n\n env_value = obj.get_environ(\n env_switcher if key == \"ENV_FOR_DYNACONF\" else key,\n default=\"_not_found\",\n )\n\n if env_value != \"_not_found\":\n obj.set(key, env_value, tomlfy=True)\n\n\ndef _run_hook_module(hook, hook_module, obj, key=None):\n \"\"\"Run the hook function from the settings obj.\n\n given a hook name, a hook_module and a settings object\n load the function and execute if found.\n \"\"\"\n if hook in obj._loaded_hooks.get(hook_module.__file__, {}):\n # already loaded\n return\n\n if hook_module and getattr(hook_module, \"_error\", False):\n if not isinstance(hook_module._error, FileNotFoundError):\n raise 
hook_module._error\n\n hook_func = getattr(hook_module, hook, None)\n if hook_func:\n hook_dict = hook_func(obj.dynaconf.clone())\n if hook_dict:\n merge = hook_dict.pop(\n \"dynaconf_merge\", hook_dict.pop(\"DYNACONF_MERGE\", False)\n )\n if key and key in hook_dict:\n obj.set(key, hook_dict[key], tomlfy=False, merge=merge)\n elif not key:\n obj.update(hook_dict, tomlfy=False, merge=merge)\n obj._loaded_hooks[hook_module.__file__][hook] = hook_dict\n\n\ndef execute_hooks(\n hook, obj, env=None, silent=True, key=None, modules=None, files=None\n):\n \"\"\"Execute dynaconf_hooks from module or filepath.\"\"\"\n if hook not in [\"post\"]:\n raise ValueError(f\"hook {hook} not supported yet.\")\n\n # try to load hooks using python module __name__\n modules = modules or obj._loaded_py_modules\n for loaded_module in modules:\n hook_module_name = \".\".join(\n loaded_module.split(\".\")[:-1] + [\"dynaconf_hooks\"]\n )\n try:\n hook_module = importlib.import_module(hook_module_name)\n except (ImportError, TypeError):\n # There was no hook on the same path as a python module\n continue\n else:\n _run_hook_module(\n hook=hook,\n hook_module=hook_module,\n obj=obj,\n key=key,\n )\n\n # Try to load from python filename path\n files = files or obj._loaded_files\n for loaded_file in files:\n hook_file = os.path.join(\n os.path.dirname(loaded_file), \"dynaconf_hooks.py\"\n )\n hook_module = py_loader.import_from_filename(\n obj, hook_file, silent=silent\n )\n if not hook_module:\n # There was no hook on the same path as a python file\n continue\n _run_hook_module(\n hook=hook,\n hook_module=hook_module,\n obj=obj,\n key=key,\n )\n\n\ndef settings_loader(\n obj, settings_module=None, env=None, silent=True, key=None, filename=None\n):\n \"\"\"Loads from defined settings module\n\n :param obj: A dynaconf instance\n :param settings_module: A path or a list of paths e.g settings.toml\n :param env: Env to look for data defaults: development\n :param silent: Boolean to raise loading errors\n :param key: Load a single key if provided\n :param filename: optional filename to override the settings_module\n \"\"\"\n if filename is None:\n settings_module = settings_module or obj.settings_module\n if not settings_module: # pragma: no cover\n return\n files = ensure_a_list(settings_module)\n else:\n files = ensure_a_list(filename)\n\n files.extend(ensure_a_list(obj.get(\"SECRETS_FOR_DYNACONF\", None)))\n\n found_files = []\n modules_names = []\n for item in files:\n item = str(item) # Ensure str in case of LocalPath/Path is passed.\n if item.endswith(ct.ALL_EXTENSIONS + (\".py\",)):\n p_root = obj._root_path or (\n os.path.dirname(found_files[0]) if found_files else None\n )\n found = obj.find_file(item, project_root=p_root)\n if found:\n found_files.append(found)\n else:\n # a bare python module name w/o extension\n modules_names.append(item)\n\n enabled_core_loaders = [\n item.upper() for item in obj.get(\"CORE_LOADERS_FOR_DYNACONF\") or []\n ]\n\n # add `.local.` to found_files list to search for local files.\n found_files.extend(\n [\n get_local_filename(item)\n for item in found_files\n if \".local.\" not in str(item)\n ]\n )\n\n for mod_file in modules_names + found_files:\n # can be set to multiple files settings.py,settings.yaml,...\n\n # Cascade all loaders\n loaders = [\n {\"ext\": ct.YAML_EXTENSIONS, \"name\": \"YAML\", \"loader\": yaml_loader},\n {\"ext\": ct.TOML_EXTENSIONS, \"name\": \"TOML\", \"loader\": toml_loader},\n {\"ext\": ct.INI_EXTENSIONS, \"name\": \"INI\", \"loader\": ini_loader},\n {\"ext\": 
ct.JSON_EXTENSIONS, \"name\": \"JSON\", \"loader\": json_loader},\n ]\n\n for loader in loaders:\n if loader[\"name\"] not in enabled_core_loaders:\n continue\n\n if mod_file.endswith(loader[\"ext\"]):\n loader[\"loader\"].load(\n obj, filename=mod_file, env=env, silent=silent, key=key\n )\n continue\n\n if mod_file.endswith(ct.ALL_EXTENSIONS):\n continue\n\n if \"PY\" not in enabled_core_loaders:\n # pyloader is disabled\n continue\n\n # must be Python file or module\n # load from default defined module settings.py or .secrets.py if exists\n py_loader.load(obj, mod_file, key=key)\n\n # load from the current env e.g: development_settings.py\n env = env or obj.current_env\n if mod_file.endswith(\".py\"):\n if \".secrets.py\" == mod_file:\n tmpl = \".{0}_{1}{2}\"\n mod_file = \"secrets.py\"\n else:\n tmpl = \"{0}_{1}{2}\"\n\n dirname = os.path.dirname(mod_file)\n filename, extension = os.path.splitext(os.path.basename(mod_file))\n new_filename = tmpl.format(env.lower(), filename, extension)\n env_mod_file = os.path.join(dirname, new_filename)\n global_filename = tmpl.format(\"global\", filename, extension)\n global_mod_file = os.path.join(dirname, global_filename)\n else:\n env_mod_file = f\"{env.lower()}_{mod_file}\"\n global_mod_file = f\"global_{mod_file}\"\n\n py_loader.load(\n obj,\n env_mod_file,\n identifier=f\"py_{env.upper()}\",\n silent=True,\n key=key,\n )\n\n # load from global_settings.py\n py_loader.load(\n obj, global_mod_file, identifier=\"py_global\", silent=True, key=key\n )\n\n\ndef enable_external_loaders(obj):\n \"\"\"Enable external service loaders like `VAULT_` and `REDIS_`\n looks forenv variables like `REDIS_ENABLED_FOR_DYNACONF`\n \"\"\"\n for name, loader in ct.EXTERNAL_LOADERS.items():\n enabled = getattr(obj, f\"{name.upper()}_ENABLED_FOR_DYNACONF\", False)\n if (\n enabled\n and enabled not in false_values\n and loader not in obj.LOADERS_FOR_DYNACONF\n ): # noqa\n obj.LOADERS_FOR_DYNACONF.insert(0, loader)\n\n\ndef write(filename, data, env=None):\n \"\"\"Writes `data` to `filename` infers format by file extension.\"\"\"\n loader_name = f\"{filename.rpartition('.')[-1]}_loader\"\n loader = globals().get(loader_name)\n if not loader:\n raise OSError(f\"{loader_name} cannot be found.\")\n\n data = DynaBox(data, box_settings={}).to_dict()\n if loader is not py_loader and env and env not in data:\n data = {env: data}\n\n loader.write(filename, data, merge=False)\n", "path": "dynaconf/loaders/__init__.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport importlib\nimport os\n\nfrom dynaconf import constants as ct\nfrom dynaconf import default_settings\nfrom dynaconf.loaders import ini_loader\nfrom dynaconf.loaders import json_loader\nfrom dynaconf.loaders import py_loader\nfrom dynaconf.loaders import toml_loader\nfrom dynaconf.loaders import yaml_loader\nfrom dynaconf.utils import deduplicate\nfrom dynaconf.utils import ensure_a_list\nfrom dynaconf.utils.boxing import DynaBox\nfrom dynaconf.utils.files import get_local_filename\nfrom dynaconf.utils.parse_conf import false_values\n\n\ndef default_loader(obj, defaults=None):\n \"\"\"Loads default settings and check if there are overridings\n exported as environment variables\"\"\"\n defaults = defaults or {}\n default_settings_values = {\n key: value\n for key, value in default_settings.__dict__.items() # noqa\n if key.isupper()\n }\n\n all_keys = deduplicate(\n list(defaults.keys()) + list(default_settings_values.keys())\n )\n\n for key in all_keys:\n if not obj.exists(key):\n value = 
defaults.get(key, default_settings_values.get(key))\n obj.set(key, value)\n\n # start dotenv to get default env vars from there\n # check overrides in env vars\n if obj.get(\"load_dotenv\") is True:\n default_settings.start_dotenv(obj)\n\n # Deal with cases where a custom ENV_SWITCHER_IS_PROVIDED\n # Example: Flask and Django Extensions\n env_switcher = defaults.get(\n \"ENV_SWITCHER_FOR_DYNACONF\", \"ENV_FOR_DYNACONF\"\n )\n\n for key in all_keys:\n if key not in default_settings_values.keys():\n continue\n\n env_value = obj.get_environ(\n env_switcher if key == \"ENV_FOR_DYNACONF\" else key,\n default=\"_not_found\",\n )\n\n if env_value != \"_not_found\":\n obj.set(key, env_value, tomlfy=True)\n\n\ndef _run_hook_module(hook, hook_module, obj, key=None):\n \"\"\"Run the hook function from the settings obj.\n\n given a hook name, a hook_module and a settings object\n load the function and execute if found.\n \"\"\"\n if hook in obj._loaded_hooks.get(hook_module.__file__, {}):\n # already loaded\n return\n\n if hook_module and getattr(hook_module, \"_error\", False):\n if not isinstance(hook_module._error, FileNotFoundError):\n raise hook_module._error\n\n hook_func = getattr(hook_module, hook, None)\n if hook_func:\n hook_dict = hook_func(obj.dynaconf.clone())\n if hook_dict:\n merge = hook_dict.pop(\n \"dynaconf_merge\", hook_dict.pop(\"DYNACONF_MERGE\", False)\n )\n if key and key in hook_dict:\n obj.set(key, hook_dict[key], tomlfy=False, merge=merge)\n elif not key:\n obj.update(hook_dict, tomlfy=False, merge=merge)\n obj._loaded_hooks[hook_module.__file__][hook] = hook_dict\n\n\ndef execute_hooks(\n hook, obj, env=None, silent=True, key=None, modules=None, files=None\n):\n \"\"\"Execute dynaconf_hooks from module or filepath.\"\"\"\n if hook not in [\"post\"]:\n raise ValueError(f\"hook {hook} not supported yet.\")\n\n # try to load hooks using python module __name__\n modules = modules or obj._loaded_py_modules\n for loaded_module in modules:\n hook_module_name = \".\".join(\n loaded_module.split(\".\")[:-1] + [\"dynaconf_hooks\"]\n )\n try:\n hook_module = importlib.import_module(hook_module_name)\n except (ImportError, TypeError):\n # There was no hook on the same path as a python module\n continue\n else:\n _run_hook_module(\n hook=hook,\n hook_module=hook_module,\n obj=obj,\n key=key,\n )\n\n # Try to load from python filename path\n files = files or obj._loaded_files\n for loaded_file in files:\n hook_file = os.path.join(\n os.path.dirname(loaded_file), \"dynaconf_hooks.py\"\n )\n hook_module = py_loader.import_from_filename(\n obj, hook_file, silent=silent\n )\n if not hook_module:\n # There was no hook on the same path as a python file\n continue\n _run_hook_module(\n hook=hook,\n hook_module=hook_module,\n obj=obj,\n key=key,\n )\n\n\ndef settings_loader(\n obj, settings_module=None, env=None, silent=True, key=None, filename=None\n):\n \"\"\"Loads from defined settings module\n\n :param obj: A dynaconf instance\n :param settings_module: A path or a list of paths e.g settings.toml\n :param env: Env to look for data defaults: development\n :param silent: Boolean to raise loading errors\n :param key: Load a single key if provided\n :param filename: optional filename to override the settings_module\n \"\"\"\n if filename is None:\n settings_module = settings_module or obj.settings_module\n if not settings_module: # pragma: no cover\n return\n files = ensure_a_list(settings_module)\n else:\n files = ensure_a_list(filename)\n\n 
files.extend(ensure_a_list(obj.get(\"SECRETS_FOR_DYNACONF\", None)))\n\n found_files = []\n modules_names = []\n for item in files:\n item = str(item) # Ensure str in case of LocalPath/Path is passed.\n if item.endswith(ct.ALL_EXTENSIONS + (\".py\",)):\n p_root = obj._root_path or (\n os.path.dirname(found_files[0]) if found_files else None\n )\n found = obj.find_file(item, project_root=p_root)\n if found:\n found_files.append(found)\n else:\n # a bare python module name w/o extension\n modules_names.append(item)\n\n enabled_core_loaders = [\n item.upper() for item in obj.get(\"CORE_LOADERS_FOR_DYNACONF\") or []\n ]\n\n # add `.local.` to found_files list to search for local files.\n found_files.extend(\n [\n get_local_filename(item)\n for item in found_files\n if \".local.\" not in str(item)\n ]\n )\n\n for mod_file in modules_names + found_files:\n # can be set to multiple files settings.py,settings.yaml,...\n\n # Cascade all loaders\n loaders = [\n {\"ext\": ct.YAML_EXTENSIONS, \"name\": \"YAML\", \"loader\": yaml_loader},\n {\"ext\": ct.TOML_EXTENSIONS, \"name\": \"TOML\", \"loader\": toml_loader},\n {\"ext\": ct.INI_EXTENSIONS, \"name\": \"INI\", \"loader\": ini_loader},\n {\"ext\": ct.JSON_EXTENSIONS, \"name\": \"JSON\", \"loader\": json_loader},\n ]\n\n for loader in loaders:\n if loader[\"name\"] not in enabled_core_loaders:\n continue\n\n if mod_file.endswith(loader[\"ext\"]):\n loader[\"loader\"].load(\n obj, filename=mod_file, env=env, silent=silent, key=key\n )\n continue\n\n if mod_file.endswith(ct.ALL_EXTENSIONS):\n continue\n\n if \"PY\" not in enabled_core_loaders:\n # pyloader is disabled\n continue\n\n # must be Python file or module\n # load from default defined module settings.py or .secrets.py if exists\n py_loader.load(obj, mod_file, key=key)\n\n # load from the current env e.g: development_settings.py\n env = env or obj.current_env\n if mod_file.endswith(\".py\"):\n if \".secrets.py\" == mod_file:\n tmpl = \".{0}_{1}{2}\"\n mod_file = \"secrets.py\"\n else:\n tmpl = \"{0}_{1}{2}\"\n\n dirname = os.path.dirname(mod_file)\n filename, extension = os.path.splitext(os.path.basename(mod_file))\n new_filename = tmpl.format(env.lower(), filename, extension)\n env_mod_file = os.path.join(dirname, new_filename)\n global_filename = tmpl.format(\"global\", filename, extension)\n global_mod_file = os.path.join(dirname, global_filename)\n else:\n env_mod_file = f\"{env.lower()}_{mod_file}\"\n global_mod_file = f\"global_{mod_file}\"\n\n py_loader.load(\n obj,\n env_mod_file,\n identifier=f\"py_{env.upper()}\",\n silent=True,\n key=key,\n )\n\n # load from global_settings.py\n py_loader.load(\n obj, global_mod_file, identifier=\"py_global\", silent=True, key=key\n )\n\n\ndef enable_external_loaders(obj):\n \"\"\"Enable external service loaders like `VAULT_` and `REDIS_`\n looks forenv variables like `REDIS_ENABLED_FOR_DYNACONF`\n \"\"\"\n for name, loader in ct.EXTERNAL_LOADERS.items():\n enabled = getattr(obj, f\"{name.upper()}_ENABLED_FOR_DYNACONF\", False)\n if (\n enabled\n and enabled not in false_values\n and loader not in obj.LOADERS_FOR_DYNACONF\n ): # noqa\n obj.LOADERS_FOR_DYNACONF.insert(0, loader)\n\n\ndef write(filename, data, env=None, merge=False):\n \"\"\"Writes `data` to `filename` infers format by file extension.\"\"\"\n loader_name = f\"{filename.rpartition('.')[-1]}_loader\"\n loader = globals().get(loader_name)\n if not loader:\n raise OSError(f\"{loader_name} cannot be found.\")\n\n data = DynaBox(data, box_settings={}).to_dict()\n if loader is not py_loader 
and env and env not in data:\n data = {env: data}\n\n loader.write(filename, data, merge=merge)\n", "path": "dynaconf/loaders/__init__.py"}]}
| 3,297 | 205 |
gh_patches_debug_34113
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-1777
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError: initializer for ctype 'HMAC_CTX *
I sometimes get the below exception when using Fernet within an Apache mod_wsgi runtime. It does not occur when using a non-Apache environment (Eventlet).
Is this cryptography module thread safe? Or is there perhaps an issue in how I used the module?
TypeError: initializer for ctype 'HMAC_CTX *' must be a pointer to same type, not cdata 'HMAC_CTX *'
Cryptography version: 0.7.2
Sample code:
```
def __init__(self):
key = self._load_key()
self.fernet = Fernet(key)
def encode(self, input, errors='strict'):
return (self.fernet.encrypt(input), len(input))
def decode(self, input, errors='strict'):
return (self.fernet.decrypt(input), len(input))
def _load_key(self):
# Load the key from a file
username = pwd.getpwuid(os.getuid()).pw_name
filename = '/etc/fernet/%s.key' % username
try:
with open(filename) as f:
key = f.read()
return key
except IOError:
raise UnicodeEncodeError()
```
```
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File "/usr/lib/python2.7/encodings/fernet.py", line 19, in decode
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi return (self.fernet.decrypt(input), len(input))
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File "/usr/local/lib/python2.7/dist-packages/cryptography/fernet.py", line 96, in decrypt
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/primitives/hmac.py", line 32, in __init__
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi self._ctx = self._backend.create_hmac_ctx(key, self.algorithm)
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/multibackend.py", line 99, in create_hmac_ctx
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi return b.create_hmac_ctx(key, algorithm)
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/backend.py", line 140, in create_hmac_ctx
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi return _HMACContext(self, key, algorithm)
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/hmac.py", line 24, in __init__
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi self._backend._lib.HMAC_CTX_init(ctx)
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi TypeError: initializer for ctype 'HMAC_CTX *' must be a pointer to same type, not cdata 'HMAC_CTX *'
2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/bindings/commoncrypto/binding.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 from cryptography.hazmat.bindings.utils import (
8 build_ffi_for_binding, load_library_for_binding,
9 )
10
11
12 class Binding(object):
13 """
14 CommonCrypto API wrapper.
15 """
16 _module_prefix = "cryptography.hazmat.bindings.commoncrypto."
17 _modules = [
18 "cf",
19 "common_digest",
20 "common_hmac",
21 "common_key_derivation",
22 "common_cryptor",
23 "common_symmetric_key_wrap",
24 "secimport",
25 "secitem",
26 "seckey",
27 "seckeychain",
28 "sectransform",
29 ]
30
31 ffi = build_ffi_for_binding(
32 module_prefix=_module_prefix,
33 modules=_modules,
34 extra_link_args=[
35 "-framework", "Security", "-framework", "CoreFoundation"
36 ],
37 )
38 lib = None
39
40 def __init__(self):
41 self._ensure_ffi_initialized()
42
43 @classmethod
44 def _ensure_ffi_initialized(cls):
45 if cls.lib is not None:
46 return
47
48 cls.lib = load_library_for_binding(
49 cls.ffi,
50 module_prefix=cls._module_prefix,
51 modules=cls._modules,
52 )
53
```
Path: `src/cryptography/hazmat/bindings/openssl/binding.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import os
8 import sys
9 import threading
10
11 from cryptography.hazmat.bindings.utils import (
12 build_ffi_for_binding, load_library_for_binding,
13 )
14
15
16 _OSX_PRE_INCLUDE = """
17 #ifdef __APPLE__
18 #include <AvailabilityMacros.h>
19 #define __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \
20 DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
21 #undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
22 #define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
23 #endif
24 """
25
26 _OSX_POST_INCLUDE = """
27 #ifdef __APPLE__
28 #undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
29 #define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \
30 __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER
31 #endif
32 """
33
34
35 def _get_libraries(platform):
36 # OpenSSL goes by a different library name on different operating systems.
37 if platform != "win32":
38 # In some circumstances, the order in which these libs are
39 # specified on the linker command-line is significant;
40 # libssl must come before libcrypto
41 # (http://marc.info/?l=openssl-users&m=135361825921871)
42 return ["ssl", "crypto"]
43 else:
44 link_type = os.environ.get("PYCA_WINDOWS_LINK_TYPE", "static")
45 return _get_windows_libraries(link_type)
46
47
48 def _get_windows_libraries(link_type):
49 if link_type == "dynamic":
50 return ["libeay32", "ssleay32", "advapi32"]
51 elif link_type == "static" or link_type == "":
52 return ["libeay32mt", "ssleay32mt", "advapi32",
53 "crypt32", "gdi32", "user32", "ws2_32"]
54 else:
55 raise ValueError(
56 "PYCA_WINDOWS_LINK_TYPE must be 'static' or 'dynamic'"
57 )
58
59
60 class Binding(object):
61 """
62 OpenSSL API wrapper.
63 """
64 _module_prefix = "cryptography.hazmat.bindings.openssl."
65 _modules = [
66 "aes",
67 "asn1",
68 "bignum",
69 "bio",
70 "cmac",
71 "cms",
72 "conf",
73 "crypto",
74 "dh",
75 "dsa",
76 "ec",
77 "ecdh",
78 "ecdsa",
79 "engine",
80 "err",
81 "evp",
82 "hmac",
83 "nid",
84 "objects",
85 "opensslv",
86 "osrandom_engine",
87 "pem",
88 "pkcs7",
89 "pkcs12",
90 "rand",
91 "rsa",
92 "ssl",
93 "x509",
94 "x509name",
95 "x509v3",
96 "x509_vfy"
97 ]
98
99 _locks = None
100 _lock_cb_handle = None
101 _lock_init_lock = threading.Lock()
102
103 ffi = build_ffi_for_binding(
104 module_prefix=_module_prefix,
105 modules=_modules,
106 pre_include=_OSX_PRE_INCLUDE,
107 post_include=_OSX_POST_INCLUDE,
108 libraries=_get_libraries(sys.platform)
109 )
110 lib = None
111
112 def __init__(self):
113 self._ensure_ffi_initialized()
114
115 @classmethod
116 def _ensure_ffi_initialized(cls):
117 if cls.lib is not None:
118 return
119
120 cls.lib = load_library_for_binding(
121 cls.ffi,
122 cls._module_prefix,
123 cls._modules,
124 )
125
126 res = cls.lib.Cryptography_add_osrandom_engine()
127 assert res != 0
128
129 @classmethod
130 def init_static_locks(cls):
131 with cls._lock_init_lock:
132 cls._ensure_ffi_initialized()
133
134 if not cls._lock_cb_handle:
135 cls._lock_cb_handle = cls.ffi.callback(
136 "void(int, int, const char *, int)",
137 cls._lock_cb
138 )
139
140 # Use Python's implementation if available, importing _ssl triggers
141 # the setup for this.
142 __import__("_ssl")
143
144 if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:
145 return
146
147 # If nothing else has setup a locking callback already, we set up
148 # our own
149 num_locks = cls.lib.CRYPTO_num_locks()
150 cls._locks = [threading.Lock() for n in range(num_locks)]
151
152 cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)
153
154 @classmethod
155 def _lock_cb(cls, mode, n, file, line):
156 lock = cls._locks[n]
157
158 if mode & cls.lib.CRYPTO_LOCK:
159 lock.acquire()
160 elif mode & cls.lib.CRYPTO_UNLOCK:
161 lock.release()
162 else:
163 raise RuntimeError(
164 "Unknown lock mode {0}: lock={1}, file={2}, line={3}.".format(
165 mode, n, file, line
166 )
167 )
168
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cryptography/hazmat/bindings/commoncrypto/binding.py b/src/cryptography/hazmat/bindings/commoncrypto/binding.py
--- a/src/cryptography/hazmat/bindings/commoncrypto/binding.py
+++ b/src/cryptography/hazmat/bindings/commoncrypto/binding.py
@@ -4,6 +4,8 @@
from __future__ import absolute_import, division, print_function
+import threading
+
from cryptography.hazmat.bindings.utils import (
build_ffi_for_binding, load_library_for_binding,
)
@@ -36,6 +38,7 @@
],
)
lib = None
+ _init_lock = threading.Lock()
def __init__(self):
self._ensure_ffi_initialized()
@@ -45,8 +48,10 @@
if cls.lib is not None:
return
- cls.lib = load_library_for_binding(
- cls.ffi,
- module_prefix=cls._module_prefix,
- modules=cls._modules,
- )
+ with cls._init_lock:
+ if cls.lib is None:
+ cls.lib = load_library_for_binding(
+ cls.ffi,
+ module_prefix=cls._module_prefix,
+ modules=cls._modules,
+ )
diff --git a/src/cryptography/hazmat/bindings/openssl/binding.py b/src/cryptography/hazmat/bindings/openssl/binding.py
--- a/src/cryptography/hazmat/bindings/openssl/binding.py
+++ b/src/cryptography/hazmat/bindings/openssl/binding.py
@@ -98,6 +98,7 @@
_locks = None
_lock_cb_handle = None
+ _init_lock = threading.Lock()
_lock_init_lock = threading.Lock()
ffi = build_ffi_for_binding(
@@ -117,14 +118,16 @@
if cls.lib is not None:
return
- cls.lib = load_library_for_binding(
- cls.ffi,
- cls._module_prefix,
- cls._modules,
- )
+ with cls._init_lock:
+ if cls.lib is None:
+ cls.lib = load_library_for_binding(
+ cls.ffi,
+ cls._module_prefix,
+ cls._modules,
+ )
- res = cls.lib.Cryptography_add_osrandom_engine()
- assert res != 0
+ res = cls.lib.Cryptography_add_osrandom_engine()
+ assert res != 0
@classmethod
def init_static_locks(cls):
|
{"golden_diff": "diff --git a/src/cryptography/hazmat/bindings/commoncrypto/binding.py b/src/cryptography/hazmat/bindings/commoncrypto/binding.py\n--- a/src/cryptography/hazmat/bindings/commoncrypto/binding.py\n+++ b/src/cryptography/hazmat/bindings/commoncrypto/binding.py\n@@ -4,6 +4,8 @@\n \n from __future__ import absolute_import, division, print_function\n \n+import threading\n+\n from cryptography.hazmat.bindings.utils import (\n build_ffi_for_binding, load_library_for_binding,\n )\n@@ -36,6 +38,7 @@\n ],\n )\n lib = None\n+ _init_lock = threading.Lock()\n \n def __init__(self):\n self._ensure_ffi_initialized()\n@@ -45,8 +48,10 @@\n if cls.lib is not None:\n return\n \n- cls.lib = load_library_for_binding(\n- cls.ffi,\n- module_prefix=cls._module_prefix,\n- modules=cls._modules,\n- )\n+ with cls._init_lock:\n+ if cls.lib is None:\n+ cls.lib = load_library_for_binding(\n+ cls.ffi,\n+ module_prefix=cls._module_prefix,\n+ modules=cls._modules,\n+ )\ndiff --git a/src/cryptography/hazmat/bindings/openssl/binding.py b/src/cryptography/hazmat/bindings/openssl/binding.py\n--- a/src/cryptography/hazmat/bindings/openssl/binding.py\n+++ b/src/cryptography/hazmat/bindings/openssl/binding.py\n@@ -98,6 +98,7 @@\n \n _locks = None\n _lock_cb_handle = None\n+ _init_lock = threading.Lock()\n _lock_init_lock = threading.Lock()\n \n ffi = build_ffi_for_binding(\n@@ -117,14 +118,16 @@\n if cls.lib is not None:\n return\n \n- cls.lib = load_library_for_binding(\n- cls.ffi,\n- cls._module_prefix,\n- cls._modules,\n- )\n+ with cls._init_lock:\n+ if cls.lib is None:\n+ cls.lib = load_library_for_binding(\n+ cls.ffi,\n+ cls._module_prefix,\n+ cls._modules,\n+ )\n \n- res = cls.lib.Cryptography_add_osrandom_engine()\n- assert res != 0\n+ res = cls.lib.Cryptography_add_osrandom_engine()\n+ assert res != 0\n \n @classmethod\n def init_static_locks(cls):\n", "issue": "TypeError: initializer for ctype 'HMAC_CTX *\nI sometimes get the below exception when using Fernet within an Apache mod_wsgi runtime. It does not occur when using a non-Apache environment (Eventlet).\n\nIs this cryptography module thread safe? 
Or maybe is there an issue in how I used the module?\n\nTypeError: initializer for ctype 'HMAC_CTX *' must be a pointer to same type, not cdata 'HMAC_CTX *'\nCryptography version: 0.7.2\n\nSample code:\n\n```\ndef __init__(self):\n key = self._load_key()\n self.fernet = Fernet(key)\n\ndef encode(self, input, errors='strict'):\n return (self.fernet.encrypt(input), len(input))\n\ndef decode(self, input, errors='strict'):\n return (self.fernet.decrypt(input), len(input))\n\ndef _load_key(self):\n # Load the key from a file\n username = pwd.getpwuid(os.getuid()).pw_name\n filename = '/etc/fernet/%s.key' % username\n try:\n with open(filename) as f:\n key = f.read()\n return key\n except IOError:\n raise UnicodeEncodeError()\n```\n\n```\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File \"/usr/lib/python2.7/encodings/fernet.py\", line 19, in decode\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi return (self.fernet.decrypt(input), len(input))\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File \"/usr/local/lib/python2.7/dist-packages/cryptography/fernet.py\", line 96, in decrypt\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi h = HMAC(self._signing_key, hashes.SHA256(), backend=self._backend)\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File \"/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/primitives/hmac.py\", line 32, in __init__\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi self._ctx = self._backend.create_hmac_ctx(key, self.algorithm)\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File \"/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/multibackend.py\", line 99, in create_hmac_ctx\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi return b.create_hmac_ctx(key, algorithm)\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File \"/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/backend.py\", line 140, in create_hmac_ctx\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi return _HMACContext(self, key, algorithm)\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi File \"/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/openssl/hmac.py\", line 24, in __init__\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi self._backend._lib.HMAC_CTX_init(ctx)\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi TypeError: initializer for ctype 'HMAC_CTX *' must be a pointer to same type, not cdata 'HMAC_CTX *'\n2015-03-18 22:55:08.512 6509 TRACE keystone.common.wsgi \n```\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography.hazmat.bindings.utils import (\n build_ffi_for_binding, load_library_for_binding,\n)\n\n\nclass Binding(object):\n \"\"\"\n CommonCrypto API wrapper.\n \"\"\"\n _module_prefix = \"cryptography.hazmat.bindings.commoncrypto.\"\n _modules = [\n \"cf\",\n \"common_digest\",\n \"common_hmac\",\n \"common_key_derivation\",\n \"common_cryptor\",\n \"common_symmetric_key_wrap\",\n \"secimport\",\n \"secitem\",\n \"seckey\",\n \"seckeychain\",\n \"sectransform\",\n ]\n\n ffi = build_ffi_for_binding(\n module_prefix=_module_prefix,\n modules=_modules,\n extra_link_args=[\n \"-framework\", \"Security\", \"-framework\", \"CoreFoundation\"\n ],\n )\n lib = None\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n if cls.lib is not None:\n return\n\n cls.lib = load_library_for_binding(\n cls.ffi,\n module_prefix=cls._module_prefix,\n modules=cls._modules,\n )\n", "path": "src/cryptography/hazmat/bindings/commoncrypto/binding.py"}, {"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\nimport threading\n\nfrom cryptography.hazmat.bindings.utils import (\n build_ffi_for_binding, load_library_for_binding,\n)\n\n\n_OSX_PRE_INCLUDE = \"\"\"\n#ifdef __APPLE__\n#include <AvailabilityMacros.h>\n#define __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \\\n DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#endif\n\"\"\"\n\n_OSX_POST_INCLUDE = \"\"\"\n#ifdef __APPLE__\n#undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \\\n __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#endif\n\"\"\"\n\n\ndef _get_libraries(platform):\n # OpenSSL goes by a different library name on different operating systems.\n if platform != \"win32\":\n # In some circumstances, the order in which these libs are\n # specified on the linker command-line is significant;\n # libssl must come before libcrypto\n # (http://marc.info/?l=openssl-users&m=135361825921871)\n return [\"ssl\", \"crypto\"]\n else:\n link_type = os.environ.get(\"PYCA_WINDOWS_LINK_TYPE\", \"static\")\n return _get_windows_libraries(link_type)\n\n\ndef _get_windows_libraries(link_type):\n if link_type == \"dynamic\":\n return [\"libeay32\", \"ssleay32\", \"advapi32\"]\n elif link_type == \"static\" or link_type == \"\":\n return [\"libeay32mt\", \"ssleay32mt\", \"advapi32\",\n \"crypt32\", \"gdi32\", \"user32\", \"ws2_32\"]\n else:\n raise ValueError(\n \"PYCA_WINDOWS_LINK_TYPE must be 'static' or 'dynamic'\"\n )\n\n\nclass Binding(object):\n \"\"\"\n OpenSSL API wrapper.\n \"\"\"\n _module_prefix = \"cryptography.hazmat.bindings.openssl.\"\n _modules = [\n \"aes\",\n \"asn1\",\n \"bignum\",\n \"bio\",\n \"cmac\",\n \"cms\",\n \"conf\",\n \"crypto\",\n \"dh\",\n \"dsa\",\n \"ec\",\n \"ecdh\",\n \"ecdsa\",\n \"engine\",\n \"err\",\n \"evp\",\n \"hmac\",\n \"nid\",\n \"objects\",\n \"opensslv\",\n \"osrandom_engine\",\n \"pem\",\n \"pkcs7\",\n \"pkcs12\",\n \"rand\",\n \"rsa\",\n \"ssl\",\n \"x509\",\n \"x509name\",\n \"x509v3\",\n \"x509_vfy\"\n ]\n\n 
_locks = None\n _lock_cb_handle = None\n _lock_init_lock = threading.Lock()\n\n ffi = build_ffi_for_binding(\n module_prefix=_module_prefix,\n modules=_modules,\n pre_include=_OSX_PRE_INCLUDE,\n post_include=_OSX_POST_INCLUDE,\n libraries=_get_libraries(sys.platform)\n )\n lib = None\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n if cls.lib is not None:\n return\n\n cls.lib = load_library_for_binding(\n cls.ffi,\n cls._module_prefix,\n cls._modules,\n )\n\n res = cls.lib.Cryptography_add_osrandom_engine()\n assert res != 0\n\n @classmethod\n def init_static_locks(cls):\n with cls._lock_init_lock:\n cls._ensure_ffi_initialized()\n\n if not cls._lock_cb_handle:\n cls._lock_cb_handle = cls.ffi.callback(\n \"void(int, int, const char *, int)\",\n cls._lock_cb\n )\n\n # Use Python's implementation if available, importing _ssl triggers\n # the setup for this.\n __import__(\"_ssl\")\n\n if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:\n return\n\n # If nothing else has setup a locking callback already, we set up\n # our own\n num_locks = cls.lib.CRYPTO_num_locks()\n cls._locks = [threading.Lock() for n in range(num_locks)]\n\n cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)\n\n @classmethod\n def _lock_cb(cls, mode, n, file, line):\n lock = cls._locks[n]\n\n if mode & cls.lib.CRYPTO_LOCK:\n lock.acquire()\n elif mode & cls.lib.CRYPTO_UNLOCK:\n lock.release()\n else:\n raise RuntimeError(\n \"Unknown lock mode {0}: lock={1}, file={2}, line={3}.\".format(\n mode, n, file, line\n )\n )\n", "path": "src/cryptography/hazmat/bindings/openssl/binding.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport threading\n\nfrom cryptography.hazmat.bindings.utils import (\n build_ffi_for_binding, load_library_for_binding,\n)\n\n\nclass Binding(object):\n \"\"\"\n CommonCrypto API wrapper.\n \"\"\"\n _module_prefix = \"cryptography.hazmat.bindings.commoncrypto.\"\n _modules = [\n \"cf\",\n \"common_digest\",\n \"common_hmac\",\n \"common_key_derivation\",\n \"common_cryptor\",\n \"common_symmetric_key_wrap\",\n \"secimport\",\n \"secitem\",\n \"seckey\",\n \"seckeychain\",\n \"sectransform\",\n ]\n\n ffi = build_ffi_for_binding(\n module_prefix=_module_prefix,\n modules=_modules,\n extra_link_args=[\n \"-framework\", \"Security\", \"-framework\", \"CoreFoundation\"\n ],\n )\n lib = None\n _init_lock = threading.Lock()\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n if cls.lib is not None:\n return\n\n with cls._init_lock:\n if cls.lib is None:\n cls.lib = load_library_for_binding(\n cls.ffi,\n module_prefix=cls._module_prefix,\n modules=cls._modules,\n )\n", "path": "src/cryptography/hazmat/bindings/commoncrypto/binding.py"}, {"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\nimport threading\n\nfrom cryptography.hazmat.bindings.utils import (\n build_ffi_for_binding, load_library_for_binding,\n)\n\n\n_OSX_PRE_INCLUDE = \"\"\"\n#ifdef __APPLE__\n#include <AvailabilityMacros.h>\n#define __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \\\n DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#endif\n\"\"\"\n\n_OSX_POST_INCLUDE = \"\"\"\n#ifdef __APPLE__\n#undef DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#define DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER \\\n __ORIG_DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER\n#endif\n\"\"\"\n\n\ndef _get_libraries(platform):\n # OpenSSL goes by a different library name on different operating systems.\n if platform != \"win32\":\n # In some circumstances, the order in which these libs are\n # specified on the linker command-line is significant;\n # libssl must come before libcrypto\n # (http://marc.info/?l=openssl-users&m=135361825921871)\n return [\"ssl\", \"crypto\"]\n else:\n link_type = os.environ.get(\"PYCA_WINDOWS_LINK_TYPE\", \"static\")\n return _get_windows_libraries(link_type)\n\n\ndef _get_windows_libraries(link_type):\n if link_type == \"dynamic\":\n return [\"libeay32\", \"ssleay32\", \"advapi32\"]\n elif link_type == \"static\" or link_type == \"\":\n return [\"libeay32mt\", \"ssleay32mt\", \"advapi32\",\n \"crypt32\", \"gdi32\", \"user32\", \"ws2_32\"]\n else:\n raise ValueError(\n \"PYCA_WINDOWS_LINK_TYPE must be 'static' or 'dynamic'\"\n )\n\n\nclass Binding(object):\n \"\"\"\n OpenSSL API wrapper.\n \"\"\"\n _module_prefix = \"cryptography.hazmat.bindings.openssl.\"\n _modules = [\n \"aes\",\n \"asn1\",\n \"bignum\",\n \"bio\",\n \"cmac\",\n \"cms\",\n \"conf\",\n \"crypto\",\n \"dh\",\n \"dsa\",\n \"ec\",\n \"ecdh\",\n \"ecdsa\",\n \"engine\",\n \"err\",\n \"evp\",\n \"hmac\",\n \"nid\",\n \"objects\",\n \"opensslv\",\n \"osrandom_engine\",\n \"pem\",\n \"pkcs7\",\n \"pkcs12\",\n \"rand\",\n \"rsa\",\n \"ssl\",\n \"x509\",\n \"x509name\",\n \"x509v3\",\n \"x509_vfy\"\n ]\n\n _locks = None\n _lock_cb_handle = None\n _init_lock = threading.Lock()\n _lock_init_lock = threading.Lock()\n\n ffi = build_ffi_for_binding(\n module_prefix=_module_prefix,\n modules=_modules,\n pre_include=_OSX_PRE_INCLUDE,\n post_include=_OSX_POST_INCLUDE,\n libraries=_get_libraries(sys.platform)\n )\n lib = None\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n if cls.lib is not None:\n return\n\n with cls._init_lock:\n if cls.lib is None:\n cls.lib = load_library_for_binding(\n cls.ffi,\n cls._module_prefix,\n cls._modules,\n )\n\n res = cls.lib.Cryptography_add_osrandom_engine()\n assert res != 0\n\n @classmethod\n def init_static_locks(cls):\n with cls._lock_init_lock:\n cls._ensure_ffi_initialized()\n\n if not cls._lock_cb_handle:\n cls._lock_cb_handle = cls.ffi.callback(\n \"void(int, int, const char *, int)\",\n cls._lock_cb\n )\n\n # Use Python's implementation if available, importing _ssl triggers\n # the setup for this.\n __import__(\"_ssl\")\n\n if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:\n return\n\n # If nothing else has setup a locking callback already, we set up\n # our own\n num_locks = cls.lib.CRYPTO_num_locks()\n cls._locks = [threading.Lock() for n in 
range(num_locks)]\n\n cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)\n\n @classmethod\n def _lock_cb(cls, mode, n, file, line):\n lock = cls._locks[n]\n\n if mode & cls.lib.CRYPTO_LOCK:\n lock.acquire()\n elif mode & cls.lib.CRYPTO_UNLOCK:\n lock.release()\n else:\n raise RuntimeError(\n \"Unknown lock mode {0}: lock={1}, file={2}, line={3}.\".format(\n mode, n, file, line\n )\n )\n", "path": "src/cryptography/hazmat/bindings/openssl/binding.py"}]}
| 3,316 | 555 |
gh_patches_debug_27655
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-1481
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add leaderboard history
Each time the ranking for the leaderboard is generated, save the resulting ranking dictionary so that we know what the leaderboard looked like at a certain date/time. It would be helpful if someone says in their paper "We were first on the leaderboard (at the time of writing)" and we can then independently verify this. 
Maybe this would require moving the leaderboard out into its own app, deprecating the results list view in favour of a leaderboard detail view.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/evaluation/views.py`
Content:
```
1 from datetime import datetime, timedelta
2 from typing import Dict
3
4 from django.contrib.messages.views import SuccessMessageMixin
5 from django.contrib.postgres.aggregates import ArrayAgg
6 from django.core.exceptions import ObjectDoesNotExist
7 from django.core.files import File
8 from django.db.models import Q
9 from django.utils import timezone
10 from django.utils.functional import cached_property
11 from django.views.generic import CreateView, DetailView, ListView, UpdateView
12
13 from grandchallenge.core.permissions.mixins import (
14 UserIsChallengeAdminMixin,
15 UserIsChallengeParticipantOrAdminMixin,
16 )
17 from grandchallenge.core.views import Column, PaginatedTableListView
18 from grandchallenge.evaluation.forms import (
19 ConfigForm,
20 LegacySubmissionForm,
21 MethodForm,
22 SubmissionForm,
23 )
24 from grandchallenge.evaluation.models import (
25 Config,
26 Evaluation,
27 Method,
28 Submission,
29 )
30 from grandchallenge.jqfileupload.widgets.uploader import StagedAjaxFile
31 from grandchallenge.subdomains.utils import reverse
32 from grandchallenge.teams.models import Team
33
34
35 class ConfigUpdate(UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView):
36 form_class = ConfigForm
37 success_message = "Configuration successfully updated"
38
39 def get_object(self, queryset=None):
40 challenge = self.request.challenge
41 return challenge.evaluation_config
42
43
44 class MethodCreate(UserIsChallengeAdminMixin, CreateView):
45 model = Method
46 form_class = MethodForm
47
48 def get_form_kwargs(self):
49 kwargs = super().get_form_kwargs()
50 kwargs.update({"user": self.request.user})
51 return kwargs
52
53 def form_valid(self, form):
54 form.instance.creator = self.request.user
55 form.instance.challenge = self.request.challenge
56
57 uploaded_file: StagedAjaxFile = form.cleaned_data["chunked_upload"][0]
58 form.instance.staged_image_uuid = uploaded_file.uuid
59
60 return super().form_valid(form)
61
62
63 class MethodList(UserIsChallengeAdminMixin, ListView):
64 model = Method
65
66 def get_queryset(self):
67 queryset = super().get_queryset()
68 return queryset.filter(challenge=self.request.challenge)
69
70
71 class MethodDetail(UserIsChallengeAdminMixin, DetailView):
72 model = Method
73
74
75 class SubmissionCreateBase(SuccessMessageMixin, CreateView):
76 """
77 Base class for the submission creation forms.
78
79 It has no permissions, do not use it directly! See the subclasses.
80 """
81
82 model = Submission
83 success_message = (
84 "Your submission was successful. "
85 "Your result will appear on the leaderboard when it is ready."
86 )
87
88 def get_form_kwargs(self):
89 kwargs = super().get_form_kwargs()
90
91 config: Config = Config.objects.get(challenge=self.request.challenge)
92
93 kwargs.update(
94 {
95 "user": self.request.user,
96 "display_comment_field": config.allow_submission_comments,
97 "supplementary_file_choice": config.supplementary_file_choice,
98 "supplementary_file_label": config.supplementary_file_label,
99 "supplementary_file_help_text": config.supplementary_file_help_text,
100 "publication_url_choice": config.publication_url_choice,
101 "algorithm_submission": config.submission_kind
102 == config.SubmissionKind.ALGORITHM,
103 }
104 )
105
106 return kwargs
107
108 def get_context_data(self, **kwargs):
109 context = super().get_context_data(**kwargs)
110
111 config = Config.objects.get(challenge=self.request.challenge)
112
113 context.update(
114 self.get_next_submission(max_subs=config.daily_submission_limit)
115 )
116
117 pending_evaluations = Evaluation.objects.filter(
118 submission__challenge=self.request.challenge,
119 submission__creator=self.request.user,
120 status__in=(Evaluation.PENDING, Evaluation.STARTED),
121 ).count()
122
123 context.update({"pending_evaluations": pending_evaluations})
124
125 return context
126
127 def get_next_submission(
128 self, *, max_subs: int, period: timedelta = None, now: datetime = None
129 ) -> Dict:
130 """
131 Determines the number of submissions left for the user in a given time
132 period, and when they can next submit.
133
134 :return: A dictionary containing remaining_submissions (int) and
135 next_submission_at (datetime)
136 """
137 if now is None:
138 now = timezone.now()
139
140 if period is None:
141 period = timedelta(days=1)
142
143 subs = (
144 Submission.objects.filter(
145 challenge=self.request.challenge,
146 creator=self.request.user,
147 created__gte=now - period,
148 )
149 .exclude(evaluation__status=Evaluation.FAILURE)
150 .order_by("-created")
151 .distinct()
152 )
153
154 try:
155 next_sub_at = subs[max_subs - 1].created + period
156 except (IndexError, AssertionError):
157 next_sub_at = now
158
159 return {
160 "remaining_submissions": max_subs - len(subs),
161 "next_submission_at": next_sub_at,
162 }
163
164 def form_valid(self, form):
165 if form.instance.creator is None:
166 form.instance.creator = self.request.user
167
168 form.instance.challenge = self.request.challenge
169
170 if "algorithm" in form.cleaned_data:
171 # Algorithm submission
172 form.instance.algorithm_image = form.cleaned_data[
173 "algorithm"
174 ].latest_ready_image
175 else:
176 # Predictions file submission
177 uploaded_file = form.cleaned_data["chunked_upload"][0]
178 with uploaded_file.open() as f:
179 form.instance.predictions_file.save(
180 uploaded_file.name, File(f)
181 )
182
183 return super().form_valid(form)
184
185 def get_success_url(self):
186 return reverse(
187 "evaluation:list",
188 kwargs={"challenge_short_name": self.object.challenge.short_name},
189 )
190
191
192 class SubmissionCreate(
193 UserIsChallengeParticipantOrAdminMixin, SubmissionCreateBase
194 ):
195 form_class = SubmissionForm
196
197
198 class LegacySubmissionCreate(UserIsChallengeAdminMixin, SubmissionCreateBase):
199 form_class = LegacySubmissionForm
200
201
202 class SubmissionList(UserIsChallengeParticipantOrAdminMixin, ListView):
203 model = Submission
204
205 def get_queryset(self):
206 """Admins see everything, participants just their submissions."""
207 queryset = super().get_queryset()
208 challenge = self.request.challenge
209 if challenge.is_admin(self.request.user):
210 return queryset.filter(challenge=self.request.challenge)
211
212 else:
213 return queryset.filter(
214 Q(challenge=self.request.challenge),
215 Q(creator__pk=self.request.user.pk),
216 )
217
218
219 class SubmissionDetail(UserIsChallengeAdminMixin, DetailView):
220 # TODO - if participant: list only their submissions
221 model = Submission
222
223
224 class TeamContextMixin:
225 def get_context_data(self, *args, **kwargs):
226 context = super().get_context_data(*args, **kwargs)
227
228 evaluation_config = self.request.challenge.evaluation_config
229
230 if evaluation_config.use_teams:
231 user_teams = {
232 teammember.user.username: (team.name, team.get_absolute_url())
233 for team in Team.objects.filter(
234 challenge=self.request.challenge
235 )
236 .select_related("challenge")
237 .prefetch_related("teammember_set__user")
238 for teammember in team.teammember_set.all()
239 }
240 else:
241 user_teams = {}
242
243 context.update(
244 {"evaluation_config": evaluation_config, "user_teams": user_teams}
245 )
246
247 return context
248
249
250 class EvaluationList(
251 UserIsChallengeParticipantOrAdminMixin, TeamContextMixin, ListView
252 ):
253 model = Evaluation
254
255 def get_queryset(self):
256 """Admins see everything, participants just their evaluations."""
257 challenge = self.request.challenge
258
259 queryset = super().get_queryset()
260 queryset = queryset.select_related(
261 "submission__creator__user_profile", "submission__challenge"
262 ).filter(submission__challenge=challenge)
263
264 if challenge.is_admin(self.request.user):
265 return queryset
266 else:
267 return queryset.filter(
268 Q(submission__creator__pk=self.request.user.pk)
269 )
270
271
272 class EvaluationDetail(DetailView):
273 # TODO - if participant: list only their evaluations
274 model = Evaluation
275
276 def get_context_data(self, **kwargs):
277 context = super().get_context_data(**kwargs)
278
279 try:
280 metrics = self.object.outputs.get(
281 interface__slug="metrics-json-file"
282 ).value
283 except ObjectDoesNotExist:
284 metrics = None
285
286 context.update({"metrics": metrics})
287
288 return context
289
290
291 class LeaderboardDetail(TeamContextMixin, PaginatedTableListView):
292 model = Evaluation
293 template_name = "evaluation/leaderboard_detail.html"
294 row_template = "evaluation/leaderboard_row.html"
295 search_fields = ["pk", "submission__creator__username"]
296
297 @property
298 def columns(self):
299 columns = [
300 Column(title="#", sort_field="rank"),
301 Column(
302 title="User (Team)" if self.config.use_teams else "User",
303 sort_field="submission__creator__username",
304 ),
305 Column(title="Created", sort_field="created"),
306 ]
307
308 if self.config.scoring_method_choice == self.config.MEAN:
309 columns.append(Column(title="Mean Position", sort_field="rank"))
310 elif self.config.scoring_method_choice == self.config.MEDIAN:
311 columns.append(Column(title="Median Position", sort_field="rank"))
312
313 if self.config.scoring_method_choice == self.config.ABSOLUTE:
314 columns.append(
315 Column(title=self.config.score_title, sort_field="rank")
316 )
317 else:
318 columns.append(
319 Column(
320 title=f"{self.config.score_title} (Position)",
321 sort_field="rank",
322 toggleable=True,
323 )
324 )
325
326 for c in self.config.extra_results_columns:
327 columns.append(
328 Column(
329 title=c["title"]
330 if self.config.scoring_method_choice
331 == self.config.ABSOLUTE
332 else f"{c['title']} (Position)",
333 sort_field="rank",
334 toggleable=True,
335 )
336 )
337
338 if self.config.display_submission_comments:
339 columns.append(
340 Column(title="Comment", sort_field="submission__comment")
341 )
342
343 if self.config.show_publication_url:
344 columns.append(
345 Column(
346 title="Publication",
347 sort_field="submission__publication_url",
348 )
349 )
350
351 if self.config.show_supplementary_file_link:
352 columns.append(
353 Column(
354 title=self.config.supplementary_file_label,
355 sort_field="submission__supplementary_file",
356 )
357 )
358
359 return columns
360
361 @cached_property
362 def config(self):
363 return self.request.challenge.evaluation_config
364
365 def get_row_context(self, job, *args, **kwargs):
366 return {"evaluation": job, "evaluation_config": self.config}
367
368 def get_unfiltered_queryset(self):
369 queryset = super().get_queryset()
370 queryset = (
371 queryset.select_related(
372 "submission__creator__user_profile", "submission__challenge"
373 )
374 .filter(
375 submission__challenge=self.request.challenge,
376 published=True,
377 status=Evaluation.SUCCESS,
378 rank__gt=0,
379 )
380 .annotate(
381 metrics=ArrayAgg(
382 "outputs__value",
383 filter=Q(outputs__interface__slug="metrics-json-file"),
384 )
385 )
386 )
387 return queryset
388
389
390 class EvaluationUpdate(
391 UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView
392 ):
393 model = Evaluation
394 fields = ("published",)
395 success_message = "Result successfully updated."
396
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/grandchallenge/evaluation/views.py b/app/grandchallenge/evaluation/views.py
--- a/app/grandchallenge/evaluation/views.py
+++ b/app/grandchallenge/evaluation/views.py
@@ -1,6 +1,7 @@
from datetime import datetime, timedelta
from typing import Dict
+from dateutil.relativedelta import relativedelta
from django.contrib.messages.views import SuccessMessageMixin
from django.contrib.postgres.aggregates import ArrayAgg
from django.core.exceptions import ObjectDoesNotExist
@@ -367,6 +368,7 @@
def get_unfiltered_queryset(self):
queryset = super().get_queryset()
+ queryset = self.filter_by_date(queryset=queryset)
queryset = (
queryset.select_related(
"submission__creator__user_profile", "submission__challenge"
@@ -386,6 +388,16 @@
)
return queryset
+ def filter_by_date(self, queryset):
+ if "leaderboardDate" in self.request.GET:
+ year, month, day = self.request.GET["leaderboardDate"].split("-")
+ before = datetime(
+ year=int(year), month=int(month), day=int(day),
+ ) + relativedelta(days=1)
+ return queryset.filter(submission__created__lt=before)
+ else:
+ return queryset
+
class EvaluationUpdate(
UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView
|
{"golden_diff": "diff --git a/app/grandchallenge/evaluation/views.py b/app/grandchallenge/evaluation/views.py\n--- a/app/grandchallenge/evaluation/views.py\n+++ b/app/grandchallenge/evaluation/views.py\n@@ -1,6 +1,7 @@\n from datetime import datetime, timedelta\n from typing import Dict\n \n+from dateutil.relativedelta import relativedelta\n from django.contrib.messages.views import SuccessMessageMixin\n from django.contrib.postgres.aggregates import ArrayAgg\n from django.core.exceptions import ObjectDoesNotExist\n@@ -367,6 +368,7 @@\n \n def get_unfiltered_queryset(self):\n queryset = super().get_queryset()\n+ queryset = self.filter_by_date(queryset=queryset)\n queryset = (\n queryset.select_related(\n \"submission__creator__user_profile\", \"submission__challenge\"\n@@ -386,6 +388,16 @@\n )\n return queryset\n \n+ def filter_by_date(self, queryset):\n+ if \"leaderboardDate\" in self.request.GET:\n+ year, month, day = self.request.GET[\"leaderboardDate\"].split(\"-\")\n+ before = datetime(\n+ year=int(year), month=int(month), day=int(day),\n+ ) + relativedelta(days=1)\n+ return queryset.filter(submission__created__lt=before)\n+ else:\n+ return queryset\n+\n \n class EvaluationUpdate(\n UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView\n", "issue": "Add leaderboard history\nEach time the ranking for the leaderboard is generated save the resulting ranking dictionary so that we know what the leaderboard looked like at a certain date/time. Would be helpful if someone says in their paper \"We were first on the leaderboard (at the time of writing)\" and we can then independently verify this. \r\n\r\nMaybe this would require moving out the leaderboard into its own app, deprecating the results list view for a leaderboard detail view.\n", "before_files": [{"content": "from datetime import datetime, timedelta\nfrom typing import Dict\n\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.contrib.postgres.aggregates import ArrayAgg\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.files import File\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django.utils.functional import cached_property\nfrom django.views.generic import CreateView, DetailView, ListView, UpdateView\n\nfrom grandchallenge.core.permissions.mixins import (\n UserIsChallengeAdminMixin,\n UserIsChallengeParticipantOrAdminMixin,\n)\nfrom grandchallenge.core.views import Column, PaginatedTableListView\nfrom grandchallenge.evaluation.forms import (\n ConfigForm,\n LegacySubmissionForm,\n MethodForm,\n SubmissionForm,\n)\nfrom grandchallenge.evaluation.models import (\n Config,\n Evaluation,\n Method,\n Submission,\n)\nfrom grandchallenge.jqfileupload.widgets.uploader import StagedAjaxFile\nfrom grandchallenge.subdomains.utils import reverse\nfrom grandchallenge.teams.models import Team\n\n\nclass ConfigUpdate(UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView):\n form_class = ConfigForm\n success_message = \"Configuration successfully updated\"\n\n def get_object(self, queryset=None):\n challenge = self.request.challenge\n return challenge.evaluation_config\n\n\nclass MethodCreate(UserIsChallengeAdminMixin, CreateView):\n model = Method\n form_class = MethodForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs.update({\"user\": self.request.user})\n return kwargs\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n form.instance.challenge = self.request.challenge\n\n uploaded_file: StagedAjaxFile 
= form.cleaned_data[\"chunked_upload\"][0]\n form.instance.staged_image_uuid = uploaded_file.uuid\n\n return super().form_valid(form)\n\n\nclass MethodList(UserIsChallengeAdminMixin, ListView):\n model = Method\n\n def get_queryset(self):\n queryset = super().get_queryset()\n return queryset.filter(challenge=self.request.challenge)\n\n\nclass MethodDetail(UserIsChallengeAdminMixin, DetailView):\n model = Method\n\n\nclass SubmissionCreateBase(SuccessMessageMixin, CreateView):\n \"\"\"\n Base class for the submission creation forms.\n\n It has no permissions, do not use it directly! See the subclasses.\n \"\"\"\n\n model = Submission\n success_message = (\n \"Your submission was successful. \"\n \"Your result will appear on the leaderboard when it is ready.\"\n )\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n\n config: Config = Config.objects.get(challenge=self.request.challenge)\n\n kwargs.update(\n {\n \"user\": self.request.user,\n \"display_comment_field\": config.allow_submission_comments,\n \"supplementary_file_choice\": config.supplementary_file_choice,\n \"supplementary_file_label\": config.supplementary_file_label,\n \"supplementary_file_help_text\": config.supplementary_file_help_text,\n \"publication_url_choice\": config.publication_url_choice,\n \"algorithm_submission\": config.submission_kind\n == config.SubmissionKind.ALGORITHM,\n }\n )\n\n return kwargs\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n config = Config.objects.get(challenge=self.request.challenge)\n\n context.update(\n self.get_next_submission(max_subs=config.daily_submission_limit)\n )\n\n pending_evaluations = Evaluation.objects.filter(\n submission__challenge=self.request.challenge,\n submission__creator=self.request.user,\n status__in=(Evaluation.PENDING, Evaluation.STARTED),\n ).count()\n\n context.update({\"pending_evaluations\": pending_evaluations})\n\n return context\n\n def get_next_submission(\n self, *, max_subs: int, period: timedelta = None, now: datetime = None\n ) -> Dict:\n \"\"\"\n Determines the number of submissions left for the user in a given time\n period, and when they can next submit.\n\n :return: A dictionary containing remaining_submissions (int) and\n next_submission_at (datetime)\n \"\"\"\n if now is None:\n now = timezone.now()\n\n if period is None:\n period = timedelta(days=1)\n\n subs = (\n Submission.objects.filter(\n challenge=self.request.challenge,\n creator=self.request.user,\n created__gte=now - period,\n )\n .exclude(evaluation__status=Evaluation.FAILURE)\n .order_by(\"-created\")\n .distinct()\n )\n\n try:\n next_sub_at = subs[max_subs - 1].created + period\n except (IndexError, AssertionError):\n next_sub_at = now\n\n return {\n \"remaining_submissions\": max_subs - len(subs),\n \"next_submission_at\": next_sub_at,\n }\n\n def form_valid(self, form):\n if form.instance.creator is None:\n form.instance.creator = self.request.user\n\n form.instance.challenge = self.request.challenge\n\n if \"algorithm\" in form.cleaned_data:\n # Algorithm submission\n form.instance.algorithm_image = form.cleaned_data[\n \"algorithm\"\n ].latest_ready_image\n else:\n # Predictions file submission\n uploaded_file = form.cleaned_data[\"chunked_upload\"][0]\n with uploaded_file.open() as f:\n form.instance.predictions_file.save(\n uploaded_file.name, File(f)\n )\n\n return super().form_valid(form)\n\n def get_success_url(self):\n return reverse(\n \"evaluation:list\",\n kwargs={\"challenge_short_name\": 
self.object.challenge.short_name},\n )\n\n\nclass SubmissionCreate(\n UserIsChallengeParticipantOrAdminMixin, SubmissionCreateBase\n):\n form_class = SubmissionForm\n\n\nclass LegacySubmissionCreate(UserIsChallengeAdminMixin, SubmissionCreateBase):\n form_class = LegacySubmissionForm\n\n\nclass SubmissionList(UserIsChallengeParticipantOrAdminMixin, ListView):\n model = Submission\n\n def get_queryset(self):\n \"\"\"Admins see everything, participants just their submissions.\"\"\"\n queryset = super().get_queryset()\n challenge = self.request.challenge\n if challenge.is_admin(self.request.user):\n return queryset.filter(challenge=self.request.challenge)\n\n else:\n return queryset.filter(\n Q(challenge=self.request.challenge),\n Q(creator__pk=self.request.user.pk),\n )\n\n\nclass SubmissionDetail(UserIsChallengeAdminMixin, DetailView):\n # TODO - if participant: list only their submissions\n model = Submission\n\n\nclass TeamContextMixin:\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n\n evaluation_config = self.request.challenge.evaluation_config\n\n if evaluation_config.use_teams:\n user_teams = {\n teammember.user.username: (team.name, team.get_absolute_url())\n for team in Team.objects.filter(\n challenge=self.request.challenge\n )\n .select_related(\"challenge\")\n .prefetch_related(\"teammember_set__user\")\n for teammember in team.teammember_set.all()\n }\n else:\n user_teams = {}\n\n context.update(\n {\"evaluation_config\": evaluation_config, \"user_teams\": user_teams}\n )\n\n return context\n\n\nclass EvaluationList(\n UserIsChallengeParticipantOrAdminMixin, TeamContextMixin, ListView\n):\n model = Evaluation\n\n def get_queryset(self):\n \"\"\"Admins see everything, participants just their evaluations.\"\"\"\n challenge = self.request.challenge\n\n queryset = super().get_queryset()\n queryset = queryset.select_related(\n \"submission__creator__user_profile\", \"submission__challenge\"\n ).filter(submission__challenge=challenge)\n\n if challenge.is_admin(self.request.user):\n return queryset\n else:\n return queryset.filter(\n Q(submission__creator__pk=self.request.user.pk)\n )\n\n\nclass EvaluationDetail(DetailView):\n # TODO - if participant: list only their evaluations\n model = Evaluation\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n try:\n metrics = self.object.outputs.get(\n interface__slug=\"metrics-json-file\"\n ).value\n except ObjectDoesNotExist:\n metrics = None\n\n context.update({\"metrics\": metrics})\n\n return context\n\n\nclass LeaderboardDetail(TeamContextMixin, PaginatedTableListView):\n model = Evaluation\n template_name = \"evaluation/leaderboard_detail.html\"\n row_template = \"evaluation/leaderboard_row.html\"\n search_fields = [\"pk\", \"submission__creator__username\"]\n\n @property\n def columns(self):\n columns = [\n Column(title=\"#\", sort_field=\"rank\"),\n Column(\n title=\"User (Team)\" if self.config.use_teams else \"User\",\n sort_field=\"submission__creator__username\",\n ),\n Column(title=\"Created\", sort_field=\"created\"),\n ]\n\n if self.config.scoring_method_choice == self.config.MEAN:\n columns.append(Column(title=\"Mean Position\", sort_field=\"rank\"))\n elif self.config.scoring_method_choice == self.config.MEDIAN:\n columns.append(Column(title=\"Median Position\", sort_field=\"rank\"))\n\n if self.config.scoring_method_choice == self.config.ABSOLUTE:\n columns.append(\n Column(title=self.config.score_title, sort_field=\"rank\")\n )\n 
else:\n columns.append(\n Column(\n title=f\"{self.config.score_title} (Position)\",\n sort_field=\"rank\",\n toggleable=True,\n )\n )\n\n for c in self.config.extra_results_columns:\n columns.append(\n Column(\n title=c[\"title\"]\n if self.config.scoring_method_choice\n == self.config.ABSOLUTE\n else f\"{c['title']} (Position)\",\n sort_field=\"rank\",\n toggleable=True,\n )\n )\n\n if self.config.display_submission_comments:\n columns.append(\n Column(title=\"Comment\", sort_field=\"submission__comment\")\n )\n\n if self.config.show_publication_url:\n columns.append(\n Column(\n title=\"Publication\",\n sort_field=\"submission__publication_url\",\n )\n )\n\n if self.config.show_supplementary_file_link:\n columns.append(\n Column(\n title=self.config.supplementary_file_label,\n sort_field=\"submission__supplementary_file\",\n )\n )\n\n return columns\n\n @cached_property\n def config(self):\n return self.request.challenge.evaluation_config\n\n def get_row_context(self, job, *args, **kwargs):\n return {\"evaluation\": job, \"evaluation_config\": self.config}\n\n def get_unfiltered_queryset(self):\n queryset = super().get_queryset()\n queryset = (\n queryset.select_related(\n \"submission__creator__user_profile\", \"submission__challenge\"\n )\n .filter(\n submission__challenge=self.request.challenge,\n published=True,\n status=Evaluation.SUCCESS,\n rank__gt=0,\n )\n .annotate(\n metrics=ArrayAgg(\n \"outputs__value\",\n filter=Q(outputs__interface__slug=\"metrics-json-file\"),\n )\n )\n )\n return queryset\n\n\nclass EvaluationUpdate(\n UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView\n):\n model = Evaluation\n fields = (\"published\",)\n success_message = \"Result successfully updated.\"\n", "path": "app/grandchallenge/evaluation/views.py"}], "after_files": [{"content": "from datetime import datetime, timedelta\nfrom typing import Dict\n\nfrom dateutil.relativedelta import relativedelta\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.contrib.postgres.aggregates import ArrayAgg\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.core.files import File\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django.utils.functional import cached_property\nfrom django.views.generic import CreateView, DetailView, ListView, UpdateView\n\nfrom grandchallenge.core.permissions.mixins import (\n UserIsChallengeAdminMixin,\n UserIsChallengeParticipantOrAdminMixin,\n)\nfrom grandchallenge.core.views import Column, PaginatedTableListView\nfrom grandchallenge.evaluation.forms import (\n ConfigForm,\n LegacySubmissionForm,\n MethodForm,\n SubmissionForm,\n)\nfrom grandchallenge.evaluation.models import (\n Config,\n Evaluation,\n Method,\n Submission,\n)\nfrom grandchallenge.jqfileupload.widgets.uploader import StagedAjaxFile\nfrom grandchallenge.subdomains.utils import reverse\nfrom grandchallenge.teams.models import Team\n\n\nclass ConfigUpdate(UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView):\n form_class = ConfigForm\n success_message = \"Configuration successfully updated\"\n\n def get_object(self, queryset=None):\n challenge = self.request.challenge\n return challenge.evaluation_config\n\n\nclass MethodCreate(UserIsChallengeAdminMixin, CreateView):\n model = Method\n form_class = MethodForm\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs.update({\"user\": self.request.user})\n return kwargs\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n 
form.instance.challenge = self.request.challenge\n\n uploaded_file: StagedAjaxFile = form.cleaned_data[\"chunked_upload\"][0]\n form.instance.staged_image_uuid = uploaded_file.uuid\n\n return super().form_valid(form)\n\n\nclass MethodList(UserIsChallengeAdminMixin, ListView):\n model = Method\n\n def get_queryset(self):\n queryset = super().get_queryset()\n return queryset.filter(challenge=self.request.challenge)\n\n\nclass MethodDetail(UserIsChallengeAdminMixin, DetailView):\n model = Method\n\n\nclass SubmissionCreateBase(SuccessMessageMixin, CreateView):\n \"\"\"\n Base class for the submission creation forms.\n\n It has no permissions, do not use it directly! See the subclasses.\n \"\"\"\n\n model = Submission\n success_message = (\n \"Your submission was successful. \"\n \"Your result will appear on the leaderboard when it is ready.\"\n )\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n\n config: Config = Config.objects.get(challenge=self.request.challenge)\n\n kwargs.update(\n {\n \"user\": self.request.user,\n \"display_comment_field\": config.allow_submission_comments,\n \"supplementary_file_choice\": config.supplementary_file_choice,\n \"supplementary_file_label\": config.supplementary_file_label,\n \"supplementary_file_help_text\": config.supplementary_file_help_text,\n \"publication_url_choice\": config.publication_url_choice,\n \"algorithm_submission\": config.submission_kind\n == config.SubmissionKind.ALGORITHM,\n }\n )\n\n return kwargs\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n config = Config.objects.get(challenge=self.request.challenge)\n\n context.update(\n self.get_next_submission(max_subs=config.daily_submission_limit)\n )\n\n pending_evaluations = Evaluation.objects.filter(\n submission__challenge=self.request.challenge,\n submission__creator=self.request.user,\n status__in=(Evaluation.PENDING, Evaluation.STARTED),\n ).count()\n\n context.update({\"pending_evaluations\": pending_evaluations})\n\n return context\n\n def get_next_submission(\n self, *, max_subs: int, period: timedelta = None, now: datetime = None\n ) -> Dict:\n \"\"\"\n Determines the number of submissions left for the user in a given time\n period, and when they can next submit.\n\n :return: A dictionary containing remaining_submissions (int) and\n next_submission_at (datetime)\n \"\"\"\n if now is None:\n now = timezone.now()\n\n if period is None:\n period = timedelta(days=1)\n\n subs = (\n Submission.objects.filter(\n challenge=self.request.challenge,\n creator=self.request.user,\n created__gte=now - period,\n )\n .exclude(evaluation__status=Evaluation.FAILURE)\n .order_by(\"-created\")\n .distinct()\n )\n\n try:\n next_sub_at = subs[max_subs - 1].created + period\n except (IndexError, AssertionError):\n next_sub_at = now\n\n return {\n \"remaining_submissions\": max_subs - len(subs),\n \"next_submission_at\": next_sub_at,\n }\n\n def form_valid(self, form):\n if form.instance.creator is None:\n form.instance.creator = self.request.user\n\n form.instance.challenge = self.request.challenge\n\n if \"algorithm\" in form.cleaned_data:\n # Algorithm submission\n form.instance.algorithm_image = form.cleaned_data[\n \"algorithm\"\n ].latest_ready_image\n else:\n # Predictions file submission\n uploaded_file = form.cleaned_data[\"chunked_upload\"][0]\n with uploaded_file.open() as f:\n form.instance.predictions_file.save(\n uploaded_file.name, File(f)\n )\n\n return super().form_valid(form)\n\n def get_success_url(self):\n return 
reverse(\n \"evaluation:list\",\n kwargs={\"challenge_short_name\": self.object.challenge.short_name},\n )\n\n\nclass SubmissionCreate(\n UserIsChallengeParticipantOrAdminMixin, SubmissionCreateBase\n):\n form_class = SubmissionForm\n\n\nclass LegacySubmissionCreate(UserIsChallengeAdminMixin, SubmissionCreateBase):\n form_class = LegacySubmissionForm\n\n\nclass SubmissionList(UserIsChallengeParticipantOrAdminMixin, ListView):\n model = Submission\n\n def get_queryset(self):\n \"\"\"Admins see everything, participants just their submissions.\"\"\"\n queryset = super().get_queryset()\n challenge = self.request.challenge\n if challenge.is_admin(self.request.user):\n return queryset.filter(challenge=self.request.challenge)\n\n else:\n return queryset.filter(\n Q(challenge=self.request.challenge),\n Q(creator__pk=self.request.user.pk),\n )\n\n\nclass SubmissionDetail(UserIsChallengeAdminMixin, DetailView):\n # TODO - if participant: list only their submissions\n model = Submission\n\n\nclass TeamContextMixin:\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n\n evaluation_config = self.request.challenge.evaluation_config\n\n if evaluation_config.use_teams:\n user_teams = {\n teammember.user.username: (team.name, team.get_absolute_url())\n for team in Team.objects.filter(\n challenge=self.request.challenge\n )\n .select_related(\"challenge\")\n .prefetch_related(\"teammember_set__user\")\n for teammember in team.teammember_set.all()\n }\n else:\n user_teams = {}\n\n context.update(\n {\"evaluation_config\": evaluation_config, \"user_teams\": user_teams}\n )\n\n return context\n\n\nclass EvaluationList(\n UserIsChallengeParticipantOrAdminMixin, TeamContextMixin, ListView\n):\n model = Evaluation\n\n def get_queryset(self):\n \"\"\"Admins see everything, participants just their evaluations.\"\"\"\n challenge = self.request.challenge\n\n queryset = super().get_queryset()\n queryset = queryset.select_related(\n \"submission__creator__user_profile\", \"submission__challenge\"\n ).filter(submission__challenge=challenge)\n\n if challenge.is_admin(self.request.user):\n return queryset\n else:\n return queryset.filter(\n Q(submission__creator__pk=self.request.user.pk)\n )\n\n\nclass EvaluationDetail(DetailView):\n # TODO - if participant: list only their evaluations\n model = Evaluation\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n try:\n metrics = self.object.outputs.get(\n interface__slug=\"metrics-json-file\"\n ).value\n except ObjectDoesNotExist:\n metrics = None\n\n context.update({\"metrics\": metrics})\n\n return context\n\n\nclass LeaderboardDetail(TeamContextMixin, PaginatedTableListView):\n model = Evaluation\n template_name = \"evaluation/leaderboard_detail.html\"\n row_template = \"evaluation/leaderboard_row.html\"\n search_fields = [\"pk\", \"submission__creator__username\"]\n\n @property\n def columns(self):\n columns = [\n Column(title=\"#\", sort_field=\"rank\"),\n Column(\n title=\"User (Team)\" if self.config.use_teams else \"User\",\n sort_field=\"submission__creator__username\",\n ),\n Column(title=\"Created\", sort_field=\"created\"),\n ]\n\n if self.config.scoring_method_choice == self.config.MEAN:\n columns.append(Column(title=\"Mean Position\", sort_field=\"rank\"))\n elif self.config.scoring_method_choice == self.config.MEDIAN:\n columns.append(Column(title=\"Median Position\", sort_field=\"rank\"))\n\n if self.config.scoring_method_choice == self.config.ABSOLUTE:\n 
columns.append(\n Column(title=self.config.score_title, sort_field=\"rank\")\n )\n else:\n columns.append(\n Column(\n title=f\"{self.config.score_title} (Position)\",\n sort_field=\"rank\",\n toggleable=True,\n )\n )\n\n for c in self.config.extra_results_columns:\n columns.append(\n Column(\n title=c[\"title\"]\n if self.config.scoring_method_choice\n == self.config.ABSOLUTE\n else f\"{c['title']} (Position)\",\n sort_field=\"rank\",\n toggleable=True,\n )\n )\n\n if self.config.display_submission_comments:\n columns.append(\n Column(title=\"Comment\", sort_field=\"submission__comment\")\n )\n\n if self.config.show_publication_url:\n columns.append(\n Column(\n title=\"Publication\",\n sort_field=\"submission__publication_url\",\n )\n )\n\n if self.config.show_supplementary_file_link:\n columns.append(\n Column(\n title=self.config.supplementary_file_label,\n sort_field=\"submission__supplementary_file\",\n )\n )\n\n return columns\n\n @cached_property\n def config(self):\n return self.request.challenge.evaluation_config\n\n def get_row_context(self, job, *args, **kwargs):\n return {\"evaluation\": job, \"evaluation_config\": self.config}\n\n def get_unfiltered_queryset(self):\n queryset = super().get_queryset()\n queryset = self.filter_by_date(queryset=queryset)\n queryset = (\n queryset.select_related(\n \"submission__creator__user_profile\", \"submission__challenge\"\n )\n .filter(\n submission__challenge=self.request.challenge,\n published=True,\n status=Evaluation.SUCCESS,\n rank__gt=0,\n )\n .annotate(\n metrics=ArrayAgg(\n \"outputs__value\",\n filter=Q(outputs__interface__slug=\"metrics-json-file\"),\n )\n )\n )\n return queryset\n\n def filter_by_date(self, queryset):\n if \"leaderboardDate\" in self.request.GET:\n year, month, day = self.request.GET[\"leaderboardDate\"].split(\"-\")\n before = datetime(\n year=int(year), month=int(month), day=int(day),\n ) + relativedelta(days=1)\n return queryset.filter(submission__created__lt=before)\n else:\n return queryset\n\n\nclass EvaluationUpdate(\n UserIsChallengeAdminMixin, SuccessMessageMixin, UpdateView\n):\n model = Evaluation\n fields = (\"published\",)\n success_message = \"Result successfully updated.\"\n", "path": "app/grandchallenge/evaluation/views.py"}]}
| 3,850 | 315 |
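The `verification_info` blob in the row above carries a `golden_diff` whose main addition is a `filter_by_date` step for the leaderboard view: when a `leaderboardDate` query parameter (YYYY-MM-DD) is present, the queryset is restricted to submissions created strictly before the start of the following day. The snippet below is a minimal standalone restatement of that cutoff logic, offered only as an illustration; the helper name and the use of `datetime.timedelta` instead of `dateutil.relativedelta` are assumptions of this sketch, not part of the patch.

```python
from datetime import date, datetime, timedelta

def leaderboard_cutoff(date_str: str) -> datetime:
    """Exclusive upper bound for a historical leaderboard: start of the day after date_str (YYYY-MM-DD)."""
    day = date.fromisoformat(date_str)
    return datetime(day.year, day.month, day.day) + timedelta(days=1)

# Submissions created strictly before this instant make up the leaderboard as of 2020-03-01.
assert leaderboard_cutoff("2020-03-01") == datetime(2020, 3, 2)
```

Filtering with `submission__created__lt=cutoff`, as the patch does, then reproduces the leaderboard as it stood at the end of the requested day.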
gh_patches_debug_4891
|
rasdani/github-patches
|
git_diff
|
qutip__qutip-949
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Optimization flags in setup.py should be completely avoided
Hard-coding the compiler flag `-march=native` in setup.py completely destroys the possibility of setting up Qutip on a heterogeneous cluster. In general, it brings a lot of problems for people who don't have much experience debugging "illegal instruction" errors, which often happen if you compile the module on a different machine than the one you run it on.
If you are sure you need an optimized build for localhost, you might use
```
export CFLAGS="-O3 -march=native"
export CXXFLAGS="$CFLAGS"
pip install qutip
```
instead, or provide a separate option for the setup.py script.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 """QuTiP: The Quantum Toolbox in Python
3
4 QuTiP is open-source software for simulating the dynamics of closed and open
5 quantum systems. The QuTiP library depends on the excellent Numpy, Scipy, and
6 Cython numerical packages. In addition, graphical output is provided by
7 Matplotlib. QuTiP aims to provide user-friendly and efficient numerical
8 simulations of a wide variety of quantum mechanical problems, including those
9 with Hamiltonians and/or collapse operators with arbitrary time-dependence,
10 commonly found in a wide range of physics applications. QuTiP is freely
11 available for use and/or modification on all common platforms. Being free of
12 any licensing fees, QuTiP is ideal for exploring quantum mechanics in research
13 as well as in the classroom.
14 """
15
16 DOCLINES = __doc__.split('\n')
17
18 CLASSIFIERS = """\
19 Development Status :: 4 - Beta
20 Intended Audience :: Science/Research
21 License :: OSI Approved :: BSD License
22 Programming Language :: Python
23 Programming Language :: Python :: 3
24 Topic :: Scientific/Engineering
25 Operating System :: MacOS
26 Operating System :: POSIX
27 Operating System :: Unix
28 Operating System :: Microsoft :: Windows
29 """
30
31 # import statements
32 import os
33 import sys
34 # The following is required to get unit tests up and running.
35 # If the user doesn't have, then that's OK, we'll just skip unit tests.
36 try:
37 from setuptools import setup, Extension
38 TEST_SUITE = 'nose.collector'
39 TESTS_REQUIRE = ['nose']
40 EXTRA_KWARGS = {
41 'test_suite': TEST_SUITE,
42 'tests_require': TESTS_REQUIRE
43 }
44 except:
45 from distutils.core import setup
46 from distutils.extension import Extension
47 EXTRA_KWARGS = {}
48
49 try:
50 import numpy as np
51 except:
52 np = None
53
54 from Cython.Build import cythonize
55 from Cython.Distutils import build_ext
56
57 # all information about QuTiP goes here
58 MAJOR = 4
59 MINOR = 4
60 MICRO = 0
61 ISRELEASED = False
62 VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
63 REQUIRES = ['numpy (>=1.8)', 'scipy (>=0.15)', 'cython (>=0.21)']
64 INSTALL_REQUIRES = ['numpy>=1.8', 'scipy>=0.15', 'cython>=0.21']
65 PACKAGES = ['qutip', 'qutip/ui', 'qutip/cy', 'qutip/cy/src',
66 'qutip/qip', 'qutip/qip/models',
67 'qutip/qip/algorithms', 'qutip/control', 'qutip/nonmarkov',
68 'qutip/_mkl', 'qutip/tests', 'qutip/legacy',
69 'qutip/cy/openmp', 'qutip/cy/openmp/src']
70 PACKAGE_DATA = {
71 'qutip': ['configspec.ini'],
72 'qutip/tests': ['*.ini'],
73 'qutip/cy': ['*.pxi', '*.pxd', '*.pyx'],
74 'qutip/cy/src': ['*.cpp', '*.hpp'],
75 'qutip/control': ['*.pyx'],
76 'qutip/cy/openmp': ['*.pxd', '*.pyx'],
77 'qutip/cy/openmp/src': ['*.cpp', '*.hpp']
78 }
79 # If we're missing numpy, exclude import directories until we can
80 # figure them out properly.
81 INCLUDE_DIRS = [np.get_include()] if np is not None else []
82 # ajgpitch Mar 2017:
83 # This HEADERS did not work, but I will leave it in anyway, as it is supposed to.
84 # I had to do the nasty thing with PACKAGES and PACKAGE_DATA above.
85 HEADERS = ['qutip/cy/src/zspmv.hpp', 'qutip/cy/openmp/src/zspmv_openmp.hpp']
86 NAME = "qutip"
87 AUTHOR = ("Alexander Pitchford, Paul D. Nation, Robert J. Johansson, "
88 "Chris Granade, Arne Grimsmo")
89 AUTHOR_EMAIL = ("[email protected], [email protected], "
90 "[email protected], [email protected], "
91 "[email protected]")
92 LICENSE = "BSD"
93 DESCRIPTION = DOCLINES[0]
94 LONG_DESCRIPTION = "\n".join(DOCLINES[2:])
95 KEYWORDS = "quantum physics dynamics"
96 URL = "http://qutip.org"
97 CLASSIFIERS = [_f for _f in CLASSIFIERS.split('\n') if _f]
98 PLATFORMS = ["Linux", "Mac OSX", "Unix", "Windows"]
99
100
101 def git_short_hash():
102 try:
103 git_str = "+" + os.popen('git log -1 --format="%h"').read().strip()
104 except:
105 git_str = ""
106 else:
107 if git_str == '+': #fixes setuptools PEP issues with versioning
108 git_str = ''
109 return git_str
110
111 FULLVERSION = VERSION
112 if not ISRELEASED:
113 FULLVERSION += '.dev'+str(MICRO)+git_short_hash()
114
115 # NumPy's distutils reads in versions differently than
116 # our fallback. To make sure that versions are added to
117 # egg-info correctly, we need to add FULLVERSION to
118 # EXTRA_KWARGS if NumPy wasn't imported correctly.
119 if np is None:
120 EXTRA_KWARGS['version'] = FULLVERSION
121
122
123 def write_version_py(filename='qutip/version.py'):
124 cnt = """\
125 # THIS FILE IS GENERATED FROM QUTIP SETUP.PY
126 short_version = '%(version)s'
127 version = '%(fullversion)s'
128 release = %(isrelease)s
129 """
130 a = open(filename, 'w')
131 try:
132 a.write(cnt % {'version': VERSION, 'fullversion':
133 FULLVERSION, 'isrelease': str(ISRELEASED)})
134 finally:
135 a.close()
136
137 local_path = os.path.dirname(os.path.abspath(sys.argv[0]))
138 os.chdir(local_path)
139 sys.path.insert(0, local_path)
140 sys.path.insert(0, os.path.join(local_path, 'qutip')) # to retrive _version
141
142 # always rewrite _version
143 if os.path.exists('qutip/version.py'):
144 os.remove('qutip/version.py')
145
146 write_version_py()
147
148 # Add Cython extensions here
149 cy_exts = ['spmatfuncs', 'stochastic', 'sparse_utils', 'graph_utils', 'interpolate',
150 'spmath', 'heom', 'math', 'spconvert', 'ptrace', 'checks', 'brtools',
151 'brtools_checks', 'br_tensor', 'inter', 'cqobjevo', 'cqobjevo_factor', 'piqs']
152
153 # Extra link args
154 _link_flags = []
155
156 # If on Win and Python version >= 3.5 and not in MSYS2 (i.e. Visual studio compile)
157 if (sys.platform == 'win32' and int(str(sys.version_info[0])+str(sys.version_info[1])) >= 35
158 and os.environ.get('MSYSTEM') is None):
159 _compiler_flags = ['/w', '/Ox']
160 # Everything else
161 else:
162 _compiler_flags = ['-w', '-O3', '-march=native', '-funroll-loops']
163 if sys.platform == 'darwin':
164 # These are needed for compiling on OSX 10.14+
165 _compiler_flags.append('-mmacosx-version-min=10.9')
166 _link_flags.append('-mmacosx-version-min=10.9')
167
168
169
170 EXT_MODULES =[]
171 # Add Cython files from qutip/cy
172 for ext in cy_exts:
173 _mod = Extension('qutip.cy.'+ext,
174 sources = ['qutip/cy/'+ext+'.pyx', 'qutip/cy/src/zspmv.cpp'],
175 include_dirs = [np.get_include()],
176 extra_compile_args=_compiler_flags,
177 extra_link_args=_link_flags,
178 language='c++')
179 EXT_MODULES.append(_mod)
180
181 # Add Cython files from qutip/control
182 _mod = Extension('qutip.control.cy_grape',
183 sources = ['qutip/control/cy_grape.pyx'],
184 include_dirs = [np.get_include()],
185 extra_compile_args=_compiler_flags,
186 extra_link_args=_link_flags,
187 language='c++')
188 EXT_MODULES.append(_mod)
189
190
191 # Add optional ext modules here
192 if "--with-openmp" in sys.argv:
193 sys.argv.remove("--with-openmp")
194 if (sys.platform == 'win32'
195 and int(str(sys.version_info[0])+str(sys.version_info[1])) >= 35):
196 omp_flags = ['/openmp']
197 omp_args = []
198 else:
199 omp_flags = ['-fopenmp']
200 omp_args = omp_flags
201 _mod = Extension('qutip.cy.openmp.parfuncs',
202 sources = ['qutip/cy/openmp/parfuncs.pyx',
203 'qutip/cy/openmp/src/zspmv_openmp.cpp'],
204 include_dirs = [np.get_include()],
205 extra_compile_args=_compiler_flags+omp_flags,
206 extra_link_args=omp_args+_link_flags,
207 language='c++')
208 EXT_MODULES.append(_mod)
209 # Add benchmark pyx
210 _mod = Extension('qutip.cy.openmp.benchmark',
211 sources = ['qutip/cy/openmp/benchmark.pyx'],
212 include_dirs = [np.get_include()],
213 extra_compile_args=_compiler_flags,
214 extra_link_args=_link_flags,
215 language='c++')
216 EXT_MODULES.append(_mod)
217
218 # Add brtools_omp
219 _mod = Extension('qutip.cy.openmp.br_omp',
220 sources = ['qutip/cy/openmp/br_omp.pyx'],
221 include_dirs = [np.get_include()],
222 extra_compile_args=_compiler_flags,
223 extra_link_args=_link_flags,
224 language='c++')
225 EXT_MODULES.append(_mod)
226
227 # Add omp_sparse_utils
228 _mod = Extension('qutip.cy.openmp.omp_sparse_utils',
229 sources = ['qutip/cy/openmp/omp_sparse_utils.pyx'],
230 include_dirs = [np.get_include()],
231 extra_compile_args=_compiler_flags+omp_flags,
232 extra_link_args=omp_args+_link_flags,
233 language='c++')
234 EXT_MODULES.append(_mod)
235
236 # Add cqobjevo_omp
237 _mod = Extension('qutip.cy.openmp.cqobjevo_omp',
238 sources = ['qutip/cy/openmp/cqobjevo_omp.pyx'],
239 include_dirs = [np.get_include()],
240 extra_compile_args=_compiler_flags+omp_flags,
241 extra_link_args=omp_args,
242 language='c++')
243 EXT_MODULES.append(_mod)
244
245
246 # Remove -Wstrict-prototypes from cflags
247 import distutils.sysconfig
248 cfg_vars = distutils.sysconfig.get_config_vars()
249 if "CFLAGS" in cfg_vars:
250 cfg_vars["CFLAGS"] = cfg_vars["CFLAGS"].replace("-Wstrict-prototypes", "")
251
252
253 # Setup commands go here
254 setup(
255 name = NAME,
256 version = FULLVERSION,
257 packages = PACKAGES,
258 include_package_data=True,
259 include_dirs = INCLUDE_DIRS,
260 headers = HEADERS,
261 ext_modules = cythonize(EXT_MODULES),
262 cmdclass = {'build_ext': build_ext},
263 author = AUTHOR,
264 author_email = AUTHOR_EMAIL,
265 license = LICENSE,
266 description = DESCRIPTION,
267 long_description = LONG_DESCRIPTION,
268 keywords = KEYWORDS,
269 url = URL,
270 classifiers = CLASSIFIERS,
271 platforms = PLATFORMS,
272 requires = REQUIRES,
273 package_data = PACKAGE_DATA,
274 zip_safe = False,
275 install_requires=INSTALL_REQUIRES,
276 **EXTRA_KWARGS
277 )
278 _cite = """\
279 ==============================================================================
280 Installation complete
281 Please cite QuTiP in your publication.
282 ==============================================================================
283 For your convenience a bibtex reference can be easily generated using
284 `qutip.cite()`"""
285 print(_cite)
286
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -159,7 +159,7 @@
_compiler_flags = ['/w', '/Ox']
# Everything else
else:
- _compiler_flags = ['-w', '-O3', '-march=native', '-funroll-loops']
+ _compiler_flags = ['-w', '-O3', '-funroll-loops']
if sys.platform == 'darwin':
# These are needed for compiling on OSX 10.14+
_compiler_flags.append('-mmacosx-version-min=10.9')
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -159,7 +159,7 @@\n _compiler_flags = ['/w', '/Ox']\n # Everything else\n else:\n- _compiler_flags = ['-w', '-O3', '-march=native', '-funroll-loops']\n+ _compiler_flags = ['-w', '-O3', '-funroll-loops']\n if sys.platform == 'darwin':\n # These are needed for compiling on OSX 10.14+\n _compiler_flags.append('-mmacosx-version-min=10.9')\n", "issue": "Optimization flags in setup.py should be completely avoided\nHard-coding compiler flag `-march=native` in setup.py completely destroys possibility to set up Qutip on heterogeneous cluster. In general, it brings a lot of problems for people that don't have a good experience in debugging \"illegal instruction\" errors, that often happen, if you compile the module on different machine than you use.\r\n\r\nIf you are sure you need optimized build for localhost, you might use\r\n```\r\nexport CFLAGS=\"-O3 -march=native\"\r\nexport CXXFLAGS=\"$CFLAGS\"\r\npip install qutip\r\n```\r\ninstead or provide separate option for setup.py script.\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"QuTiP: The Quantum Toolbox in Python\n\nQuTiP is open-source software for simulating the dynamics of closed and open\nquantum systems. The QuTiP library depends on the excellent Numpy, Scipy, and\nCython numerical packages. In addition, graphical output is provided by\nMatplotlib. QuTiP aims to provide user-friendly and efficient numerical\nsimulations of a wide variety of quantum mechanical problems, including those\nwith Hamiltonians and/or collapse operators with arbitrary time-dependence,\ncommonly found in a wide range of physics applications. QuTiP is freely\navailable for use and/or modification on all common platforms. Being free of\nany licensing fees, QuTiP is ideal for exploring quantum mechanics in research\nas well as in the classroom.\n\"\"\"\n\nDOCLINES = __doc__.split('\\n')\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 4 - Beta\nIntended Audience :: Science/Research\nLicense :: OSI Approved :: BSD License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nTopic :: Scientific/Engineering\nOperating System :: MacOS\nOperating System :: POSIX\nOperating System :: Unix\nOperating System :: Microsoft :: Windows\n\"\"\"\n\n# import statements\nimport os\nimport sys\n# The following is required to get unit tests up and running.\n# If the user doesn't have, then that's OK, we'll just skip unit tests.\ntry:\n from setuptools import setup, Extension\n TEST_SUITE = 'nose.collector'\n TESTS_REQUIRE = ['nose']\n EXTRA_KWARGS = {\n 'test_suite': TEST_SUITE,\n 'tests_require': TESTS_REQUIRE\n }\nexcept:\n from distutils.core import setup\n from distutils.extension import Extension\n EXTRA_KWARGS = {}\n\ntry:\n import numpy as np\nexcept:\n np = None\n\nfrom Cython.Build import cythonize\nfrom Cython.Distutils import build_ext\n\n# all information about QuTiP goes here\nMAJOR = 4\nMINOR = 4\nMICRO = 0\nISRELEASED = False\nVERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)\nREQUIRES = ['numpy (>=1.8)', 'scipy (>=0.15)', 'cython (>=0.21)']\nINSTALL_REQUIRES = ['numpy>=1.8', 'scipy>=0.15', 'cython>=0.21']\nPACKAGES = ['qutip', 'qutip/ui', 'qutip/cy', 'qutip/cy/src',\n 'qutip/qip', 'qutip/qip/models',\n 'qutip/qip/algorithms', 'qutip/control', 'qutip/nonmarkov',\n 'qutip/_mkl', 'qutip/tests', 'qutip/legacy',\n 'qutip/cy/openmp', 'qutip/cy/openmp/src']\nPACKAGE_DATA = {\n 'qutip': ['configspec.ini'],\n 'qutip/tests': ['*.ini'],\n 'qutip/cy': ['*.pxi', '*.pxd', 
'*.pyx'],\n 'qutip/cy/src': ['*.cpp', '*.hpp'],\n 'qutip/control': ['*.pyx'],\n 'qutip/cy/openmp': ['*.pxd', '*.pyx'],\n 'qutip/cy/openmp/src': ['*.cpp', '*.hpp']\n}\n# If we're missing numpy, exclude import directories until we can\n# figure them out properly.\nINCLUDE_DIRS = [np.get_include()] if np is not None else []\n# ajgpitch Mar 2017:\n# This HEADERS did not work, but I will leave it in anyway, as it is supposed to.\n# I had to do the nasty thing with PACKAGES and PACKAGE_DATA above.\nHEADERS = ['qutip/cy/src/zspmv.hpp', 'qutip/cy/openmp/src/zspmv_openmp.hpp']\nNAME = \"qutip\"\nAUTHOR = (\"Alexander Pitchford, Paul D. Nation, Robert J. Johansson, \"\n \"Chris Granade, Arne Grimsmo\")\nAUTHOR_EMAIL = (\"[email protected], [email protected], \"\n \"[email protected], [email protected], \"\n \"[email protected]\")\nLICENSE = \"BSD\"\nDESCRIPTION = DOCLINES[0]\nLONG_DESCRIPTION = \"\\n\".join(DOCLINES[2:])\nKEYWORDS = \"quantum physics dynamics\"\nURL = \"http://qutip.org\"\nCLASSIFIERS = [_f for _f in CLASSIFIERS.split('\\n') if _f]\nPLATFORMS = [\"Linux\", \"Mac OSX\", \"Unix\", \"Windows\"]\n\n\ndef git_short_hash():\n try:\n git_str = \"+\" + os.popen('git log -1 --format=\"%h\"').read().strip()\n except:\n git_str = \"\"\n else:\n if git_str == '+': #fixes setuptools PEP issues with versioning\n git_str = ''\n return git_str\n\nFULLVERSION = VERSION\nif not ISRELEASED:\n FULLVERSION += '.dev'+str(MICRO)+git_short_hash()\n\n# NumPy's distutils reads in versions differently than\n# our fallback. To make sure that versions are added to\n# egg-info correctly, we need to add FULLVERSION to\n# EXTRA_KWARGS if NumPy wasn't imported correctly.\nif np is None:\n EXTRA_KWARGS['version'] = FULLVERSION\n\n\ndef write_version_py(filename='qutip/version.py'):\n cnt = \"\"\"\\\n# THIS FILE IS GENERATED FROM QUTIP SETUP.PY\nshort_version = '%(version)s'\nversion = '%(fullversion)s'\nrelease = %(isrelease)s\n\"\"\"\n a = open(filename, 'w')\n try:\n a.write(cnt % {'version': VERSION, 'fullversion':\n FULLVERSION, 'isrelease': str(ISRELEASED)})\n finally:\n a.close()\n\nlocal_path = os.path.dirname(os.path.abspath(sys.argv[0]))\nos.chdir(local_path)\nsys.path.insert(0, local_path)\nsys.path.insert(0, os.path.join(local_path, 'qutip')) # to retrive _version\n\n# always rewrite _version\nif os.path.exists('qutip/version.py'):\n os.remove('qutip/version.py')\n\nwrite_version_py()\n\n# Add Cython extensions here\ncy_exts = ['spmatfuncs', 'stochastic', 'sparse_utils', 'graph_utils', 'interpolate',\n 'spmath', 'heom', 'math', 'spconvert', 'ptrace', 'checks', 'brtools',\n 'brtools_checks', 'br_tensor', 'inter', 'cqobjevo', 'cqobjevo_factor', 'piqs']\n\n# Extra link args\n_link_flags = []\n\n# If on Win and Python version >= 3.5 and not in MSYS2 (i.e. 
Visual studio compile)\nif (sys.platform == 'win32' and int(str(sys.version_info[0])+str(sys.version_info[1])) >= 35\n and os.environ.get('MSYSTEM') is None):\n _compiler_flags = ['/w', '/Ox']\n# Everything else\nelse:\n _compiler_flags = ['-w', '-O3', '-march=native', '-funroll-loops']\n if sys.platform == 'darwin':\n # These are needed for compiling on OSX 10.14+\n _compiler_flags.append('-mmacosx-version-min=10.9')\n _link_flags.append('-mmacosx-version-min=10.9')\n\n\n\nEXT_MODULES =[]\n# Add Cython files from qutip/cy\nfor ext in cy_exts:\n _mod = Extension('qutip.cy.'+ext,\n sources = ['qutip/cy/'+ext+'.pyx', 'qutip/cy/src/zspmv.cpp'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags,\n extra_link_args=_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n\n# Add Cython files from qutip/control\n_mod = Extension('qutip.control.cy_grape',\n sources = ['qutip/control/cy_grape.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags,\n extra_link_args=_link_flags,\n language='c++')\nEXT_MODULES.append(_mod)\n\n\n# Add optional ext modules here\nif \"--with-openmp\" in sys.argv:\n sys.argv.remove(\"--with-openmp\")\n if (sys.platform == 'win32'\n and int(str(sys.version_info[0])+str(sys.version_info[1])) >= 35):\n omp_flags = ['/openmp']\n omp_args = []\n else:\n omp_flags = ['-fopenmp']\n omp_args = omp_flags\n _mod = Extension('qutip.cy.openmp.parfuncs',\n sources = ['qutip/cy/openmp/parfuncs.pyx',\n 'qutip/cy/openmp/src/zspmv_openmp.cpp'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags+omp_flags,\n extra_link_args=omp_args+_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n # Add benchmark pyx\n _mod = Extension('qutip.cy.openmp.benchmark',\n sources = ['qutip/cy/openmp/benchmark.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags,\n extra_link_args=_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n\n # Add brtools_omp\n _mod = Extension('qutip.cy.openmp.br_omp',\n sources = ['qutip/cy/openmp/br_omp.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags,\n extra_link_args=_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n\n # Add omp_sparse_utils\n _mod = Extension('qutip.cy.openmp.omp_sparse_utils',\n sources = ['qutip/cy/openmp/omp_sparse_utils.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags+omp_flags,\n extra_link_args=omp_args+_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n\n # Add cqobjevo_omp\n _mod = Extension('qutip.cy.openmp.cqobjevo_omp',\n sources = ['qutip/cy/openmp/cqobjevo_omp.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags+omp_flags,\n extra_link_args=omp_args,\n language='c++')\n EXT_MODULES.append(_mod)\n\n\n# Remove -Wstrict-prototypes from cflags\nimport distutils.sysconfig\ncfg_vars = distutils.sysconfig.get_config_vars()\nif \"CFLAGS\" in cfg_vars:\n cfg_vars[\"CFLAGS\"] = cfg_vars[\"CFLAGS\"].replace(\"-Wstrict-prototypes\", \"\")\n\n\n# Setup commands go here\nsetup(\n name = NAME,\n version = FULLVERSION,\n packages = PACKAGES,\n include_package_data=True,\n include_dirs = INCLUDE_DIRS,\n headers = HEADERS,\n ext_modules = cythonize(EXT_MODULES),\n cmdclass = {'build_ext': build_ext},\n author = AUTHOR,\n author_email = AUTHOR_EMAIL,\n license = LICENSE,\n description = DESCRIPTION,\n long_description = LONG_DESCRIPTION,\n keywords = KEYWORDS,\n url = URL,\n classifiers = CLASSIFIERS,\n platforms = PLATFORMS,\n requires = 
REQUIRES,\n package_data = PACKAGE_DATA,\n zip_safe = False,\n install_requires=INSTALL_REQUIRES,\n **EXTRA_KWARGS\n)\n_cite = \"\"\"\\\n==============================================================================\nInstallation complete\nPlease cite QuTiP in your publication.\n==============================================================================\nFor your convenience a bibtex reference can be easily generated using\n`qutip.cite()`\"\"\"\nprint(_cite)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\"\"\"QuTiP: The Quantum Toolbox in Python\n\nQuTiP is open-source software for simulating the dynamics of closed and open\nquantum systems. The QuTiP library depends on the excellent Numpy, Scipy, and\nCython numerical packages. In addition, graphical output is provided by\nMatplotlib. QuTiP aims to provide user-friendly and efficient numerical\nsimulations of a wide variety of quantum mechanical problems, including those\nwith Hamiltonians and/or collapse operators with arbitrary time-dependence,\ncommonly found in a wide range of physics applications. QuTiP is freely\navailable for use and/or modification on all common platforms. Being free of\nany licensing fees, QuTiP is ideal for exploring quantum mechanics in research\nas well as in the classroom.\n\"\"\"\n\nDOCLINES = __doc__.split('\\n')\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 4 - Beta\nIntended Audience :: Science/Research\nLicense :: OSI Approved :: BSD License\nProgramming Language :: Python\nProgramming Language :: Python :: 3\nTopic :: Scientific/Engineering\nOperating System :: MacOS\nOperating System :: POSIX\nOperating System :: Unix\nOperating System :: Microsoft :: Windows\n\"\"\"\n\n# import statements\nimport os\nimport sys\n# The following is required to get unit tests up and running.\n# If the user doesn't have, then that's OK, we'll just skip unit tests.\ntry:\n from setuptools import setup, Extension\n TEST_SUITE = 'nose.collector'\n TESTS_REQUIRE = ['nose']\n EXTRA_KWARGS = {\n 'test_suite': TEST_SUITE,\n 'tests_require': TESTS_REQUIRE\n }\nexcept:\n from distutils.core import setup\n from distutils.extension import Extension\n EXTRA_KWARGS = {}\n\ntry:\n import numpy as np\nexcept:\n np = None\n\nfrom Cython.Build import cythonize\nfrom Cython.Distutils import build_ext\n\n# all information about QuTiP goes here\nMAJOR = 4\nMINOR = 4\nMICRO = 0\nISRELEASED = False\nVERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)\nREQUIRES = ['numpy (>=1.8)', 'scipy (>=0.15)', 'cython (>=0.21)']\nINSTALL_REQUIRES = ['numpy>=1.8', 'scipy>=0.15', 'cython>=0.21']\nPACKAGES = ['qutip', 'qutip/ui', 'qutip/cy', 'qutip/cy/src',\n 'qutip/qip', 'qutip/qip/models',\n 'qutip/qip/algorithms', 'qutip/control', 'qutip/nonmarkov',\n 'qutip/_mkl', 'qutip/tests', 'qutip/legacy',\n 'qutip/cy/openmp', 'qutip/cy/openmp/src']\nPACKAGE_DATA = {\n 'qutip': ['configspec.ini'],\n 'qutip/tests': ['*.ini'],\n 'qutip/cy': ['*.pxi', '*.pxd', '*.pyx'],\n 'qutip/cy/src': ['*.cpp', '*.hpp'],\n 'qutip/control': ['*.pyx'],\n 'qutip/cy/openmp': ['*.pxd', '*.pyx'],\n 'qutip/cy/openmp/src': ['*.cpp', '*.hpp']\n}\n# If we're missing numpy, exclude import directories until we can\n# figure them out properly.\nINCLUDE_DIRS = [np.get_include()] if np is not None else []\n# ajgpitch Mar 2017:\n# This HEADERS did not work, but I will leave it in anyway, as it is supposed to.\n# I had to do the nasty thing with PACKAGES and PACKAGE_DATA above.\nHEADERS = ['qutip/cy/src/zspmv.hpp', 'qutip/cy/openmp/src/zspmv_openmp.hpp']\nNAME = 
\"qutip\"\nAUTHOR = (\"Alexander Pitchford, Paul D. Nation, Robert J. Johansson, \"\n \"Chris Granade, Arne Grimsmo\")\nAUTHOR_EMAIL = (\"[email protected], [email protected], \"\n \"[email protected], [email protected], \"\n \"[email protected]\")\nLICENSE = \"BSD\"\nDESCRIPTION = DOCLINES[0]\nLONG_DESCRIPTION = \"\\n\".join(DOCLINES[2:])\nKEYWORDS = \"quantum physics dynamics\"\nURL = \"http://qutip.org\"\nCLASSIFIERS = [_f for _f in CLASSIFIERS.split('\\n') if _f]\nPLATFORMS = [\"Linux\", \"Mac OSX\", \"Unix\", \"Windows\"]\n\n\ndef git_short_hash():\n try:\n git_str = \"+\" + os.popen('git log -1 --format=\"%h\"').read().strip()\n except:\n git_str = \"\"\n else:\n if git_str == '+': #fixes setuptools PEP issues with versioning\n git_str = ''\n return git_str\n\nFULLVERSION = VERSION\nif not ISRELEASED:\n FULLVERSION += '.dev'+str(MICRO)+git_short_hash()\n\n# NumPy's distutils reads in versions differently than\n# our fallback. To make sure that versions are added to\n# egg-info correctly, we need to add FULLVERSION to\n# EXTRA_KWARGS if NumPy wasn't imported correctly.\nif np is None:\n EXTRA_KWARGS['version'] = FULLVERSION\n\n\ndef write_version_py(filename='qutip/version.py'):\n cnt = \"\"\"\\\n# THIS FILE IS GENERATED FROM QUTIP SETUP.PY\nshort_version = '%(version)s'\nversion = '%(fullversion)s'\nrelease = %(isrelease)s\n\"\"\"\n a = open(filename, 'w')\n try:\n a.write(cnt % {'version': VERSION, 'fullversion':\n FULLVERSION, 'isrelease': str(ISRELEASED)})\n finally:\n a.close()\n\nlocal_path = os.path.dirname(os.path.abspath(sys.argv[0]))\nos.chdir(local_path)\nsys.path.insert(0, local_path)\nsys.path.insert(0, os.path.join(local_path, 'qutip')) # to retrive _version\n\n# always rewrite _version\nif os.path.exists('qutip/version.py'):\n os.remove('qutip/version.py')\n\nwrite_version_py()\n\n# Add Cython extensions here\ncy_exts = ['spmatfuncs', 'stochastic', 'sparse_utils', 'graph_utils', 'interpolate',\n 'spmath', 'heom', 'math', 'spconvert', 'ptrace', 'checks', 'brtools',\n 'brtools_checks', 'br_tensor', 'inter', 'cqobjevo', 'cqobjevo_factor', 'piqs']\n\n# Extra link args\n_link_flags = []\n\n# If on Win and Python version >= 3.5 and not in MSYS2 (i.e. 
Visual studio compile)\nif (sys.platform == 'win32' and int(str(sys.version_info[0])+str(sys.version_info[1])) >= 35\n and os.environ.get('MSYSTEM') is None):\n _compiler_flags = ['/w', '/Ox']\n# Everything else\nelse:\n _compiler_flags = ['-w', '-O3', '-funroll-loops']\n if sys.platform == 'darwin':\n # These are needed for compiling on OSX 10.14+\n _compiler_flags.append('-mmacosx-version-min=10.9')\n _link_flags.append('-mmacosx-version-min=10.9')\n\n\n\nEXT_MODULES =[]\n# Add Cython files from qutip/cy\nfor ext in cy_exts:\n _mod = Extension('qutip.cy.'+ext,\n sources = ['qutip/cy/'+ext+'.pyx', 'qutip/cy/src/zspmv.cpp'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags,\n extra_link_args=_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n\n# Add Cython files from qutip/control\n_mod = Extension('qutip.control.cy_grape',\n sources = ['qutip/control/cy_grape.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags,\n extra_link_args=_link_flags,\n language='c++')\nEXT_MODULES.append(_mod)\n\n\n# Add optional ext modules here\nif \"--with-openmp\" in sys.argv:\n sys.argv.remove(\"--with-openmp\")\n if (sys.platform == 'win32'\n and int(str(sys.version_info[0])+str(sys.version_info[1])) >= 35):\n omp_flags = ['/openmp']\n omp_args = []\n else:\n omp_flags = ['-fopenmp']\n omp_args = omp_flags\n _mod = Extension('qutip.cy.openmp.parfuncs',\n sources = ['qutip/cy/openmp/parfuncs.pyx',\n 'qutip/cy/openmp/src/zspmv_openmp.cpp'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags+omp_flags,\n extra_link_args=omp_args+_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n # Add benchmark pyx\n _mod = Extension('qutip.cy.openmp.benchmark',\n sources = ['qutip/cy/openmp/benchmark.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags,\n extra_link_args=_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n\n # Add brtools_omp\n _mod = Extension('qutip.cy.openmp.br_omp',\n sources = ['qutip/cy/openmp/br_omp.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags,\n extra_link_args=_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n\n # Add omp_sparse_utils\n _mod = Extension('qutip.cy.openmp.omp_sparse_utils',\n sources = ['qutip/cy/openmp/omp_sparse_utils.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags+omp_flags,\n extra_link_args=omp_args+_link_flags,\n language='c++')\n EXT_MODULES.append(_mod)\n\n # Add cqobjevo_omp\n _mod = Extension('qutip.cy.openmp.cqobjevo_omp',\n sources = ['qutip/cy/openmp/cqobjevo_omp.pyx'],\n include_dirs = [np.get_include()],\n extra_compile_args=_compiler_flags+omp_flags,\n extra_link_args=omp_args,\n language='c++')\n EXT_MODULES.append(_mod)\n\n\n# Remove -Wstrict-prototypes from cflags\nimport distutils.sysconfig\ncfg_vars = distutils.sysconfig.get_config_vars()\nif \"CFLAGS\" in cfg_vars:\n cfg_vars[\"CFLAGS\"] = cfg_vars[\"CFLAGS\"].replace(\"-Wstrict-prototypes\", \"\")\n\n\n# Setup commands go here\nsetup(\n name = NAME,\n version = FULLVERSION,\n packages = PACKAGES,\n include_package_data=True,\n include_dirs = INCLUDE_DIRS,\n headers = HEADERS,\n ext_modules = cythonize(EXT_MODULES),\n cmdclass = {'build_ext': build_ext},\n author = AUTHOR,\n author_email = AUTHOR_EMAIL,\n license = LICENSE,\n description = DESCRIPTION,\n long_description = LONG_DESCRIPTION,\n keywords = KEYWORDS,\n url = URL,\n classifiers = CLASSIFIERS,\n platforms = PLATFORMS,\n requires = REQUIRES,\n 
package_data = PACKAGE_DATA,\n zip_safe = False,\n install_requires=INSTALL_REQUIRES,\n **EXTRA_KWARGS\n)\n_cite = \"\"\"\\\n==============================================================================\nInstallation complete\nPlease cite QuTiP in your publication.\n==============================================================================\nFor your convenience a bibtex reference can be easily generated using\n`qutip.cite()`\"\"\"\nprint(_cite)\n", "path": "setup.py"}]}
| 3,772 | 137 |
gh_patches_debug_15742
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-939
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hyphens in the search query are normalized differently than in ElasticSearch
If I have the substring "fooo-baar" in the text of one of the indexed fields for a Page-derived model instance, I'd expect to see that page when I search for "fooo-baar", but I don't.
This seems to be because `wagtailsearch` normalizes "fooo-baar" to "fooobaar", while ElasticSearch treats the hyphen as a whitespace character.
Failing test: add
```
"Hello-world",
```
to `test_queries:45`.
Suggested fix: normalize to "fooo baar" instead.
--- END ISSUE ---
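The issue proposes normalising `"fooo-baar"` to `"fooo baar"`, while the golden diff later in this row instead removes the backend-side `normalise_query_string` call and its import, leaving tokenisation to Elasticsearch. The sketch below illustrates only the suggested normalisation; the function name `normalise_for_search` and its body are assumptions of this illustration, not Wagtail's actual implementation.

```python
import re

def normalise_for_search(query_string):
    # Treat hyphens as whitespace so "fooo-baar" becomes "fooo baar",
    # matching how Elasticsearch's standard analyser tokenises the indexed text.
    query_string = query_string.replace('-', ' ')
    query_string = re.sub(r'\s+', ' ', query_string)
    return query_string.strip().lower()

assert normalise_for_search("Hello-world") == "hello world"
```

Either route works because the standard analyser already splits `fooo-baar` into the tokens `fooo` and `baar` at index time; the failure came from the query side collapsing the hyphen instead of splitting on it.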
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailsearch/backends/base.py`
Content:
```
1 from six import text_type
2
3 from django.db import models
4 from django.db.models.query import QuerySet
5 from django.db.models.lookups import Lookup
6 from django.db.models.sql.where import SubqueryConstraint, WhereNode
7 from django.core.exceptions import ImproperlyConfigured
8
9 from wagtail.wagtailsearch.index import class_is_indexed
10 from wagtail.wagtailsearch.utils import normalise_query_string
11
12
13 class FilterError(Exception):
14 pass
15
16
17 class FieldError(Exception):
18 pass
19
20
21 class BaseSearchQuery(object):
22 def __init__(self, queryset, query_string, fields=None):
23 self.queryset = queryset
24 self.query_string = query_string
25 self.fields = fields
26
27 def _get_searchable_field(self, field_attname):
28 # Get field
29 field = dict(
30 (field.get_attname(self.queryset.model), field)
31 for field in self.queryset.model.get_searchable_search_fields()
32 ).get(field_attname, None)
33
34 return field
35
36 def _get_filterable_field(self, field_attname):
37 # Get field
38 field = dict(
39 (field.get_attname(self.queryset.model), field)
40 for field in self.queryset.model.get_filterable_search_fields()
41 ).get(field_attname, None)
42
43 return field
44
45 def _process_lookup(self, field, lookup, value):
46 raise NotImplementedError
47
48 def _connect_filters(self, filters, connector, negated):
49 raise NotImplementedError
50
51 def _process_filter(self, field_attname, lookup, value):
52 # Get the field
53 field = self._get_filterable_field(field_attname)
54
55 if field is None:
56 raise FieldError('Cannot filter search results with field "' + field_attname + '". Please add index.FilterField(\'' + field_attname + '\') to ' + self.queryset.model.__name__ + '.search_fields.')
57
58 # Process the lookup
59 result = self._process_lookup(field, lookup, value)
60
61 if result is None:
62 raise FilterError('Could not apply filter on search results: "' + field_attname + '__' + lookup + ' = ' + text_type(value) + '". Lookup "' + lookup + '"" not recognosed.')
63
64 return result
65
66 def _get_filters_from_where_node(self, where_node):
67 # Check if this is a leaf node
68 if isinstance(where_node, Lookup):
69 field_attname = where_node.lhs.target.attname
70 lookup = where_node.lookup_name
71 value = where_node.rhs
72
73 # Process the filter
74 return self._process_filter(field_attname, lookup, value)
75
76 elif isinstance(where_node, SubqueryConstraint):
77 raise FilterError('Could not apply filter on search results: Subqueries are not allowed.')
78
79 elif isinstance(where_node, WhereNode):
80 # Get child filters
81 connector = where_node.connector
82 child_filters = [self._get_filters_from_where_node(child) for child in where_node.children]
83 child_filters = [child_filter for child_filter in child_filters if child_filter]
84
85 return self._connect_filters(child_filters, connector, where_node.negated)
86
87 else:
88 raise FilterError('Could not apply filter on search results: Unknown where node: ' + str(type(where_node)))
89
90 def _get_filters_from_queryset(self):
91 return self._get_filters_from_where_node(self.queryset.query.where)
92
93
94 class BaseSearchResults(object):
95 def __init__(self, backend, query, prefetch_related=None):
96 self.backend = backend
97 self.query = query
98 self.prefetch_related = prefetch_related
99 self.start = 0
100 self.stop = None
101 self._results_cache = None
102 self._count_cache = None
103
104 def _set_limits(self, start=None, stop=None):
105 if stop is not None:
106 if self.stop is not None:
107 self.stop = min(self.stop, self.start + stop)
108 else:
109 self.stop = self.start + stop
110
111 if start is not None:
112 if self.stop is not None:
113 self.start = min(self.stop, self.start + start)
114 else:
115 self.start = self.start + start
116
117 def _clone(self):
118 klass = self.__class__
119 new = klass(self.backend, self.query, prefetch_related=self.prefetch_related)
120 new.start = self.start
121 new.stop = self.stop
122 return new
123
124 def _do_search(self):
125 raise NotImplementedError
126
127 def _do_count(self):
128 raise NotImplementedError
129
130 def results(self):
131 if self._results_cache is None:
132 self._results_cache = self._do_search()
133 return self._results_cache
134
135 def count(self):
136 if self._count_cache is None:
137 if self._results_cache is not None:
138 self._count_cache = len(self._results_cache)
139 else:
140 self._count_cache = self._do_count()
141 return self._count_cache
142
143 def __getitem__(self, key):
144 new = self._clone()
145
146 if isinstance(key, slice):
147 # Set limits
148 start = int(key.start) if key.start else None
149 stop = int(key.stop) if key.stop else None
150 new._set_limits(start, stop)
151
152 # Copy results cache
153 if self._results_cache is not None:
154 new._results_cache = self._results_cache[key]
155
156 return new
157 else:
158 if self._results_cache is not None:
159 return self._results_cache[key]
160
161 new.start = key
162 new.stop = key + 1
163 return list(new)[0]
164
165 def __iter__(self):
166 return iter(self.results())
167
168 def __len__(self):
169 return len(self.results())
170
171 def __repr__(self):
172 data = list(self[:21])
173 if len(data) > 20:
174 data[-1] = "...(remaining elements truncated)..."
175 return repr(data)
176
177
178 class BaseSearch(object):
179 def __init__(self, params):
180 pass
181
182 def reset_index(self):
183 raise NotImplementedError
184
185 def add_type(self, model):
186 raise NotImplementedError
187
188 def refresh_index(self):
189 raise NotImplementedError
190
191 def add(self, obj):
192 raise NotImplementedError
193
194 def add_bulk(self, model, obj_list):
195 raise NotImplementedError
196
197 def delete(self, obj):
198 raise NotImplementedError
199
200 def _search(self, queryset, query_string, fields=None):
201 raise NotImplementedError
202
203 def search(self, query_string, model_or_queryset, fields=None, filters=None, prefetch_related=None):
204 # Find model/queryset
205 if isinstance(model_or_queryset, QuerySet):
206 model = model_or_queryset.model
207 queryset = model_or_queryset
208 else:
209 model = model_or_queryset
210 queryset = model_or_queryset.objects.all()
211
212 # Model must be a class that is in the index
213 if not class_is_indexed(model):
214 return []
215
216 # Normalise query string
217 if query_string is not None:
218 query_string = normalise_query_string(query_string)
219
220 # Check that theres still a query string after the clean up
221 if query_string == "":
222 return []
223
224 # Apply filters to queryset
225 if filters:
226 queryset = queryset.filter(**filters)
227
228 # Prefetch related
229 if prefetch_related:
230 for prefetch in prefetch_related:
231 queryset = queryset.prefetch_related(prefetch)
232
233 # Search
234 return self._search(queryset, query_string, fields=fields)
235
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/wagtailsearch/backends/base.py b/wagtail/wagtailsearch/backends/base.py
--- a/wagtail/wagtailsearch/backends/base.py
+++ b/wagtail/wagtailsearch/backends/base.py
@@ -7,7 +7,6 @@
from django.core.exceptions import ImproperlyConfigured
from wagtail.wagtailsearch.index import class_is_indexed
-from wagtail.wagtailsearch.utils import normalise_query_string
class FilterError(Exception):
@@ -213,10 +212,6 @@
if not class_is_indexed(model):
return []
- # Normalise query string
- if query_string is not None:
- query_string = normalise_query_string(query_string)
-
# Check that theres still a query string after the clean up
if query_string == "":
return []
|
{"golden_diff": "diff --git a/wagtail/wagtailsearch/backends/base.py b/wagtail/wagtailsearch/backends/base.py\n--- a/wagtail/wagtailsearch/backends/base.py\n+++ b/wagtail/wagtailsearch/backends/base.py\n@@ -7,7 +7,6 @@\n from django.core.exceptions import ImproperlyConfigured\n \n from wagtail.wagtailsearch.index import class_is_indexed\n-from wagtail.wagtailsearch.utils import normalise_query_string\n \n \n class FilterError(Exception):\n@@ -213,10 +212,6 @@\n if not class_is_indexed(model):\n return []\n \n- # Normalise query string\n- if query_string is not None:\n- query_string = normalise_query_string(query_string)\n-\n # Check that theres still a query string after the clean up\n if query_string == \"\":\n return []\n", "issue": "Hyphens in search query are normalized differently than in ElasticSearch\nIf I have the substring \"fooo-baar\" in the text of one of the indexed fields for a Page-derived model instance, I'd expect to be see that page when I search for \"fooo-baar\", but I don't.\n\nThis seems to be because `wagtailsearch` normalizes \"fooo-baar\" to \"fooobaar\", while ElasticSearch treats the hyphen as a whitespace character.\n\nFailing test: add \n\n```\n\"Hello-world\",\n```\n\nto `test_queries:45`.\n\nSuggested fix: normalize to \"fooo baar\" instead.\n\n", "before_files": [{"content": "from six import text_type\n\nfrom django.db import models\nfrom django.db.models.query import QuerySet\nfrom django.db.models.lookups import Lookup\nfrom django.db.models.sql.where import SubqueryConstraint, WhereNode\nfrom django.core.exceptions import ImproperlyConfigured\n\nfrom wagtail.wagtailsearch.index import class_is_indexed\nfrom wagtail.wagtailsearch.utils import normalise_query_string\n\n\nclass FilterError(Exception):\n pass\n\n\nclass FieldError(Exception):\n pass\n\n\nclass BaseSearchQuery(object):\n def __init__(self, queryset, query_string, fields=None):\n self.queryset = queryset\n self.query_string = query_string\n self.fields = fields\n\n def _get_searchable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_searchable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _get_filterable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_filterable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _process_lookup(self, field, lookup, value):\n raise NotImplementedError\n\n def _connect_filters(self, filters, connector, negated):\n raise NotImplementedError\n\n def _process_filter(self, field_attname, lookup, value):\n # Get the field\n field = self._get_filterable_field(field_attname)\n\n if field is None:\n raise FieldError('Cannot filter search results with field \"' + field_attname + '\". Please add index.FilterField(\\'' + field_attname + '\\') to ' + self.queryset.model.__name__ + '.search_fields.')\n\n # Process the lookup\n result = self._process_lookup(field, lookup, value)\n\n if result is None:\n raise FilterError('Could not apply filter on search results: \"' + field_attname + '__' + lookup + ' = ' + text_type(value) + '\". 
Lookup \"' + lookup + '\"\" not recognosed.')\n\n return result\n\n def _get_filters_from_where_node(self, where_node):\n # Check if this is a leaf node\n if isinstance(where_node, Lookup):\n field_attname = where_node.lhs.target.attname\n lookup = where_node.lookup_name\n value = where_node.rhs\n\n # Process the filter\n return self._process_filter(field_attname, lookup, value)\n\n elif isinstance(where_node, SubqueryConstraint):\n raise FilterError('Could not apply filter on search results: Subqueries are not allowed.')\n\n elif isinstance(where_node, WhereNode):\n # Get child filters\n connector = where_node.connector\n child_filters = [self._get_filters_from_where_node(child) for child in where_node.children]\n child_filters = [child_filter for child_filter in child_filters if child_filter]\n\n return self._connect_filters(child_filters, connector, where_node.negated)\n\n else:\n raise FilterError('Could not apply filter on search results: Unknown where node: ' + str(type(where_node)))\n\n def _get_filters_from_queryset(self):\n return self._get_filters_from_where_node(self.queryset.query.where)\n\n\nclass BaseSearchResults(object):\n def __init__(self, backend, query, prefetch_related=None):\n self.backend = backend\n self.query = query\n self.prefetch_related = prefetch_related\n self.start = 0\n self.stop = None\n self._results_cache = None\n self._count_cache = None\n\n def _set_limits(self, start=None, stop=None):\n if stop is not None:\n if self.stop is not None:\n self.stop = min(self.stop, self.start + stop)\n else:\n self.stop = self.start + stop\n\n if start is not None:\n if self.stop is not None:\n self.start = min(self.stop, self.start + start)\n else:\n self.start = self.start + start\n\n def _clone(self):\n klass = self.__class__\n new = klass(self.backend, self.query, prefetch_related=self.prefetch_related)\n new.start = self.start\n new.stop = self.stop\n return new\n\n def _do_search(self):\n raise NotImplementedError\n\n def _do_count(self):\n raise NotImplementedError\n\n def results(self):\n if self._results_cache is None:\n self._results_cache = self._do_search()\n return self._results_cache\n\n def count(self):\n if self._count_cache is None:\n if self._results_cache is not None:\n self._count_cache = len(self._results_cache)\n else:\n self._count_cache = self._do_count()\n return self._count_cache\n\n def __getitem__(self, key):\n new = self._clone()\n\n if isinstance(key, slice):\n # Set limits\n start = int(key.start) if key.start else None\n stop = int(key.stop) if key.stop else None\n new._set_limits(start, stop)\n\n # Copy results cache\n if self._results_cache is not None:\n new._results_cache = self._results_cache[key]\n\n return new\n else:\n if self._results_cache is not None:\n return self._results_cache[key]\n\n new.start = key\n new.stop = key + 1\n return list(new)[0]\n\n def __iter__(self):\n return iter(self.results())\n\n def __len__(self):\n return len(self.results())\n\n def __repr__(self):\n data = list(self[:21])\n if len(data) > 20:\n data[-1] = \"...(remaining elements truncated)...\"\n return repr(data)\n\n\nclass BaseSearch(object):\n def __init__(self, params):\n pass\n\n def reset_index(self):\n raise NotImplementedError\n\n def add_type(self, model):\n raise NotImplementedError\n\n def refresh_index(self):\n raise NotImplementedError\n\n def add(self, obj):\n raise NotImplementedError\n\n def add_bulk(self, model, obj_list):\n raise NotImplementedError\n\n def delete(self, obj):\n raise NotImplementedError\n\n def _search(self, 
queryset, query_string, fields=None):\n raise NotImplementedError\n\n def search(self, query_string, model_or_queryset, fields=None, filters=None, prefetch_related=None):\n # Find model/queryset\n if isinstance(model_or_queryset, QuerySet):\n model = model_or_queryset.model\n queryset = model_or_queryset\n else:\n model = model_or_queryset\n queryset = model_or_queryset.objects.all()\n\n # Model must be a class that is in the index\n if not class_is_indexed(model):\n return []\n\n # Normalise query string\n if query_string is not None:\n query_string = normalise_query_string(query_string)\n\n # Check that theres still a query string after the clean up\n if query_string == \"\":\n return []\n\n # Apply filters to queryset\n if filters:\n queryset = queryset.filter(**filters)\n\n # Prefetch related\n if prefetch_related:\n for prefetch in prefetch_related:\n queryset = queryset.prefetch_related(prefetch)\n\n # Search\n return self._search(queryset, query_string, fields=fields)\n", "path": "wagtail/wagtailsearch/backends/base.py"}], "after_files": [{"content": "from six import text_type\n\nfrom django.db import models\nfrom django.db.models.query import QuerySet\nfrom django.db.models.lookups import Lookup\nfrom django.db.models.sql.where import SubqueryConstraint, WhereNode\nfrom django.core.exceptions import ImproperlyConfigured\n\nfrom wagtail.wagtailsearch.index import class_is_indexed\n\n\nclass FilterError(Exception):\n pass\n\n\nclass FieldError(Exception):\n pass\n\n\nclass BaseSearchQuery(object):\n def __init__(self, queryset, query_string, fields=None):\n self.queryset = queryset\n self.query_string = query_string\n self.fields = fields\n\n def _get_searchable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_searchable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _get_filterable_field(self, field_attname):\n # Get field\n field = dict(\n (field.get_attname(self.queryset.model), field)\n for field in self.queryset.model.get_filterable_search_fields()\n ).get(field_attname, None)\n\n return field\n\n def _process_lookup(self, field, lookup, value):\n raise NotImplementedError\n\n def _connect_filters(self, filters, connector, negated):\n raise NotImplementedError\n\n def _process_filter(self, field_attname, lookup, value):\n # Get the field\n field = self._get_filterable_field(field_attname)\n\n if field is None:\n raise FieldError('Cannot filter search results with field \"' + field_attname + '\". Please add index.FilterField(\\'' + field_attname + '\\') to ' + self.queryset.model.__name__ + '.search_fields.')\n\n # Process the lookup\n result = self._process_lookup(field, lookup, value)\n\n if result is None:\n raise FilterError('Could not apply filter on search results: \"' + field_attname + '__' + lookup + ' = ' + text_type(value) + '\". 
Lookup \"' + lookup + '\"\" not recognosed.')\n\n return result\n\n def _get_filters_from_where_node(self, where_node):\n # Check if this is a leaf node\n if isinstance(where_node, Lookup):\n field_attname = where_node.lhs.target.attname\n lookup = where_node.lookup_name\n value = where_node.rhs\n\n # Process the filter\n return self._process_filter(field_attname, lookup, value)\n\n elif isinstance(where_node, SubqueryConstraint):\n raise FilterError('Could not apply filter on search results: Subqueries are not allowed.')\n\n elif isinstance(where_node, WhereNode):\n # Get child filters\n connector = where_node.connector\n child_filters = [self._get_filters_from_where_node(child) for child in where_node.children]\n child_filters = [child_filter for child_filter in child_filters if child_filter]\n\n return self._connect_filters(child_filters, connector, where_node.negated)\n\n else:\n raise FilterError('Could not apply filter on search results: Unknown where node: ' + str(type(where_node)))\n\n def _get_filters_from_queryset(self):\n return self._get_filters_from_where_node(self.queryset.query.where)\n\n\nclass BaseSearchResults(object):\n def __init__(self, backend, query, prefetch_related=None):\n self.backend = backend\n self.query = query\n self.prefetch_related = prefetch_related\n self.start = 0\n self.stop = None\n self._results_cache = None\n self._count_cache = None\n\n def _set_limits(self, start=None, stop=None):\n if stop is not None:\n if self.stop is not None:\n self.stop = min(self.stop, self.start + stop)\n else:\n self.stop = self.start + stop\n\n if start is not None:\n if self.stop is not None:\n self.start = min(self.stop, self.start + start)\n else:\n self.start = self.start + start\n\n def _clone(self):\n klass = self.__class__\n new = klass(self.backend, self.query, prefetch_related=self.prefetch_related)\n new.start = self.start\n new.stop = self.stop\n return new\n\n def _do_search(self):\n raise NotImplementedError\n\n def _do_count(self):\n raise NotImplementedError\n\n def results(self):\n if self._results_cache is None:\n self._results_cache = self._do_search()\n return self._results_cache\n\n def count(self):\n if self._count_cache is None:\n if self._results_cache is not None:\n self._count_cache = len(self._results_cache)\n else:\n self._count_cache = self._do_count()\n return self._count_cache\n\n def __getitem__(self, key):\n new = self._clone()\n\n if isinstance(key, slice):\n # Set limits\n start = int(key.start) if key.start else None\n stop = int(key.stop) if key.stop else None\n new._set_limits(start, stop)\n\n # Copy results cache\n if self._results_cache is not None:\n new._results_cache = self._results_cache[key]\n\n return new\n else:\n if self._results_cache is not None:\n return self._results_cache[key]\n\n new.start = key\n new.stop = key + 1\n return list(new)[0]\n\n def __iter__(self):\n return iter(self.results())\n\n def __len__(self):\n return len(self.results())\n\n def __repr__(self):\n data = list(self[:21])\n if len(data) > 20:\n data[-1] = \"...(remaining elements truncated)...\"\n return repr(data)\n\n\nclass BaseSearch(object):\n def __init__(self, params):\n pass\n\n def reset_index(self):\n raise NotImplementedError\n\n def add_type(self, model):\n raise NotImplementedError\n\n def refresh_index(self):\n raise NotImplementedError\n\n def add(self, obj):\n raise NotImplementedError\n\n def add_bulk(self, model, obj_list):\n raise NotImplementedError\n\n def delete(self, obj):\n raise NotImplementedError\n\n def _search(self, 
queryset, query_string, fields=None):\n raise NotImplementedError\n\n def search(self, query_string, model_or_queryset, fields=None, filters=None, prefetch_related=None):\n # Find model/queryset\n if isinstance(model_or_queryset, QuerySet):\n model = model_or_queryset.model\n queryset = model_or_queryset\n else:\n model = model_or_queryset\n queryset = model_or_queryset.objects.all()\n\n # Model must be a class that is in the index\n if not class_is_indexed(model):\n return []\n\n # Check that theres still a query string after the clean up\n if query_string == \"\":\n return []\n\n # Apply filters to queryset\n if filters:\n queryset = queryset.filter(**filters)\n\n # Prefetch related\n if prefetch_related:\n for prefetch in prefetch_related:\n queryset = queryset.prefetch_related(prefetch)\n\n # Search\n return self._search(queryset, query_string, fields=fields)\n", "path": "wagtail/wagtailsearch/backends/base.py"}]}
| 2,605 | 192 |
gh_patches_debug_27176
|
rasdani/github-patches
|
git_diff
|
microsoft__hi-ml-717
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MockHistoDataGenerator generates corrupted tiff files
Update tiffwriter arguments after upgrade in #691
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py`
Content:
```
1 # ------------------------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.
4 # ------------------------------------------------------------------------------------------
5 from enum import Enum
6 from pathlib import Path
7 from typing import Any, Optional, Tuple, List, Union
8
9 import numpy as np
10 import pandas as pd
11 import torch
12 from tifffile import TiffWriter
13 from torch import Tensor
14 from health_cpath.datasets.panda_dataset import PandaDataset
15 from testhisto.mocks.base_data_generator import MockHistoDataGenerator, MockHistoDataType, PANDA_N_CLASSES
16
17
18 class TilesPositioningType(Enum):
19 DIAGONAL = 0
20 RANDOM = 1
21
22
23 class MockPandaSlidesGenerator(MockHistoDataGenerator):
24 """Generator class to create mock WSI on the fly.
25 If tiles positioning is diagonal, a mock WSI resembles to:
26 [** ]
27 [ ** ]
28 [ ** ]
29 [ **]
30 where * represents 2 tiles stitched along the Y axis.
31 If tiles positioning is random, tiles are positioned randomly on the WSI grid.
32 """
33
34 ISUP_GRADE = "isup_grade"
35
36 def __init__(
37 self,
38 n_levels: int = 3,
39 n_repeat_diag: int = 4,
40 n_repeat_tile: int = 2,
41 background_val: Union[int, float] = 255,
42 tiles_pos_type: TilesPositioningType = TilesPositioningType.DIAGONAL,
43 n_tiles_list: Optional[List[int]] = None,
44 **kwargs: Any,
45 ) -> None:
46 """
47 :param n_levels: Number of levels for multi resolution WSI.
48 :param n_repeat_diag: Number of repeat time along the diagonal axis, defaults to 4.
49 :param n_repeat_tile: Number of repeat times of a tile along both Y and X axes, defaults to 2.
50 :param background_val: A value to assign to the background, defaults to 255.
51 :param tiles_pos_type: The tiles positioning type to define how tiles should be positioned within the WSI grid,
52 defaults to TilesPositioningType.DIAGONAL.
53 :param n_tiles_list: A list to use different n_tiles per slide for randomly positioned tiles.
54 :param kwargs: Same params passed to MockHistoDataGenerator.
55 """
56 super().__init__(**kwargs)
57
58 self.n_levels = n_levels
59 self.n_repeat_diag = n_repeat_diag
60 self.n_repeat_tile = n_repeat_tile
61 self.background_val = background_val
62 self.tiles_pos_type = tiles_pos_type
63
64 self.step_size = self.tile_size * self.n_repeat_tile
65 self._dtype = np.uint8 if type(background_val) == int else np.float32
66 self.img_size: int = self.n_repeat_diag * self.n_repeat_tile * self.tile_size
67 self.n_tiles_list = n_tiles_list
68
69 if self.n_tiles_list:
70 assert len(self.n_tiles_list) == self.n_slides, "n_tiles_list length should be equal to n_slides"
71 assert self.tiles_pos_type == TilesPositioningType.RANDOM, "different n_tiles enabled only for randomly "
72 "positionned tiles."
73
74 def validate(self) -> None:
75 assert (
76 self.n_slides >= PANDA_N_CLASSES
77 ), f"The number of slides should be >= PANDA_N_CLASSES (i.e., {PANDA_N_CLASSES})"
78
79 def create_mock_metadata_dataframe(self) -> pd.DataFrame:
80 """Create a mock dataframe with random metadata."""
81 isup_grades = np.tile(list(self.ISUP_GRADE_MAPPING.keys()), self.n_slides // PANDA_N_CLASSES + 1,)
82 mock_metadata: dict = {
83 col: [] for col in [PandaDataset.SLIDE_ID_COLUMN, PandaDataset.MASK_COLUMN, *PandaDataset.METADATA_COLUMNS]
84 }
85 for slide_id in range(self.n_slides):
86 mock_metadata[PandaDataset.SLIDE_ID_COLUMN].append(f"_{slide_id}")
87 mock_metadata[PandaDataset.MASK_COLUMN].append(f"_{slide_id}_mask")
88 mock_metadata[self.DATA_PROVIDER].append(np.random.choice(self.DATA_PROVIDERS_VALUES))
89 mock_metadata[self.ISUP_GRADE].append(isup_grades[slide_id])
90 mock_metadata[self.GLEASON_SCORE].append(np.random.choice(self.ISUP_GRADE_MAPPING[isup_grades[slide_id]]))
91 df = pd.DataFrame(data=mock_metadata)
92 csv_filename = self.dest_data_path / PandaDataset.DEFAULT_CSV_FILENAME
93 df.to_csv(csv_filename, index=False)
94
95 def create_mock_wsi(self, tiles: Tensor) -> Tuple[np.ndarray, Optional[np.ndarray]]:
96 if self.tiles_pos_type == TilesPositioningType.DIAGONAL:
97 return self._create_wsi_from_stitched_tiles_along_diagonal_axis(tiles)
98 elif self.tiles_pos_type == TilesPositioningType.RANDOM:
99 return self._create_wsi_from_randomly_positioned_tiles(tiles), None
100 else:
101 raise NotImplementedError
102
103 def _create_wsi_from_stitched_tiles_along_diagonal_axis(self, tiles: Tensor) -> Tuple[np.ndarray, np.ndarray]:
104 """Create a whole slide image by stitching tiles along the diagonal axis.
105
106 :param tiles: A tensor of tiles of shape (n_tiles, n_channels, tile_size, tile_size).
107 :return: returns a wsi of shape (img_size, img_size, n_channels) and the tiles used to create it.
108 The image is in channels_last format so that it can save by TiffWriter.
109 """
110 mock_image = np.full(
111 shape=(self.n_channels, self.img_size, self.img_size), fill_value=self.background_val, dtype=self._dtype
112 )
113 dump_tiles = []
114 for i in range(self.n_repeat_diag):
115 if self.mock_type == MockHistoDataType.PATHMNIST:
116 if i == 0 or self.n_tiles > 1:
117 tile = (
118 (tiles[i % self.n_tiles].numpy()).astype(self._dtype)
119 if self._dtype == np.uint8
120 else tiles[i % self.n_tiles].numpy()
121 )
122 # fill the square diagonal with tile repeated n_repeat_tile times along X and Y axis.
123 fill_square = np.tile(tile, (self.n_repeat_tile, self.n_repeat_tile))
124 dump_tiles.append(tile)
125
126 elif self.mock_type == MockHistoDataType.FAKE:
127 if i == 0 or self.n_tiles > 1:
128 # pick a random fake value to fill in the square diagonal.
129 fill_square = np.random.uniform(0, self.background_val / (self.n_repeat_diag + 1) * (i + 1))
130 dump_tiles.append(
131 np.full(
132 shape=(self.n_channels, self.tile_size, self.tile_size),
133 fill_value=fill_square,
134 dtype=self._dtype,
135 )
136 )
137 else:
138 raise NotImplementedError
139 mock_image[
140 :, self.step_size * i: self.step_size * (i + 1), self.step_size * i: self.step_size * (i + 1)
141 ] = fill_square
142 return np.transpose(mock_image, (1, 2, 0)), np.array(dump_tiles) # switch to channels_last.
143
144 def _create_wsi_from_randomly_positioned_tiles(self, tiles: Tensor) -> np.ndarray:
145 """Create a whole slide image by positioning tiles randomly in the whole slide image grid.
146
147 :param tiles: A tensor of tiles of shape (n_tiles, n_channels, tile_size, tile_size).
148 :return: returns a wsi of shape (img_size, img_size, n_channels) in channels_last format so that it can save by
149 TiffWriter.
150 """
151 mock_image = np.full(
152 shape=(self.n_channels, self.img_size, self.img_size), fill_value=self.background_val, dtype=self._dtype
153 )
154
155 n_tiles_side = self.img_size // self.tile_size
156 total_n_tiles = n_tiles_side ** 2
157 coords = [
158 (k // n_tiles_side, k % n_tiles_side)
159 for k in np.random.choice(total_n_tiles, size=self.n_tiles, replace=False)
160 ]
161 for i in range(self.n_tiles):
162 x, y = self.tile_size * np.array(coords[i])
163 if self.mock_type == MockHistoDataType.PATHMNIST:
164 new_tile = tiles[i].numpy()
165 elif self.mock_type == MockHistoDataType.FAKE:
166 new_tile = np.random.uniform(0, self.background_val / (self.n_repeat_diag + 1) * (i + 1))
167 else:
168 raise NotImplementedError
169 mock_image[:, x: x + self.tile_size, y: y + self.tile_size] = new_tile
170 return np.transpose(mock_image, (1, 2, 0))
171
172 @staticmethod
173 def _save_mock_wsi_as_tiff_file(file_path: Path, wsi_levels: List[np.ndarray]) -> None:
174 """Save a mock whole slide image as a tiff file of pyramidal levels.
175 Warning: this function expects images to be in channels_last format (H, W, C).
176
177 :param file_name: The tiff file name path.
178 :param wsi_levels: List of whole slide images of different resolution levels in channels_last format.
179 """
180 with TiffWriter(file_path, bigtiff=True) as tif:
181 options = dict(photometric="rgb", compression="zlib")
182 for i, wsi_level in enumerate(wsi_levels):
183 # the subfiletype parameter is a bitfield that determines if the wsi_level is a reduced version of
184 # another image.
185 tif.write(wsi_level, **options, subfiletype=int(i > 0))
186
187 def _create_multi_resolution_wsi(self, mock_image: np.ndarray) -> List[np.ndarray]:
188 """Create multi resolution versions of a mock image via 2 factor downsampling.
189
190 :param mock_image: A mock image in channels_last format (H, W, 3).
191 :return: Returns a list of n_levels downsampled versions of the original mock image.
192 """
193 levels = [mock_image[:: 2 ** i, :: 2 ** i] for i in range(self.n_levels)]
194 return levels
195
196 def generate_mock_histo_data(self) -> None:
197 """Create mock wsi and save them as tiff files"""
198 iterator = iter(self.dataloader) if self.dataloader else None
199
200 slide_dir = self.dest_data_path / "train_images"
201 slide_dir.mkdir(parents=True, exist_ok=True)
202 tile_dir = self.dest_data_path / "dump_tiles"
203 tile_dir.mkdir(parents=True, exist_ok=True)
204
205 for slide_counter in range(self.n_slides):
206
207 if self.n_tiles_list:
208 self.total_tiles = self.n_tiles_list[slide_counter]
209 self.n_tiles: int = self.n_tiles_list[slide_counter]
210 self.dataloader: torch.utils.data.DataLoader = self.get_dataloader()
211 iterator = iter(self.dataloader)
212
213 tiles, _ = next(iterator) if iterator else (None, None)
214 mock_image, dump_tiles = self.create_mock_wsi(tiles)
215 wsi_levels = self._create_multi_resolution_wsi(mock_image)
216
217 slide_tiff_filename = self.dest_data_path / "train_images" / f"_{slide_counter}.tiff"
218 self._save_mock_wsi_as_tiff_file(slide_tiff_filename, wsi_levels)
219
220 if dump_tiles is not None:
221 dump_tiles_filename = self.dest_data_path / "dump_tiles" / f"_{slide_counter}.npy"
222 np.save(dump_tiles_filename, dump_tiles)
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py b/hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py
--- a/hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py
+++ b/hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py
@@ -9,7 +9,7 @@
import numpy as np
import pandas as pd
import torch
-from tifffile import TiffWriter
+from tifffile.tifffile import TiffWriter, PHOTOMETRIC, COMPRESSION
from torch import Tensor
from health_cpath.datasets.panda_dataset import PandaDataset
from testhisto.mocks.base_data_generator import MockHistoDataGenerator, MockHistoDataType, PANDA_N_CLASSES
@@ -178,7 +178,13 @@
:param wsi_levels: List of whole slide images of different resolution levels in channels_last format.
"""
with TiffWriter(file_path, bigtiff=True) as tif:
- options = dict(photometric="rgb", compression="zlib")
+ options = dict(
+ software='tifffile',
+ metadata={'axes': 'YXC'},
+ photometric=PHOTOMETRIC.RGB,
+ compression=COMPRESSION.ADOBE_DEFLATE, # ADOBE_DEFLATE aka ZLIB lossless compression
+ tile=(16, 16),
+ )
for i, wsi_level in enumerate(wsi_levels):
# the subfiletype parameter is a bitfield that determines if the wsi_level is a reduced version of
# another image.
|
{"golden_diff": "diff --git a/hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py b/hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py\n--- a/hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py\n+++ b/hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py\n@@ -9,7 +9,7 @@\n import numpy as np\n import pandas as pd\n import torch\n-from tifffile import TiffWriter\n+from tifffile.tifffile import TiffWriter, PHOTOMETRIC, COMPRESSION\n from torch import Tensor\n from health_cpath.datasets.panda_dataset import PandaDataset\n from testhisto.mocks.base_data_generator import MockHistoDataGenerator, MockHistoDataType, PANDA_N_CLASSES\n@@ -178,7 +178,13 @@\n :param wsi_levels: List of whole slide images of different resolution levels in channels_last format.\n \"\"\"\n with TiffWriter(file_path, bigtiff=True) as tif:\n- options = dict(photometric=\"rgb\", compression=\"zlib\")\n+ options = dict(\n+ software='tifffile',\n+ metadata={'axes': 'YXC'},\n+ photometric=PHOTOMETRIC.RGB,\n+ compression=COMPRESSION.ADOBE_DEFLATE, # ADOBE_DEFLATE aka ZLIB lossless compression\n+ tile=(16, 16),\n+ )\n for i, wsi_level in enumerate(wsi_levels):\n # the subfiletype parameter is a bitfield that determines if the wsi_level is a reduced version of\n # another image.\n", "issue": "MockHistoDataGenerator generates corrupted tiff files\nUpdate tiffwriter arguments after upgrade in #691 \n", "before_files": [{"content": "# ------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.\n# ------------------------------------------------------------------------------------------\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import Any, Optional, Tuple, List, Union\n\nimport numpy as np\nimport pandas as pd\nimport torch\nfrom tifffile import TiffWriter\nfrom torch import Tensor\nfrom health_cpath.datasets.panda_dataset import PandaDataset\nfrom testhisto.mocks.base_data_generator import MockHistoDataGenerator, MockHistoDataType, PANDA_N_CLASSES\n\n\nclass TilesPositioningType(Enum):\n DIAGONAL = 0\n RANDOM = 1\n\n\nclass MockPandaSlidesGenerator(MockHistoDataGenerator):\n \"\"\"Generator class to create mock WSI on the fly.\n If tiles positioning is diagonal, a mock WSI resembles to:\n [** ]\n [ ** ]\n [ ** ]\n [ **]\n where * represents 2 tiles stitched along the Y axis.\n If tiles positioning is random, tiles are positioned randomly on the WSI grid.\n \"\"\"\n\n ISUP_GRADE = \"isup_grade\"\n\n def __init__(\n self,\n n_levels: int = 3,\n n_repeat_diag: int = 4,\n n_repeat_tile: int = 2,\n background_val: Union[int, float] = 255,\n tiles_pos_type: TilesPositioningType = TilesPositioningType.DIAGONAL,\n n_tiles_list: Optional[List[int]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"\n :param n_levels: Number of levels for multi resolution WSI.\n :param n_repeat_diag: Number of repeat time along the diagonal axis, defaults to 4.\n :param n_repeat_tile: Number of repeat times of a tile along both Y and X axes, defaults to 2.\n :param background_val: A value to assign to the background, defaults to 255.\n :param tiles_pos_type: The tiles positioning type to define how tiles should be positioned within the WSI grid,\n defaults to TilesPositioningType.DIAGONAL.\n :param n_tiles_list: A list to use different n_tiles per slide for randomly positioned tiles.\n :param kwargs: Same params passed to 
MockHistoDataGenerator.\n \"\"\"\n super().__init__(**kwargs)\n\n self.n_levels = n_levels\n self.n_repeat_diag = n_repeat_diag\n self.n_repeat_tile = n_repeat_tile\n self.background_val = background_val\n self.tiles_pos_type = tiles_pos_type\n\n self.step_size = self.tile_size * self.n_repeat_tile\n self._dtype = np.uint8 if type(background_val) == int else np.float32\n self.img_size: int = self.n_repeat_diag * self.n_repeat_tile * self.tile_size\n self.n_tiles_list = n_tiles_list\n\n if self.n_tiles_list:\n assert len(self.n_tiles_list) == self.n_slides, \"n_tiles_list length should be equal to n_slides\"\n assert self.tiles_pos_type == TilesPositioningType.RANDOM, \"different n_tiles enabled only for randomly \"\n \"positionned tiles.\"\n\n def validate(self) -> None:\n assert (\n self.n_slides >= PANDA_N_CLASSES\n ), f\"The number of slides should be >= PANDA_N_CLASSES (i.e., {PANDA_N_CLASSES})\"\n\n def create_mock_metadata_dataframe(self) -> pd.DataFrame:\n \"\"\"Create a mock dataframe with random metadata.\"\"\"\n isup_grades = np.tile(list(self.ISUP_GRADE_MAPPING.keys()), self.n_slides // PANDA_N_CLASSES + 1,)\n mock_metadata: dict = {\n col: [] for col in [PandaDataset.SLIDE_ID_COLUMN, PandaDataset.MASK_COLUMN, *PandaDataset.METADATA_COLUMNS]\n }\n for slide_id in range(self.n_slides):\n mock_metadata[PandaDataset.SLIDE_ID_COLUMN].append(f\"_{slide_id}\")\n mock_metadata[PandaDataset.MASK_COLUMN].append(f\"_{slide_id}_mask\")\n mock_metadata[self.DATA_PROVIDER].append(np.random.choice(self.DATA_PROVIDERS_VALUES))\n mock_metadata[self.ISUP_GRADE].append(isup_grades[slide_id])\n mock_metadata[self.GLEASON_SCORE].append(np.random.choice(self.ISUP_GRADE_MAPPING[isup_grades[slide_id]]))\n df = pd.DataFrame(data=mock_metadata)\n csv_filename = self.dest_data_path / PandaDataset.DEFAULT_CSV_FILENAME\n df.to_csv(csv_filename, index=False)\n\n def create_mock_wsi(self, tiles: Tensor) -> Tuple[np.ndarray, Optional[np.ndarray]]:\n if self.tiles_pos_type == TilesPositioningType.DIAGONAL:\n return self._create_wsi_from_stitched_tiles_along_diagonal_axis(tiles)\n elif self.tiles_pos_type == TilesPositioningType.RANDOM:\n return self._create_wsi_from_randomly_positioned_tiles(tiles), None\n else:\n raise NotImplementedError\n\n def _create_wsi_from_stitched_tiles_along_diagonal_axis(self, tiles: Tensor) -> Tuple[np.ndarray, np.ndarray]:\n \"\"\"Create a whole slide image by stitching tiles along the diagonal axis.\n\n :param tiles: A tensor of tiles of shape (n_tiles, n_channels, tile_size, tile_size).\n :return: returns a wsi of shape (img_size, img_size, n_channels) and the tiles used to create it.\n The image is in channels_last format so that it can save by TiffWriter.\n \"\"\"\n mock_image = np.full(\n shape=(self.n_channels, self.img_size, self.img_size), fill_value=self.background_val, dtype=self._dtype\n )\n dump_tiles = []\n for i in range(self.n_repeat_diag):\n if self.mock_type == MockHistoDataType.PATHMNIST:\n if i == 0 or self.n_tiles > 1:\n tile = (\n (tiles[i % self.n_tiles].numpy()).astype(self._dtype)\n if self._dtype == np.uint8\n else tiles[i % self.n_tiles].numpy()\n )\n # fill the square diagonal with tile repeated n_repeat_tile times along X and Y axis.\n fill_square = np.tile(tile, (self.n_repeat_tile, self.n_repeat_tile))\n dump_tiles.append(tile)\n\n elif self.mock_type == MockHistoDataType.FAKE:\n if i == 0 or self.n_tiles > 1:\n # pick a random fake value to fill in the square diagonal.\n fill_square = np.random.uniform(0, self.background_val / (self.n_repeat_diag + 
1) * (i + 1))\n dump_tiles.append(\n np.full(\n shape=(self.n_channels, self.tile_size, self.tile_size),\n fill_value=fill_square,\n dtype=self._dtype,\n )\n )\n else:\n raise NotImplementedError\n mock_image[\n :, self.step_size * i: self.step_size * (i + 1), self.step_size * i: self.step_size * (i + 1)\n ] = fill_square\n return np.transpose(mock_image, (1, 2, 0)), np.array(dump_tiles) # switch to channels_last.\n\n def _create_wsi_from_randomly_positioned_tiles(self, tiles: Tensor) -> np.ndarray:\n \"\"\"Create a whole slide image by positioning tiles randomly in the whole slide image grid.\n\n :param tiles: A tensor of tiles of shape (n_tiles, n_channels, tile_size, tile_size).\n :return: returns a wsi of shape (img_size, img_size, n_channels) in channels_last format so that it can save by\n TiffWriter.\n \"\"\"\n mock_image = np.full(\n shape=(self.n_channels, self.img_size, self.img_size), fill_value=self.background_val, dtype=self._dtype\n )\n\n n_tiles_side = self.img_size // self.tile_size\n total_n_tiles = n_tiles_side ** 2\n coords = [\n (k // n_tiles_side, k % n_tiles_side)\n for k in np.random.choice(total_n_tiles, size=self.n_tiles, replace=False)\n ]\n for i in range(self.n_tiles):\n x, y = self.tile_size * np.array(coords[i])\n if self.mock_type == MockHistoDataType.PATHMNIST:\n new_tile = tiles[i].numpy()\n elif self.mock_type == MockHistoDataType.FAKE:\n new_tile = np.random.uniform(0, self.background_val / (self.n_repeat_diag + 1) * (i + 1))\n else:\n raise NotImplementedError\n mock_image[:, x: x + self.tile_size, y: y + self.tile_size] = new_tile\n return np.transpose(mock_image, (1, 2, 0))\n\n @staticmethod\n def _save_mock_wsi_as_tiff_file(file_path: Path, wsi_levels: List[np.ndarray]) -> None:\n \"\"\"Save a mock whole slide image as a tiff file of pyramidal levels.\n Warning: this function expects images to be in channels_last format (H, W, C).\n\n :param file_name: The tiff file name path.\n :param wsi_levels: List of whole slide images of different resolution levels in channels_last format.\n \"\"\"\n with TiffWriter(file_path, bigtiff=True) as tif:\n options = dict(photometric=\"rgb\", compression=\"zlib\")\n for i, wsi_level in enumerate(wsi_levels):\n # the subfiletype parameter is a bitfield that determines if the wsi_level is a reduced version of\n # another image.\n tif.write(wsi_level, **options, subfiletype=int(i > 0))\n\n def _create_multi_resolution_wsi(self, mock_image: np.ndarray) -> List[np.ndarray]:\n \"\"\"Create multi resolution versions of a mock image via 2 factor downsampling.\n\n :param mock_image: A mock image in channels_last format (H, W, 3).\n :return: Returns a list of n_levels downsampled versions of the original mock image.\n \"\"\"\n levels = [mock_image[:: 2 ** i, :: 2 ** i] for i in range(self.n_levels)]\n return levels\n\n def generate_mock_histo_data(self) -> None:\n \"\"\"Create mock wsi and save them as tiff files\"\"\"\n iterator = iter(self.dataloader) if self.dataloader else None\n\n slide_dir = self.dest_data_path / \"train_images\"\n slide_dir.mkdir(parents=True, exist_ok=True)\n tile_dir = self.dest_data_path / \"dump_tiles\"\n tile_dir.mkdir(parents=True, exist_ok=True)\n\n for slide_counter in range(self.n_slides):\n\n if self.n_tiles_list:\n self.total_tiles = self.n_tiles_list[slide_counter]\n self.n_tiles: int = self.n_tiles_list[slide_counter]\n self.dataloader: torch.utils.data.DataLoader = self.get_dataloader()\n iterator = iter(self.dataloader)\n\n tiles, _ = next(iterator) if iterator else (None, None)\n 
mock_image, dump_tiles = self.create_mock_wsi(tiles)\n wsi_levels = self._create_multi_resolution_wsi(mock_image)\n\n slide_tiff_filename = self.dest_data_path / \"train_images\" / f\"_{slide_counter}.tiff\"\n self._save_mock_wsi_as_tiff_file(slide_tiff_filename, wsi_levels)\n\n if dump_tiles is not None:\n dump_tiles_filename = self.dest_data_path / \"dump_tiles\" / f\"_{slide_counter}.npy\"\n np.save(dump_tiles_filename, dump_tiles)\n", "path": "hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py"}], "after_files": [{"content": "# ------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.\n# ------------------------------------------------------------------------------------------\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import Any, Optional, Tuple, List, Union\n\nimport numpy as np\nimport pandas as pd\nimport torch\nfrom tifffile.tifffile import TiffWriter, PHOTOMETRIC, COMPRESSION\nfrom torch import Tensor\nfrom health_cpath.datasets.panda_dataset import PandaDataset\nfrom testhisto.mocks.base_data_generator import MockHistoDataGenerator, MockHistoDataType, PANDA_N_CLASSES\n\n\nclass TilesPositioningType(Enum):\n DIAGONAL = 0\n RANDOM = 1\n\n\nclass MockPandaSlidesGenerator(MockHistoDataGenerator):\n \"\"\"Generator class to create mock WSI on the fly.\n If tiles positioning is diagonal, a mock WSI resembles to:\n [** ]\n [ ** ]\n [ ** ]\n [ **]\n where * represents 2 tiles stitched along the Y axis.\n If tiles positioning is random, tiles are positioned randomly on the WSI grid.\n \"\"\"\n\n ISUP_GRADE = \"isup_grade\"\n\n def __init__(\n self,\n n_levels: int = 3,\n n_repeat_diag: int = 4,\n n_repeat_tile: int = 2,\n background_val: Union[int, float] = 255,\n tiles_pos_type: TilesPositioningType = TilesPositioningType.DIAGONAL,\n n_tiles_list: Optional[List[int]] = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"\n :param n_levels: Number of levels for multi resolution WSI.\n :param n_repeat_diag: Number of repeat time along the diagonal axis, defaults to 4.\n :param n_repeat_tile: Number of repeat times of a tile along both Y and X axes, defaults to 2.\n :param background_val: A value to assign to the background, defaults to 255.\n :param tiles_pos_type: The tiles positioning type to define how tiles should be positioned within the WSI grid,\n defaults to TilesPositioningType.DIAGONAL.\n :param n_tiles_list: A list to use different n_tiles per slide for randomly positioned tiles.\n :param kwargs: Same params passed to MockHistoDataGenerator.\n \"\"\"\n super().__init__(**kwargs)\n\n self.n_levels = n_levels\n self.n_repeat_diag = n_repeat_diag\n self.n_repeat_tile = n_repeat_tile\n self.background_val = background_val\n self.tiles_pos_type = tiles_pos_type\n\n self.step_size = self.tile_size * self.n_repeat_tile\n self._dtype = np.uint8 if type(background_val) == int else np.float32\n self.img_size: int = self.n_repeat_diag * self.n_repeat_tile * self.tile_size\n self.n_tiles_list = n_tiles_list\n\n if self.n_tiles_list:\n assert len(self.n_tiles_list) == self.n_slides, \"n_tiles_list length should be equal to n_slides\"\n assert self.tiles_pos_type == TilesPositioningType.RANDOM, \"different n_tiles enabled only for randomly \"\n \"positionned tiles.\"\n\n def validate(self) -> None:\n assert (\n self.n_slides >= PANDA_N_CLASSES\n ), f\"The number of slides should be >= 
PANDA_N_CLASSES (i.e., {PANDA_N_CLASSES})\"\n\n def create_mock_metadata_dataframe(self) -> pd.DataFrame:\n \"\"\"Create a mock dataframe with random metadata.\"\"\"\n isup_grades = np.tile(list(self.ISUP_GRADE_MAPPING.keys()), self.n_slides // PANDA_N_CLASSES + 1,)\n mock_metadata: dict = {\n col: [] for col in [PandaDataset.SLIDE_ID_COLUMN, PandaDataset.MASK_COLUMN, *PandaDataset.METADATA_COLUMNS]\n }\n for slide_id in range(self.n_slides):\n mock_metadata[PandaDataset.SLIDE_ID_COLUMN].append(f\"_{slide_id}\")\n mock_metadata[PandaDataset.MASK_COLUMN].append(f\"_{slide_id}_mask\")\n mock_metadata[self.DATA_PROVIDER].append(np.random.choice(self.DATA_PROVIDERS_VALUES))\n mock_metadata[self.ISUP_GRADE].append(isup_grades[slide_id])\n mock_metadata[self.GLEASON_SCORE].append(np.random.choice(self.ISUP_GRADE_MAPPING[isup_grades[slide_id]]))\n df = pd.DataFrame(data=mock_metadata)\n csv_filename = self.dest_data_path / PandaDataset.DEFAULT_CSV_FILENAME\n df.to_csv(csv_filename, index=False)\n\n def create_mock_wsi(self, tiles: Tensor) -> Tuple[np.ndarray, Optional[np.ndarray]]:\n if self.tiles_pos_type == TilesPositioningType.DIAGONAL:\n return self._create_wsi_from_stitched_tiles_along_diagonal_axis(tiles)\n elif self.tiles_pos_type == TilesPositioningType.RANDOM:\n return self._create_wsi_from_randomly_positioned_tiles(tiles), None\n else:\n raise NotImplementedError\n\n def _create_wsi_from_stitched_tiles_along_diagonal_axis(self, tiles: Tensor) -> Tuple[np.ndarray, np.ndarray]:\n \"\"\"Create a whole slide image by stitching tiles along the diagonal axis.\n\n :param tiles: A tensor of tiles of shape (n_tiles, n_channels, tile_size, tile_size).\n :return: returns a wsi of shape (img_size, img_size, n_channels) and the tiles used to create it.\n The image is in channels_last format so that it can save by TiffWriter.\n \"\"\"\n mock_image = np.full(\n shape=(self.n_channels, self.img_size, self.img_size), fill_value=self.background_val, dtype=self._dtype\n )\n dump_tiles = []\n for i in range(self.n_repeat_diag):\n if self.mock_type == MockHistoDataType.PATHMNIST:\n if i == 0 or self.n_tiles > 1:\n tile = (\n (tiles[i % self.n_tiles].numpy()).astype(self._dtype)\n if self._dtype == np.uint8\n else tiles[i % self.n_tiles].numpy()\n )\n # fill the square diagonal with tile repeated n_repeat_tile times along X and Y axis.\n fill_square = np.tile(tile, (self.n_repeat_tile, self.n_repeat_tile))\n dump_tiles.append(tile)\n\n elif self.mock_type == MockHistoDataType.FAKE:\n if i == 0 or self.n_tiles > 1:\n # pick a random fake value to fill in the square diagonal.\n fill_square = np.random.uniform(0, self.background_val / (self.n_repeat_diag + 1) * (i + 1))\n dump_tiles.append(\n np.full(\n shape=(self.n_channels, self.tile_size, self.tile_size),\n fill_value=fill_square,\n dtype=self._dtype,\n )\n )\n else:\n raise NotImplementedError\n mock_image[\n :, self.step_size * i: self.step_size * (i + 1), self.step_size * i: self.step_size * (i + 1)\n ] = fill_square\n return np.transpose(mock_image, (1, 2, 0)), np.array(dump_tiles) # switch to channels_last.\n\n def _create_wsi_from_randomly_positioned_tiles(self, tiles: Tensor) -> np.ndarray:\n \"\"\"Create a whole slide image by positioning tiles randomly in the whole slide image grid.\n\n :param tiles: A tensor of tiles of shape (n_tiles, n_channels, tile_size, tile_size).\n :return: returns a wsi of shape (img_size, img_size, n_channels) in channels_last format so that it can save by\n TiffWriter.\n \"\"\"\n mock_image = np.full(\n 
shape=(self.n_channels, self.img_size, self.img_size), fill_value=self.background_val, dtype=self._dtype\n )\n\n n_tiles_side = self.img_size // self.tile_size\n total_n_tiles = n_tiles_side ** 2\n coords = [\n (k // n_tiles_side, k % n_tiles_side)\n for k in np.random.choice(total_n_tiles, size=self.n_tiles, replace=False)\n ]\n for i in range(self.n_tiles):\n x, y = self.tile_size * np.array(coords[i])\n if self.mock_type == MockHistoDataType.PATHMNIST:\n new_tile = tiles[i].numpy()\n elif self.mock_type == MockHistoDataType.FAKE:\n new_tile = np.random.uniform(0, self.background_val / (self.n_repeat_diag + 1) * (i + 1))\n else:\n raise NotImplementedError\n mock_image[:, x: x + self.tile_size, y: y + self.tile_size] = new_tile\n return np.transpose(mock_image, (1, 2, 0))\n\n @staticmethod\n def _save_mock_wsi_as_tiff_file(file_path: Path, wsi_levels: List[np.ndarray]) -> None:\n \"\"\"Save a mock whole slide image as a tiff file of pyramidal levels.\n Warning: this function expects images to be in channels_last format (H, W, C).\n\n :param file_name: The tiff file name path.\n :param wsi_levels: List of whole slide images of different resolution levels in channels_last format.\n \"\"\"\n with TiffWriter(file_path, bigtiff=True) as tif:\n options = dict(\n software='tifffile',\n metadata={'axes': 'YXC'},\n photometric=PHOTOMETRIC.RGB,\n compression=COMPRESSION.ADOBE_DEFLATE, # ADOBE_DEFLATE aka ZLIB lossless compression\n tile=(16, 16),\n )\n for i, wsi_level in enumerate(wsi_levels):\n # the subfiletype parameter is a bitfield that determines if the wsi_level is a reduced version of\n # another image.\n tif.write(wsi_level, **options, subfiletype=int(i > 0))\n\n def _create_multi_resolution_wsi(self, mock_image: np.ndarray) -> List[np.ndarray]:\n \"\"\"Create multi resolution versions of a mock image via 2 factor downsampling.\n\n :param mock_image: A mock image in channels_last format (H, W, 3).\n :return: Returns a list of n_levels downsampled versions of the original mock image.\n \"\"\"\n levels = [mock_image[:: 2 ** i, :: 2 ** i] for i in range(self.n_levels)]\n return levels\n\n def generate_mock_histo_data(self) -> None:\n \"\"\"Create mock wsi and save them as tiff files\"\"\"\n iterator = iter(self.dataloader) if self.dataloader else None\n\n slide_dir = self.dest_data_path / \"train_images\"\n slide_dir.mkdir(parents=True, exist_ok=True)\n tile_dir = self.dest_data_path / \"dump_tiles\"\n tile_dir.mkdir(parents=True, exist_ok=True)\n\n for slide_counter in range(self.n_slides):\n\n if self.n_tiles_list:\n self.total_tiles = self.n_tiles_list[slide_counter]\n self.n_tiles: int = self.n_tiles_list[slide_counter]\n self.dataloader: torch.utils.data.DataLoader = self.get_dataloader()\n iterator = iter(self.dataloader)\n\n tiles, _ = next(iterator) if iterator else (None, None)\n mock_image, dump_tiles = self.create_mock_wsi(tiles)\n wsi_levels = self._create_multi_resolution_wsi(mock_image)\n\n slide_tiff_filename = self.dest_data_path / \"train_images\" / f\"_{slide_counter}.tiff\"\n self._save_mock_wsi_as_tiff_file(slide_tiff_filename, wsi_levels)\n\n if dump_tiles is not None:\n dump_tiles_filename = self.dest_data_path / \"dump_tiles\" / f\"_{slide_counter}.npy\"\n np.save(dump_tiles_filename, dump_tiles)\n", "path": "hi-ml-cpath/testhisto/testhisto/mocks/slides_generator.py"}]}
| 3,385 | 373 |
gh_patches_debug_30955
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-1595
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TQDMProgressBar not working in TF-2.2.0rc1
**System information**
- OS Platform and Distribution: Linux Ubuntu 18.04
- TensorFlow version and how it was installed (source or binary): TF-2.2.0rc1 (wheel compiled from source)
- TensorFlow-Addons version and how it was installed (source or binary): 0.8.3 installed via pip
- Python version: 3.7.6
- Is GPU used? (yes/no): Yes
**Describe the bug**
Executing `model.fit()` with the `TQDMProgressBar()` callback results in `KeyError: 'metrics'` because of a change in TF-2.2 that moves initialization of `model.metrics` (and `model.metrics_names`) from compile stage to train stage.
**Code to reproduce the issue**
```python
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa
x = np.random.random((5,1,5))
y = np.random.random((5,1,5))
inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2, name="out_1")(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="Adam", loss="mse", metrics=["acc"])
pg = tfa.callbacks.TQDMProgressBar()
model_callbacks = [pg, ]
VERBOSE=0
history = model.fit(
x,
y,
epochs=100,
verbose=VERBOSE,
callbacks=model_callbacks
)
```
**Other info / logs**
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-23-fdbb03f574a1> in <module>
48 # class_weight=class_weights,
49 verbose=VERBOSE,
---> 50 callbacks=model_callbacks,
51 )
~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
63 def _method_wrapper(self, *args, **kwargs):
64 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 65 return method(self, *args, **kwargs)
66
67 # Running inside `run_distribute_coordinator` already.
~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
763 self.stop_training = False
764 train_function = self.make_train_function()
--> 765 callbacks.on_train_begin()
766 # Handle fault-tolerance for multi-worker.
767 # TODO(omalleyt): Fix the ordering issues that mean this has to
~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/keras/callbacks.py in on_train_begin(self, logs)
445 logs = self._process_logs(logs)
446 for callback in self.callbacks:
--> 447 callback.on_train_begin(logs)
448
449 def on_train_end(self, logs=None):
~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow_addons/callbacks/tqdm_progress_bar.py in on_train_begin(self, logs)
100 def on_train_begin(self, logs=None):
101 self.num_epochs = self.params["epochs"]
--> 102 self.metrics = self.params["metrics"]
103
104 if self.show_overall_progress:
KeyError: 'metrics'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/callbacks/tqdm_progress_bar.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """TQDM Progress Bar."""
16
17 import time
18 import tensorflow as tf
19 from collections import defaultdict
20 from typeguard import typechecked
21
22 from tensorflow.keras.callbacks import Callback
23
24
25 @tf.keras.utils.register_keras_serializable(package="Addons")
26 class TQDMProgressBar(Callback):
27 """TQDM Progress Bar for Tensorflow Keras.
28
29 Args:
30 metrics_separator: Custom separator between metrics.
31 Defaults to ' - '.
32 overall_bar_format: Custom bar format for overall
33 (outer) progress bar, see https://github.com/tqdm/tqdm#parameters
34 for more detail.
35 epoch_bar_format: Custom bar format for epoch
36 (inner) progress bar, see https://github.com/tqdm/tqdm#parameters
37 for more detail.
38 update_per_second: Maximum number of updates in the epochs bar
39 per second, this is to prevent small batches from slowing down
40 training. Defaults to 10.
41 metrics_format: Custom format for how metrics are formatted.
42 See https://github.com/tqdm/tqdm#parameters for more detail.
43 leave_epoch_progress: True to leave epoch progress bars.
44 leave_overall_progress: True to leave overall progress bar.
45 show_epoch_progress: False to hide epoch progress bars.
46 show_overall_progress: False to hide overall progress bar.
47 """
48
49 @typechecked
50 def __init__(
51 self,
52 metrics_separator: str = " - ",
53 overall_bar_format: str = "{l_bar}{bar} {n_fmt}/{total_fmt} ETA: "
54 "{remaining}s, {rate_fmt}{postfix}",
55 epoch_bar_format: str = "{n_fmt}/{total_fmt}{bar} ETA: "
56 "{remaining}s - {desc}",
57 metrics_format: str = "{name}: {value:0.4f}",
58 update_per_second: int = 10,
59 leave_epoch_progress: bool = True,
60 leave_overall_progress: bool = True,
61 show_epoch_progress: bool = True,
62 show_overall_progress: bool = True,
63 ):
64
65 try:
66 # import tqdm here because tqdm is not a required package
67 # for addons
68 import tqdm
69
70 version_message = "Please update your TQDM version to >= 4.36.1, "
71 "you have version {}. To update, run !pip install -U tqdm"
72 assert tqdm.__version__ >= "4.36.1", version_message.format(
73 tqdm.__version__
74 )
75 from tqdm.auto import tqdm
76
77 self.tqdm = tqdm
78 except ImportError:
79 raise ImportError("Please install tqdm via pip install tqdm")
80
81 self.metrics_separator = metrics_separator
82 self.overall_bar_format = overall_bar_format
83 self.epoch_bar_format = epoch_bar_format
84 self.leave_epoch_progress = leave_epoch_progress
85 self.leave_overall_progress = leave_overall_progress
86 self.show_epoch_progress = show_epoch_progress
87 self.show_overall_progress = show_overall_progress
88 self.metrics_format = metrics_format
89
90 # compute update interval (inverse of update per second)
91 self.update_interval = 1 / update_per_second
92
93 self.last_update_time = time.time()
94 self.overall_progress_tqdm = None
95 self.epoch_progress_tqdm = None
96 self.num_epochs = None
97 self.logs = None
98 self.metrics = None
99
100 def on_train_begin(self, logs=None):
101 self.num_epochs = self.params["epochs"]
102 self.metrics = self.params["metrics"]
103
104 if self.show_overall_progress:
105 self.overall_progress_tqdm = self.tqdm(
106 desc="Training",
107 total=self.num_epochs,
108 bar_format=self.overall_bar_format,
109 leave=self.leave_overall_progress,
110 dynamic_ncols=True,
111 unit="epochs",
112 )
113
114 # set counting mode
115 if "samples" in self.params:
116 self.mode = "samples"
117 self.total_steps = self.params["samples"]
118 else:
119 self.mode = "steps"
120 self.total_steps = self.params["steps"]
121
122 def on_train_end(self, logs={}):
123 if self.show_overall_progress:
124 self.overall_progress_tqdm.close()
125
126 def on_epoch_begin(self, epoch, logs={}):
127 current_epoch_description = "Epoch {epoch}/{num_epochs}".format(
128 epoch=epoch + 1, num_epochs=self.num_epochs
129 )
130
131 if self.show_epoch_progress:
132 print(current_epoch_description)
133 self.epoch_progress_tqdm = self.tqdm(
134 total=self.total_steps,
135 bar_format=self.epoch_bar_format,
136 leave=self.leave_epoch_progress,
137 dynamic_ncols=True,
138 unit=self.mode,
139 )
140
141 self.num_samples_seen = 0
142 self.steps_to_update = 0
143 self.steps_so_far = 0
144 self.logs = defaultdict(float)
145
146 def on_epoch_end(self, epoch, logs={}):
147
148 if self.show_epoch_progress:
149 metrics = self.format_metrics(logs)
150 self.epoch_progress_tqdm.desc = metrics
151
152 # set miniters and mininterval to 0 so last update displays
153 self.epoch_progress_tqdm.miniters = 0
154 self.epoch_progress_tqdm.mininterval = 0
155
156 # update the rest of the steps in epoch progress bar
157 self.epoch_progress_tqdm.update(
158 self.total_steps - self.epoch_progress_tqdm.n
159 )
160 self.epoch_progress_tqdm.close()
161
162 if self.show_overall_progress:
163 self.overall_progress_tqdm.update(1)
164
165 def on_batch_end(self, batch, logs={}):
166 if self.mode == "samples":
167 batch_size = logs["size"]
168 else:
169 batch_size = 1
170
171 self.num_samples_seen += batch_size
172 self.steps_to_update += 1
173 self.steps_so_far += 1
174
175 if self.steps_so_far < self.total_steps:
176
177 for metric, value in logs.items():
178 self.logs[metric] += value * batch_size
179
180 now = time.time()
181 time_diff = now - self.last_update_time
182 if self.show_epoch_progress and time_diff >= self.update_interval:
183
184 # update the epoch progress bar
185 metrics = self.format_metrics(self.logs, self.num_samples_seen)
186 self.epoch_progress_tqdm.desc = metrics
187 self.epoch_progress_tqdm.update(self.steps_to_update)
188
189 # reset steps to update
190 self.steps_to_update = 0
191
192 # update timestamp for last update
193 self.last_update_time = now
194
195 def format_metrics(self, logs={}, factor=1):
196 """Format metrics in logs into a string.
197
198 Arguments:
199 logs: dictionary of metrics and their values. Defaults to
200 empty dictionary.
201 factor (int): The factor we want to divide the metrics in logs
202 by, useful when we are computing the logs after each batch.
203 Defaults to 1.
204
205 Returns:
206 metrics_string: a string displaying metrics using the given
207 formators passed in through the constructor.
208 """
209
210 metric_value_pairs = []
211 for metric in self.metrics:
212 if metric in logs:
213 value = logs[metric] / factor
214 pair = self.metrics_format.format(name=metric, value=value)
215 metric_value_pairs.append(pair)
216 metrics_string = self.metrics_separator.join(metric_value_pairs)
217 return metrics_string
218
219 def get_config(self):
220 config = {
221 "metrics_separator": self.metrics_separator,
222 "overall_bar_format": self.overall_bar_format,
223 "epoch_bar_format": self.epoch_bar_format,
224 "leave_epoch_progress": self.leave_epoch_progress,
225 "leave_overall_progress": self.leave_overall_progress,
226 "show_epoch_progress": self.show_epoch_progress,
227 "show_overall_progress": self.show_overall_progress,
228 }
229
230 base_config = super().get_config()
231 return {**base_config, **config}
232
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tensorflow_addons/callbacks/tqdm_progress_bar.py b/tensorflow_addons/callbacks/tqdm_progress_bar.py
--- a/tensorflow_addons/callbacks/tqdm_progress_bar.py
+++ b/tensorflow_addons/callbacks/tqdm_progress_bar.py
@@ -95,11 +95,10 @@
self.epoch_progress_tqdm = None
self.num_epochs = None
self.logs = None
- self.metrics = None
+ super().__init__()
def on_train_begin(self, logs=None):
self.num_epochs = self.params["epochs"]
- self.metrics = self.params["metrics"]
if self.show_overall_progress:
self.overall_progress_tqdm = self.tqdm(
@@ -112,12 +111,8 @@
)
# set counting mode
- if "samples" in self.params:
- self.mode = "samples"
- self.total_steps = self.params["samples"]
- else:
- self.mode = "steps"
- self.total_steps = self.params["steps"]
+ self.mode = "steps"
+ self.total_steps = self.params["steps"]
def on_train_end(self, logs={}):
if self.show_overall_progress:
@@ -208,11 +203,11 @@
"""
metric_value_pairs = []
- for metric in self.metrics:
- if metric in logs:
- value = logs[metric] / factor
- pair = self.metrics_format.format(name=metric, value=value)
- metric_value_pairs.append(pair)
+ for key, value in logs.items():
+ if key in ["batch", "size"]:
+ continue
+ pair = self.metrics_format.format(name=key, value=value / factor)
+ metric_value_pairs.append(pair)
metrics_string = self.metrics_separator.join(metric_value_pairs)
return metrics_string
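An illustrative aside on the pattern the patch above adopts (an editorial sketch, not part of the original record): once `self.params["metrics"]` is no longer available, the displayed metrics can be derived from whatever keys Keras places in the per-epoch `logs` dict, skipping bookkeeping entries such as `batch` and `size`. The callback name below is hypothetical.
```python
# Sketch only: builds the metric display from `logs` itself, mirroring the
# patched format_metrics above; relies on the standard tf.keras Callback API.
import tensorflow as tf

class LogsDrivenProgressNote(tf.keras.callbacks.Callback):  # hypothetical name
    def __init__(self, metrics_format="{name}: {value:0.4f}", separator=" - "):
        super().__init__()
        self.metrics_format = metrics_format
        self.separator = separator

    def _format(self, logs, factor=1):
        # Iterate whatever Keras reports; skip bookkeeping keys.
        pairs = [
            self.metrics_format.format(name=key, value=value / factor)
            for key, value in (logs or {}).items()
            if key not in ("batch", "size")
        ]
        return self.separator.join(pairs)

    def on_epoch_end(self, epoch, logs=None):
        print("epoch {}: {}".format(epoch + 1, self._format(logs)))
```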
|
{"golden_diff": "diff --git a/tensorflow_addons/callbacks/tqdm_progress_bar.py b/tensorflow_addons/callbacks/tqdm_progress_bar.py\n--- a/tensorflow_addons/callbacks/tqdm_progress_bar.py\n+++ b/tensorflow_addons/callbacks/tqdm_progress_bar.py\n@@ -95,11 +95,10 @@\n self.epoch_progress_tqdm = None\n self.num_epochs = None\n self.logs = None\n- self.metrics = None\n+ super().__init__()\n \n def on_train_begin(self, logs=None):\n self.num_epochs = self.params[\"epochs\"]\n- self.metrics = self.params[\"metrics\"]\n \n if self.show_overall_progress:\n self.overall_progress_tqdm = self.tqdm(\n@@ -112,12 +111,8 @@\n )\n \n # set counting mode\n- if \"samples\" in self.params:\n- self.mode = \"samples\"\n- self.total_steps = self.params[\"samples\"]\n- else:\n- self.mode = \"steps\"\n- self.total_steps = self.params[\"steps\"]\n+ self.mode = \"steps\"\n+ self.total_steps = self.params[\"steps\"]\n \n def on_train_end(self, logs={}):\n if self.show_overall_progress:\n@@ -208,11 +203,11 @@\n \"\"\"\n \n metric_value_pairs = []\n- for metric in self.metrics:\n- if metric in logs:\n- value = logs[metric] / factor\n- pair = self.metrics_format.format(name=metric, value=value)\n- metric_value_pairs.append(pair)\n+ for key, value in logs.items():\n+ if key in [\"batch\", \"size\"]:\n+ continue\n+ pair = self.metrics_format.format(name=key, value=value / factor)\n+ metric_value_pairs.append(pair)\n metrics_string = self.metrics_separator.join(metric_value_pairs)\n return metrics_string\n", "issue": "TQDMProgressBar not working in TF-2.2.0rc1\n**System information**\r\n- OS Platform and Distribution: Linux Ubuntu 18.04\r\n- TensorFlow version and how it was installed (source or binary): TF-2.2.0rc1 (wheel compiled from source)\r\n- TensorFlow-Addons version and how it was installed (source or binary): 0.8.3 installed via pip\r\n- Python version: 3.7.6\r\n- Is GPU used? 
(yes/no): Yes\r\n\r\n**Describe the bug**\r\n\r\nExecuting `model.fit()` with the `TQDMProgressBar()` callback results in `KeyError: 'metrics'` because of a change in TF-2.2 that moves initialization of `model.metrics` (and `model.metrics_names`) from compile stage to train stage.\r\n\r\n**Code to reproduce the issue**\r\n\r\n```python\r\nimport numpy as np\r\nimport tensorflow as tf\r\nimport tensorflow_addons as tfa\r\n\r\nx = np.random.random((5,1,5))\r\ny = np.random.random((5,1,5))\r\n\r\ninputs = tf.keras.layers.Input(shape=(3,))\r\noutputs = tf.keras.layers.Dense(2, name=\"out_1\")(inputs)\r\nmodel = tf.keras.models.Model(inputs=inputs, outputs=outputs)\r\nmodel.compile(optimizer=\"Adam\", loss=\"mse\", metrics=[\"acc\"])\r\n\r\npg = tfa.callbacks.TQDMProgressBar()\r\nmodel_callbacks = [pg, ]\r\nVERBOSE=0\r\nhistory = model.fit(\r\n x,\r\n y,\r\n epochs=100,\r\n verbose=VERBOSE,\r\n callbacks=model_callbacks\r\n)\r\n````\r\n\r\n**Other info / logs**\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-23-fdbb03f574a1> in <module>\r\n 48 # class_weight=class_weights,\r\n 49 verbose=VERBOSE,\r\n---> 50 callbacks=model_callbacks,\r\n 51 )\r\n\r\n~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)\r\n 63 def _method_wrapper(self, *args, **kwargs):\r\n 64 if not self._in_multi_worker_mode(): # pylint: disable=protected-access\r\n---> 65 return method(self, *args, **kwargs)\r\n 66 \r\n 67 # Running inside `run_distribute_coordinator` already.\r\n\r\n~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)\r\n 763 self.stop_training = False\r\n 764 train_function = self.make_train_function()\r\n--> 765 callbacks.on_train_begin()\r\n 766 # Handle fault-tolerance for multi-worker.\r\n 767 # TODO(omalleyt): Fix the ordering issues that mean this has to\r\n\r\n~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/keras/callbacks.py in on_train_begin(self, logs)\r\n 445 logs = self._process_logs(logs)\r\n 446 for callback in self.callbacks:\r\n--> 447 callback.on_train_begin(logs)\r\n 448 \r\n 449 def on_train_end(self, logs=None):\r\n\r\n~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow_addons/callbacks/tqdm_progress_bar.py in on_train_begin(self, logs)\r\n 100 def on_train_begin(self, logs=None):\r\n 101 self.num_epochs = self.params[\"epochs\"]\r\n--> 102 self.metrics = self.params[\"metrics\"]\r\n 103 \r\n 104 if self.show_overall_progress:\r\n\r\nKeyError: 'metrics'\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TQDM Progress Bar.\"\"\"\n\nimport time\nimport tensorflow as tf\nfrom collections import defaultdict\nfrom typeguard import typechecked\n\nfrom tensorflow.keras.callbacks import Callback\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass TQDMProgressBar(Callback):\n \"\"\"TQDM Progress Bar for Tensorflow Keras.\n\n Args:\n metrics_separator: Custom separator between metrics.\n Defaults to ' - '.\n overall_bar_format: Custom bar format for overall\n (outer) progress bar, see https://github.com/tqdm/tqdm#parameters\n for more detail.\n epoch_bar_format: Custom bar format for epoch\n (inner) progress bar, see https://github.com/tqdm/tqdm#parameters\n for more detail.\n update_per_second: Maximum number of updates in the epochs bar\n per second, this is to prevent small batches from slowing down\n training. Defaults to 10.\n metrics_format: Custom format for how metrics are formatted.\n See https://github.com/tqdm/tqdm#parameters for more detail.\n leave_epoch_progress: True to leave epoch progress bars.\n leave_overall_progress: True to leave overall progress bar.\n show_epoch_progress: False to hide epoch progress bars.\n show_overall_progress: False to hide overall progress bar.\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n metrics_separator: str = \" - \",\n overall_bar_format: str = \"{l_bar}{bar} {n_fmt}/{total_fmt} ETA: \"\n \"{remaining}s, {rate_fmt}{postfix}\",\n epoch_bar_format: str = \"{n_fmt}/{total_fmt}{bar} ETA: \"\n \"{remaining}s - {desc}\",\n metrics_format: str = \"{name}: {value:0.4f}\",\n update_per_second: int = 10,\n leave_epoch_progress: bool = True,\n leave_overall_progress: bool = True,\n show_epoch_progress: bool = True,\n show_overall_progress: bool = True,\n ):\n\n try:\n # import tqdm here because tqdm is not a required package\n # for addons\n import tqdm\n\n version_message = \"Please update your TQDM version to >= 4.36.1, \"\n \"you have version {}. 
To update, run !pip install -U tqdm\"\n assert tqdm.__version__ >= \"4.36.1\", version_message.format(\n tqdm.__version__\n )\n from tqdm.auto import tqdm\n\n self.tqdm = tqdm\n except ImportError:\n raise ImportError(\"Please install tqdm via pip install tqdm\")\n\n self.metrics_separator = metrics_separator\n self.overall_bar_format = overall_bar_format\n self.epoch_bar_format = epoch_bar_format\n self.leave_epoch_progress = leave_epoch_progress\n self.leave_overall_progress = leave_overall_progress\n self.show_epoch_progress = show_epoch_progress\n self.show_overall_progress = show_overall_progress\n self.metrics_format = metrics_format\n\n # compute update interval (inverse of update per second)\n self.update_interval = 1 / update_per_second\n\n self.last_update_time = time.time()\n self.overall_progress_tqdm = None\n self.epoch_progress_tqdm = None\n self.num_epochs = None\n self.logs = None\n self.metrics = None\n\n def on_train_begin(self, logs=None):\n self.num_epochs = self.params[\"epochs\"]\n self.metrics = self.params[\"metrics\"]\n\n if self.show_overall_progress:\n self.overall_progress_tqdm = self.tqdm(\n desc=\"Training\",\n total=self.num_epochs,\n bar_format=self.overall_bar_format,\n leave=self.leave_overall_progress,\n dynamic_ncols=True,\n unit=\"epochs\",\n )\n\n # set counting mode\n if \"samples\" in self.params:\n self.mode = \"samples\"\n self.total_steps = self.params[\"samples\"]\n else:\n self.mode = \"steps\"\n self.total_steps = self.params[\"steps\"]\n\n def on_train_end(self, logs={}):\n if self.show_overall_progress:\n self.overall_progress_tqdm.close()\n\n def on_epoch_begin(self, epoch, logs={}):\n current_epoch_description = \"Epoch {epoch}/{num_epochs}\".format(\n epoch=epoch + 1, num_epochs=self.num_epochs\n )\n\n if self.show_epoch_progress:\n print(current_epoch_description)\n self.epoch_progress_tqdm = self.tqdm(\n total=self.total_steps,\n bar_format=self.epoch_bar_format,\n leave=self.leave_epoch_progress,\n dynamic_ncols=True,\n unit=self.mode,\n )\n\n self.num_samples_seen = 0\n self.steps_to_update = 0\n self.steps_so_far = 0\n self.logs = defaultdict(float)\n\n def on_epoch_end(self, epoch, logs={}):\n\n if self.show_epoch_progress:\n metrics = self.format_metrics(logs)\n self.epoch_progress_tqdm.desc = metrics\n\n # set miniters and mininterval to 0 so last update displays\n self.epoch_progress_tqdm.miniters = 0\n self.epoch_progress_tqdm.mininterval = 0\n\n # update the rest of the steps in epoch progress bar\n self.epoch_progress_tqdm.update(\n self.total_steps - self.epoch_progress_tqdm.n\n )\n self.epoch_progress_tqdm.close()\n\n if self.show_overall_progress:\n self.overall_progress_tqdm.update(1)\n\n def on_batch_end(self, batch, logs={}):\n if self.mode == \"samples\":\n batch_size = logs[\"size\"]\n else:\n batch_size = 1\n\n self.num_samples_seen += batch_size\n self.steps_to_update += 1\n self.steps_so_far += 1\n\n if self.steps_so_far < self.total_steps:\n\n for metric, value in logs.items():\n self.logs[metric] += value * batch_size\n\n now = time.time()\n time_diff = now - self.last_update_time\n if self.show_epoch_progress and time_diff >= self.update_interval:\n\n # update the epoch progress bar\n metrics = self.format_metrics(self.logs, self.num_samples_seen)\n self.epoch_progress_tqdm.desc = metrics\n self.epoch_progress_tqdm.update(self.steps_to_update)\n\n # reset steps to update\n self.steps_to_update = 0\n\n # update timestamp for last update\n self.last_update_time = now\n\n def format_metrics(self, logs={}, 
factor=1):\n \"\"\"Format metrics in logs into a string.\n\n Arguments:\n logs: dictionary of metrics and their values. Defaults to\n empty dictionary.\n factor (int): The factor we want to divide the metrics in logs\n by, useful when we are computing the logs after each batch.\n Defaults to 1.\n\n Returns:\n metrics_string: a string displaying metrics using the given\n formators passed in through the constructor.\n \"\"\"\n\n metric_value_pairs = []\n for metric in self.metrics:\n if metric in logs:\n value = logs[metric] / factor\n pair = self.metrics_format.format(name=metric, value=value)\n metric_value_pairs.append(pair)\n metrics_string = self.metrics_separator.join(metric_value_pairs)\n return metrics_string\n\n def get_config(self):\n config = {\n \"metrics_separator\": self.metrics_separator,\n \"overall_bar_format\": self.overall_bar_format,\n \"epoch_bar_format\": self.epoch_bar_format,\n \"leave_epoch_progress\": self.leave_epoch_progress,\n \"leave_overall_progress\": self.leave_overall_progress,\n \"show_epoch_progress\": self.show_epoch_progress,\n \"show_overall_progress\": self.show_overall_progress,\n }\n\n base_config = super().get_config()\n return {**base_config, **config}\n", "path": "tensorflow_addons/callbacks/tqdm_progress_bar.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"TQDM Progress Bar.\"\"\"\n\nimport time\nimport tensorflow as tf\nfrom collections import defaultdict\nfrom typeguard import typechecked\n\nfrom tensorflow.keras.callbacks import Callback\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass TQDMProgressBar(Callback):\n \"\"\"TQDM Progress Bar for Tensorflow Keras.\n\n Args:\n metrics_separator: Custom separator between metrics.\n Defaults to ' - '.\n overall_bar_format: Custom bar format for overall\n (outer) progress bar, see https://github.com/tqdm/tqdm#parameters\n for more detail.\n epoch_bar_format: Custom bar format for epoch\n (inner) progress bar, see https://github.com/tqdm/tqdm#parameters\n for more detail.\n update_per_second: Maximum number of updates in the epochs bar\n per second, this is to prevent small batches from slowing down\n training. 
Defaults to 10.\n metrics_format: Custom format for how metrics are formatted.\n See https://github.com/tqdm/tqdm#parameters for more detail.\n leave_epoch_progress: True to leave epoch progress bars.\n leave_overall_progress: True to leave overall progress bar.\n show_epoch_progress: False to hide epoch progress bars.\n show_overall_progress: False to hide overall progress bar.\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n metrics_separator: str = \" - \",\n overall_bar_format: str = \"{l_bar}{bar} {n_fmt}/{total_fmt} ETA: \"\n \"{remaining}s, {rate_fmt}{postfix}\",\n epoch_bar_format: str = \"{n_fmt}/{total_fmt}{bar} ETA: \"\n \"{remaining}s - {desc}\",\n metrics_format: str = \"{name}: {value:0.4f}\",\n update_per_second: int = 10,\n leave_epoch_progress: bool = True,\n leave_overall_progress: bool = True,\n show_epoch_progress: bool = True,\n show_overall_progress: bool = True,\n ):\n\n try:\n # import tqdm here because tqdm is not a required package\n # for addons\n import tqdm\n\n version_message = \"Please update your TQDM version to >= 4.36.1, \"\n \"you have version {}. To update, run !pip install -U tqdm\"\n assert tqdm.__version__ >= \"4.36.1\", version_message.format(\n tqdm.__version__\n )\n from tqdm.auto import tqdm\n\n self.tqdm = tqdm\n except ImportError:\n raise ImportError(\"Please install tqdm via pip install tqdm\")\n\n self.metrics_separator = metrics_separator\n self.overall_bar_format = overall_bar_format\n self.epoch_bar_format = epoch_bar_format\n self.leave_epoch_progress = leave_epoch_progress\n self.leave_overall_progress = leave_overall_progress\n self.show_epoch_progress = show_epoch_progress\n self.show_overall_progress = show_overall_progress\n self.metrics_format = metrics_format\n\n # compute update interval (inverse of update per second)\n self.update_interval = 1 / update_per_second\n\n self.last_update_time = time.time()\n self.overall_progress_tqdm = None\n self.epoch_progress_tqdm = None\n self.num_epochs = None\n self.logs = None\n super().__init__()\n\n def on_train_begin(self, logs=None):\n self.num_epochs = self.params[\"epochs\"]\n\n if self.show_overall_progress:\n self.overall_progress_tqdm = self.tqdm(\n desc=\"Training\",\n total=self.num_epochs,\n bar_format=self.overall_bar_format,\n leave=self.leave_overall_progress,\n dynamic_ncols=True,\n unit=\"epochs\",\n )\n\n # set counting mode\n self.mode = \"steps\"\n self.total_steps = self.params[\"steps\"]\n\n def on_train_end(self, logs={}):\n if self.show_overall_progress:\n self.overall_progress_tqdm.close()\n\n def on_epoch_begin(self, epoch, logs={}):\n current_epoch_description = \"Epoch {epoch}/{num_epochs}\".format(\n epoch=epoch + 1, num_epochs=self.num_epochs\n )\n\n if self.show_epoch_progress:\n print(current_epoch_description)\n self.epoch_progress_tqdm = self.tqdm(\n total=self.total_steps,\n bar_format=self.epoch_bar_format,\n leave=self.leave_epoch_progress,\n dynamic_ncols=True,\n unit=self.mode,\n )\n\n self.num_samples_seen = 0\n self.steps_to_update = 0\n self.steps_so_far = 0\n self.logs = defaultdict(float)\n\n def on_epoch_end(self, epoch, logs={}):\n\n if self.show_epoch_progress:\n metrics = self.format_metrics(logs)\n self.epoch_progress_tqdm.desc = metrics\n\n # set miniters and mininterval to 0 so last update displays\n self.epoch_progress_tqdm.miniters = 0\n self.epoch_progress_tqdm.mininterval = 0\n\n # update the rest of the steps in epoch progress bar\n self.epoch_progress_tqdm.update(\n self.total_steps - self.epoch_progress_tqdm.n\n )\n 
self.epoch_progress_tqdm.close()\n\n if self.show_overall_progress:\n self.overall_progress_tqdm.update(1)\n\n def on_batch_end(self, batch, logs={}):\n if self.mode == \"samples\":\n batch_size = logs[\"size\"]\n else:\n batch_size = 1\n\n self.num_samples_seen += batch_size\n self.steps_to_update += 1\n self.steps_so_far += 1\n\n if self.steps_so_far < self.total_steps:\n\n for metric, value in logs.items():\n self.logs[metric] += value * batch_size\n\n now = time.time()\n time_diff = now - self.last_update_time\n if self.show_epoch_progress and time_diff >= self.update_interval:\n\n # update the epoch progress bar\n metrics = self.format_metrics(self.logs, self.num_samples_seen)\n self.epoch_progress_tqdm.desc = metrics\n self.epoch_progress_tqdm.update(self.steps_to_update)\n\n # reset steps to update\n self.steps_to_update = 0\n\n # update timestamp for last update\n self.last_update_time = now\n\n def format_metrics(self, logs={}, factor=1):\n \"\"\"Format metrics in logs into a string.\n\n Arguments:\n logs: dictionary of metrics and their values. Defaults to\n empty dictionary.\n factor (int): The factor we want to divide the metrics in logs\n by, useful when we are computing the logs after each batch.\n Defaults to 1.\n\n Returns:\n metrics_string: a string displaying metrics using the given\n formators passed in through the constructor.\n \"\"\"\n\n metric_value_pairs = []\n for key, value in logs.items():\n if key in [\"batch\", \"size\"]:\n continue\n pair = self.metrics_format.format(name=key, value=value / factor)\n metric_value_pairs.append(pair)\n metrics_string = self.metrics_separator.join(metric_value_pairs)\n return metrics_string\n\n def get_config(self):\n config = {\n \"metrics_separator\": self.metrics_separator,\n \"overall_bar_format\": self.overall_bar_format,\n \"epoch_bar_format\": self.epoch_bar_format,\n \"leave_epoch_progress\": self.leave_epoch_progress,\n \"leave_overall_progress\": self.leave_overall_progress,\n \"show_epoch_progress\": self.show_epoch_progress,\n \"show_overall_progress\": self.show_overall_progress,\n }\n\n base_config = super().get_config()\n return {**base_config, **config}\n", "path": "tensorflow_addons/callbacks/tqdm_progress_bar.py"}]}
| 3,600 | 422 |
gh_patches_debug_22397
|
rasdani/github-patches
|
git_diff
|
openvstorage__framework-1136
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
vPool GUI work
- Add vPool wizard - Step 1
- [x] Remove backend type selection
- Add vPool wizard - Step 2
- [x] "Fragment Cache" should become "Cache"
- [x] "Place Fragment Cache on disk" should be "Use local disk"
- [x] "Use another ALBA Backend as Fragment Cache" should be "Use another Backend Cache"
- Add vPool wizard - Step 3
- [x] Replace "StorageDriver Caching Information" by "Storage Driver Write Buffer"
- [x] Remove Strategy
- [x] Remove Deduplication
- Add vPool wizard - Step 4
- [x] https://github.com/openvstorage/framework/issues/783
- vPool overview page
- [x] Remove Type, Connection and Login and replace by OVS Backend & preset
- vPool detail page
- [x] Add backend info https://github.com/openvstorage/framework/issues/735
- [x] Under configuration remove "Cache strategy", "Dedupe Mode"
- [x] Under Mgmt actions add a link to the accelerated ALBA for each storage router (link to the backend)
- [x] Under Mgmt actions make the Storage routers clickable
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ovs/dal/migration/ovsmigrator.py`
Content:
```
1 # Copyright (C) 2016 iNuron NV
2 #
3 # This file is part of Open vStorage Open Source Edition (OSE),
4 # as available from
5 #
6 # http://www.openvstorage.org and
7 # http://www.openvstorage.com.
8 #
9 # This file is free software; you can redistribute it and/or modify it
10 # under the terms of the GNU Affero General Public License v3 (GNU AGPLv3)
11 # as published by the Free Software Foundation, in version 3 as it comes
12 # in the LICENSE.txt file of the Open vStorage OSE distribution.
13 #
14 # Open vStorage is distributed in the hope that it will be useful,
15 # but WITHOUT ANY WARRANTY of any kind.
16
17 """
18 OVS migration module
19 """
20
21 import hashlib
22 import random
23 import string
24
25
26 class OVSMigrator(object):
27 """
28 Handles all model related migrations
29 """
30
31 identifier = 'ovs'
32 THIS_VERSION = 11
33
34 def __init__(self):
35 """ Init method """
36 pass
37
38 @staticmethod
39 def migrate(previous_version):
40 """
41 Migrates from a given version to the current version. It uses 'previous_version' to be smart
42 wherever possible, but the code should be able to migrate any version towards the expected version.
43 When this is not possible, the code can set a minimum version and raise when it is not met.
44 :param previous_version: The previous version from which to start the migration
45 :type previous_version: float
46 """
47
48 working_version = previous_version
49
50 if working_version == 0:
51 # Initial version:
52 # * Set the version to THIS RELEASE version
53
54 from ovs.dal.hybrids.user import User
55 from ovs.dal.hybrids.group import Group
56 from ovs.dal.hybrids.role import Role
57 from ovs.dal.hybrids.client import Client
58 from ovs.dal.hybrids.j_rolegroup import RoleGroup
59 from ovs.dal.hybrids.j_roleclient import RoleClient
60 from ovs.dal.hybrids.backendtype import BackendType
61 from ovs.dal.hybrids.servicetype import ServiceType
62 from ovs.dal.hybrids.branding import Branding
63 from ovs.dal.lists.backendtypelist import BackendTypeList
64
65 # Create groups
66 admin_group = Group()
67 admin_group.name = 'administrators'
68 admin_group.description = 'Administrators'
69 admin_group.save()
70 viewers_group = Group()
71 viewers_group.name = 'viewers'
72 viewers_group.description = 'Viewers'
73 viewers_group.save()
74
75 # Create users
76 admin = User()
77 admin.username = 'admin'
78 admin.password = hashlib.sha256('admin').hexdigest()
79 admin.is_active = True
80 admin.group = admin_group
81 admin.save()
82
83 # Create internal OAuth 2 clients
84 admin_pw_client = Client()
85 admin_pw_client.ovs_type = 'INTERNAL'
86 admin_pw_client.grant_type = 'PASSWORD'
87 admin_pw_client.user = admin
88 admin_pw_client.save()
89 admin_cc_client = Client()
90 admin_cc_client.ovs_type = 'INTERNAL'
91 admin_cc_client.grant_type = 'CLIENT_CREDENTIALS'
92 admin_cc_client.client_secret = ''.join(random.choice(string.ascii_letters +
93 string.digits +
94 '|_=+*#@!/-[]{}<>.?,\'";:~')
95 for _ in range(128))
96 admin_cc_client.user = admin
97 admin_cc_client.save()
98
99 # Create roles
100 read_role = Role()
101 read_role.code = 'read'
102 read_role.name = 'Read'
103 read_role.description = 'Can read objects'
104 read_role.save()
105 write_role = Role()
106 write_role.code = 'write'
107 write_role.name = 'Write'
108 write_role.description = 'Can write objects'
109 write_role.save()
110 manage_role = Role()
111 manage_role.code = 'manage'
112 manage_role.name = 'Manage'
113 manage_role.description = 'Can manage the system'
114 manage_role.save()
115
116 # Attach groups to roles
117 mapping = [
118 (admin_group, [read_role, write_role, manage_role]),
119 (viewers_group, [read_role])
120 ]
121 for setting in mapping:
122 for role in setting[1]:
123 rolegroup = RoleGroup()
124 rolegroup.group = setting[0]
125 rolegroup.role = role
126 rolegroup.save()
127 for user in setting[0].users:
128 for role in setting[1]:
129 for client in user.clients:
130 roleclient = RoleClient()
131 roleclient.client = client
132 roleclient.role = role
133 roleclient.save()
134
135 # Add service types
136 for service_type_info in [ServiceType.SERVICE_TYPES.MD_SERVER, ServiceType.SERVICE_TYPES.ALBA_PROXY, ServiceType.SERVICE_TYPES.ARAKOON]:
137 service_type = ServiceType()
138 service_type.name = service_type_info
139 service_type.save()
140
141 # Branding
142 branding = Branding()
143 branding.name = 'Default'
144 branding.description = 'Default bootstrap theme'
145 branding.css = 'bootstrap-default.min.css'
146 branding.productname = 'Open vStorage'
147 branding.is_default = True
148 branding.save()
149 slate = Branding()
150 slate.name = 'Slate'
151 slate.description = 'Dark bootstrap theme'
152 slate.css = 'bootstrap-slate.min.css'
153 slate.productname = 'Open vStorage'
154 slate.is_default = False
155 slate.save()
156
157 # From here on, all actual migration should happen to get to the expected state for THIS RELEASE
158 elif working_version < OVSMigrator.THIS_VERSION:
159 # Migrate unique constraints
160 from ovs.dal.helpers import HybridRunner, Descriptor
161 from ovs.extensions.storage.persistentfactory import PersistentFactory
162 client = PersistentFactory.get_client()
163 hybrid_structure = HybridRunner.get_hybrids()
164 for class_descriptor in hybrid_structure.values():
165 cls = Descriptor().load(class_descriptor).get_object()
166 classname = cls.__name__.lower()
167 unique_key = 'ovs_unique_{0}_{{0}}_'.format(classname)
168 uniques = []
169 # noinspection PyProtectedMember
170 for prop in cls._properties:
171 if prop.unique is True and len([k for k in client.prefix(unique_key.format(prop.name))]) == 0:
172 uniques.append(prop.name)
173 if len(uniques) > 0:
174 prefix = 'ovs_data_{0}_'.format(classname)
175 for key in client.prefix(prefix):
176 data = client.get(key)
177 for property_name in uniques:
178 ukey = '{0}{1}'.format(unique_key.format(property_name), hashlib.sha1(str(data[property_name])).hexdigest())
179 client.set(ukey, key)
180
181 # Complete rework of the way we detect devices to assign roles or use as ASD
182 # Allow loop-, raid-, nvme-, ??-devices and logical volumes as ASD (https://github.com/openvstorage/framework/issues/792)
183 from ovs.dal.lists.storagerouterlist import StorageRouterList
184 from ovs.extensions.generic.sshclient import SSHClient, UnableToConnectException
185 from ovs.lib.disk import DiskController
186
187 for storagerouter in StorageRouterList.get_storagerouters():
188 try:
189 client = SSHClient(storagerouter, username='root')
190 except UnableToConnectException:
191 raise
192
193 # Retrieve all symlinks for all devices
194 # Example of name_alias_mapping:
195 # {'/dev/md0': ['/dev/disk/by-id/md-uuid-ad2de634:26d97253:5eda0a23:96986b76', '/dev/disk/by-id/md-name-OVS-1:0'],
196 # '/dev/sda': ['/dev/disk/by-path/pci-0000:03:00.0-sas-0x5000c295fe2ff771-lun-0'],
197 # '/dev/sda1': ['/dev/disk/by-uuid/e3e0bc62-4edc-4c6b-a6ce-1f39e8f27e41', '/dev/disk/by-path/pci-0000:03:00.0-sas-0x5000c295fe2ff771-lun-0-part1']}
198 name_alias_mapping = {}
199 alias_name_mapping = {}
200 for path_type in client.dir_list(directory='/dev/disk'):
201 if path_type in ['by-uuid', 'by-partuuid']: # UUIDs can change after creating a filesystem on a partition
202 continue
203 directory = '/dev/disk/{0}'.format(path_type)
204 for symlink in client.dir_list(directory=directory):
205 symlink_path = '{0}/{1}'.format(directory, symlink)
206 link = client.file_read_link(symlink_path)
207 if link not in name_alias_mapping:
208 name_alias_mapping[link] = []
209 name_alias_mapping[link].append(symlink_path)
210 alias_name_mapping[symlink_path] = link
211
212 for disk in storagerouter.disks:
213 if disk.aliases is None:
214 # noinspection PyProtectedMember
215 device_path = '/dev/{0}'.format(disk.name)
216 disk.aliases = name_alias_mapping.get(device_path, [device_path])
217 disk.save()
218 for partition in disk.partitions:
219 # noinspection PyProtectedMember
220 partition_device = alias_name_mapping.get(partition._data['path'])
221 if partition.aliases is None:
222 if partition_device is None:
223 partition.aliases = []
224 partition.save()
225 continue
226 partition.aliases = name_alias_mapping.get(partition_device, [])
227 partition.save()
228
229 DiskController.sync_with_reality(storagerouter_guid=storagerouter.guid)
230
231 # Only support ALBA backend type
232 from ovs.dal.lists.backendtypelist import BackendTypeList
233 for backend_type in BackendTypeList.get_backend_types():
234 if backend_type.code != 'alba':
235 backend_type.delete()
236
237 # Reformat the vpool.metadata information
238 from ovs.dal.lists.vpoollist import VPoolList
239 for vpool in VPoolList.get_vpools():
240 new_metadata = {}
241 for metadata_key, value in vpool.metadata.items():
242 new_info = {}
243 storagerouter_guids = [key for key in vpool.metadata.keys() if not key.startswith('backend')]
244 if isinstance(value, dict):
245 read_cache = value.get('backend_info', {}).get('fragment_cache_on_read', True)
246 write_cache = value.get('backend_info', {}).get('fragment_cache_on_write', False)
247 new_info['backend_info'] = {'alba_backend_guid': value.get('backend_guid'),
248 'backend_guid': None,
249 'frag_size': value.get('backend_info', {}).get('frag_size'),
250 'name': value.get('name'),
251 'policies': value.get('backend_info', {}).get('policies'),
252 'preset': value.get('preset'),
253 'sco_size': value.get('backend_info', {}).get('sco_size'),
254 'total_size': value.get('backend_info', {}).get('total_size')}
255 new_info['arakoon_config'] = value.get('arakoon_config')
256 new_info['connection_info'] = {'host': value.get('connection', {}).get('host', ''),
257 'port': value.get('connection', {}).get('port', ''),
258 'local': value.get('connection', {}).get('local', ''),
259 'client_id': value.get('connection', {}).get('client_id', ''),
260 'client_secret': value.get('connection', {}).get('client_secret', '')}
261 if metadata_key == 'backend':
262 new_info['caching_info'] = dict((sr_guid, {'fragment_cache_on_read': read_cache, 'fragment_cache_on_write': write_cache}) for sr_guid in storagerouter_guids)
263 if metadata_key in storagerouter_guids:
264 metadata_key = 'backend_aa_{0}'.format(metadata_key)
265 new_metadata[metadata_key] = new_info
266 vpool.metadata = new_metadata
267 vpool.save()
268
269 return OVSMigrator.THIS_VERSION
270
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ovs/dal/migration/ovsmigrator.py b/ovs/dal/migration/ovsmigrator.py
--- a/ovs/dal/migration/ovsmigrator.py
+++ b/ovs/dal/migration/ovsmigrator.py
@@ -57,7 +57,6 @@
from ovs.dal.hybrids.client import Client
from ovs.dal.hybrids.j_rolegroup import RoleGroup
from ovs.dal.hybrids.j_roleclient import RoleClient
- from ovs.dal.hybrids.backendtype import BackendType
from ovs.dal.hybrids.servicetype import ServiceType
from ovs.dal.hybrids.branding import Branding
from ovs.dal.lists.backendtypelist import BackendTypeList
@@ -266,4 +265,11 @@
vpool.metadata = new_metadata
vpool.save()
+ # Removal of READ role
+ from ovs.dal.lists.diskpartitionlist import DiskPartitionList
+ for partition in DiskPartitionList.get_partitions():
+ if 'READ' in partition.roles:
+ partition.roles.remove('READ')
+ partition.save()
+
return OVSMigrator.THIS_VERSION
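A short note on the new migration block (an editorial sketch grounded only in the names visible in the diff above; not part of the dataset record): the same cleanup can be written defensively so it also copes with a duplicated 'READ' entry, while using exactly the calls the patch introduces.
```python
# Sketch: defensive variant of the READ-role removal added by the patch.
# Uses only the API surface shown in the diff (DiskPartitionList, roles, save()).
from ovs.dal.lists.diskpartitionlist import DiskPartitionList

for partition in DiskPartitionList.get_partitions():
    if 'READ' in partition.roles:
        while 'READ' in partition.roles:  # handle any duplicate entries
            partition.roles.remove('READ')
        partition.save()
```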
|
{"golden_diff": "diff --git a/ovs/dal/migration/ovsmigrator.py b/ovs/dal/migration/ovsmigrator.py\n--- a/ovs/dal/migration/ovsmigrator.py\n+++ b/ovs/dal/migration/ovsmigrator.py\n@@ -57,7 +57,6 @@\n from ovs.dal.hybrids.client import Client\n from ovs.dal.hybrids.j_rolegroup import RoleGroup\n from ovs.dal.hybrids.j_roleclient import RoleClient\n- from ovs.dal.hybrids.backendtype import BackendType\n from ovs.dal.hybrids.servicetype import ServiceType\n from ovs.dal.hybrids.branding import Branding\n from ovs.dal.lists.backendtypelist import BackendTypeList\n@@ -266,4 +265,11 @@\n vpool.metadata = new_metadata\n vpool.save()\n \n+ # Removal of READ role\n+ from ovs.dal.lists.diskpartitionlist import DiskPartitionList\n+ for partition in DiskPartitionList.get_partitions():\n+ if 'READ' in partition.roles:\n+ partition.roles.remove('READ')\n+ partition.save()\n+\n return OVSMigrator.THIS_VERSION\n", "issue": "vPool GUI work\n- Add vPool wizard - Step 1\r\n - [x] Remove backend type selection\r\n -Add vPool wizard - Step 2\r\n - [x] \"Fragment Cache\" should become \"Cache\"\r\n - [x] \"Place Fragment Cache on disk\" should be \"Use local disk\"\r\n - [x] \"Use another ALBA Backend as Fragment Cache\" should be \"Use another Backend Cache\"\r\n- Add vPool wizard - Step 3\r\n - [x] Replace \"StorageDriver Caching Information\" by \"Storage Driver Write Buffer\"\r\n - [x] Remove Strategy\r\n - [x] Remove Deduplication\r\n- Add vPool wizard - Step 4\r\n - [x] https://github.com/openvstorage/framework/issues/783\r\n- vPool overview page\r\n - [x] Remove Type, Connection and Login and replace by OVS Backend & preset\r\n- vPool detail page\r\n - [x] Add backend info https://github.com/openvstorage/framework/issues/735\r\n - [x] Under configuration remove \"Cache strategy\", \"Dedupe Mode\"\r\n - [x] Under Mgmt actions add a link to the accelerated ALBA for each storage router (link to the backend)\r\n - [x] Under Mgmt actions make the Storage routers clickable\n", "before_files": [{"content": "# Copyright (C) 2016 iNuron NV\n#\n# This file is part of Open vStorage Open Source Edition (OSE),\n# as available from\n#\n# http://www.openvstorage.org and\n# http://www.openvstorage.com.\n#\n# This file is free software; you can redistribute it and/or modify it\n# under the terms of the GNU Affero General Public License v3 (GNU AGPLv3)\n# as published by the Free Software Foundation, in version 3 as it comes\n# in the LICENSE.txt file of the Open vStorage OSE distribution.\n#\n# Open vStorage is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY of any kind.\n\n\"\"\"\nOVS migration module\n\"\"\"\n\nimport hashlib\nimport random\nimport string\n\n\nclass OVSMigrator(object):\n \"\"\"\n Handles all model related migrations\n \"\"\"\n\n identifier = 'ovs'\n THIS_VERSION = 11\n\n def __init__(self):\n \"\"\" Init method \"\"\"\n pass\n\n @staticmethod\n def migrate(previous_version):\n \"\"\"\n Migrates from a given version to the current version. 
It uses 'previous_version' to be smart\n wherever possible, but the code should be able to migrate any version towards the expected version.\n When this is not possible, the code can set a minimum version and raise when it is not met.\n :param previous_version: The previous version from which to start the migration\n :type previous_version: float\n \"\"\"\n\n working_version = previous_version\n\n if working_version == 0:\n # Initial version:\n # * Set the version to THIS RELEASE version\n\n from ovs.dal.hybrids.user import User\n from ovs.dal.hybrids.group import Group\n from ovs.dal.hybrids.role import Role\n from ovs.dal.hybrids.client import Client\n from ovs.dal.hybrids.j_rolegroup import RoleGroup\n from ovs.dal.hybrids.j_roleclient import RoleClient\n from ovs.dal.hybrids.backendtype import BackendType\n from ovs.dal.hybrids.servicetype import ServiceType\n from ovs.dal.hybrids.branding import Branding\n from ovs.dal.lists.backendtypelist import BackendTypeList\n\n # Create groups\n admin_group = Group()\n admin_group.name = 'administrators'\n admin_group.description = 'Administrators'\n admin_group.save()\n viewers_group = Group()\n viewers_group.name = 'viewers'\n viewers_group.description = 'Viewers'\n viewers_group.save()\n\n # Create users\n admin = User()\n admin.username = 'admin'\n admin.password = hashlib.sha256('admin').hexdigest()\n admin.is_active = True\n admin.group = admin_group\n admin.save()\n\n # Create internal OAuth 2 clients\n admin_pw_client = Client()\n admin_pw_client.ovs_type = 'INTERNAL'\n admin_pw_client.grant_type = 'PASSWORD'\n admin_pw_client.user = admin\n admin_pw_client.save()\n admin_cc_client = Client()\n admin_cc_client.ovs_type = 'INTERNAL'\n admin_cc_client.grant_type = 'CLIENT_CREDENTIALS'\n admin_cc_client.client_secret = ''.join(random.choice(string.ascii_letters +\n string.digits +\n '|_=+*#@!/-[]{}<>.?,\\'\";:~')\n for _ in range(128))\n admin_cc_client.user = admin\n admin_cc_client.save()\n\n # Create roles\n read_role = Role()\n read_role.code = 'read'\n read_role.name = 'Read'\n read_role.description = 'Can read objects'\n read_role.save()\n write_role = Role()\n write_role.code = 'write'\n write_role.name = 'Write'\n write_role.description = 'Can write objects'\n write_role.save()\n manage_role = Role()\n manage_role.code = 'manage'\n manage_role.name = 'Manage'\n manage_role.description = 'Can manage the system'\n manage_role.save()\n\n # Attach groups to roles\n mapping = [\n (admin_group, [read_role, write_role, manage_role]),\n (viewers_group, [read_role])\n ]\n for setting in mapping:\n for role in setting[1]:\n rolegroup = RoleGroup()\n rolegroup.group = setting[0]\n rolegroup.role = role\n rolegroup.save()\n for user in setting[0].users:\n for role in setting[1]:\n for client in user.clients:\n roleclient = RoleClient()\n roleclient.client = client\n roleclient.role = role\n roleclient.save()\n\n # Add service types\n for service_type_info in [ServiceType.SERVICE_TYPES.MD_SERVER, ServiceType.SERVICE_TYPES.ALBA_PROXY, ServiceType.SERVICE_TYPES.ARAKOON]:\n service_type = ServiceType()\n service_type.name = service_type_info\n service_type.save()\n\n # Branding\n branding = Branding()\n branding.name = 'Default'\n branding.description = 'Default bootstrap theme'\n branding.css = 'bootstrap-default.min.css'\n branding.productname = 'Open vStorage'\n branding.is_default = True\n branding.save()\n slate = Branding()\n slate.name = 'Slate'\n slate.description = 'Dark bootstrap theme'\n slate.css = 'bootstrap-slate.min.css'\n 
slate.productname = 'Open vStorage'\n slate.is_default = False\n slate.save()\n\n # From here on, all actual migration should happen to get to the expected state for THIS RELEASE\n elif working_version < OVSMigrator.THIS_VERSION:\n # Migrate unique constraints\n from ovs.dal.helpers import HybridRunner, Descriptor\n from ovs.extensions.storage.persistentfactory import PersistentFactory\n client = PersistentFactory.get_client()\n hybrid_structure = HybridRunner.get_hybrids()\n for class_descriptor in hybrid_structure.values():\n cls = Descriptor().load(class_descriptor).get_object()\n classname = cls.__name__.lower()\n unique_key = 'ovs_unique_{0}_{{0}}_'.format(classname)\n uniques = []\n # noinspection PyProtectedMember\n for prop in cls._properties:\n if prop.unique is True and len([k for k in client.prefix(unique_key.format(prop.name))]) == 0:\n uniques.append(prop.name)\n if len(uniques) > 0:\n prefix = 'ovs_data_{0}_'.format(classname)\n for key in client.prefix(prefix):\n data = client.get(key)\n for property_name in uniques:\n ukey = '{0}{1}'.format(unique_key.format(property_name), hashlib.sha1(str(data[property_name])).hexdigest())\n client.set(ukey, key)\n\n # Complete rework of the way we detect devices to assign roles or use as ASD\n # Allow loop-, raid-, nvme-, ??-devices and logical volumes as ASD (https://github.com/openvstorage/framework/issues/792)\n from ovs.dal.lists.storagerouterlist import StorageRouterList\n from ovs.extensions.generic.sshclient import SSHClient, UnableToConnectException\n from ovs.lib.disk import DiskController\n\n for storagerouter in StorageRouterList.get_storagerouters():\n try:\n client = SSHClient(storagerouter, username='root')\n except UnableToConnectException:\n raise\n\n # Retrieve all symlinks for all devices\n # Example of name_alias_mapping:\n # {'/dev/md0': ['/dev/disk/by-id/md-uuid-ad2de634:26d97253:5eda0a23:96986b76', '/dev/disk/by-id/md-name-OVS-1:0'],\n # '/dev/sda': ['/dev/disk/by-path/pci-0000:03:00.0-sas-0x5000c295fe2ff771-lun-0'],\n # '/dev/sda1': ['/dev/disk/by-uuid/e3e0bc62-4edc-4c6b-a6ce-1f39e8f27e41', '/dev/disk/by-path/pci-0000:03:00.0-sas-0x5000c295fe2ff771-lun-0-part1']}\n name_alias_mapping = {}\n alias_name_mapping = {}\n for path_type in client.dir_list(directory='/dev/disk'):\n if path_type in ['by-uuid', 'by-partuuid']: # UUIDs can change after creating a filesystem on a partition\n continue\n directory = '/dev/disk/{0}'.format(path_type)\n for symlink in client.dir_list(directory=directory):\n symlink_path = '{0}/{1}'.format(directory, symlink)\n link = client.file_read_link(symlink_path)\n if link not in name_alias_mapping:\n name_alias_mapping[link] = []\n name_alias_mapping[link].append(symlink_path)\n alias_name_mapping[symlink_path] = link\n\n for disk in storagerouter.disks:\n if disk.aliases is None:\n # noinspection PyProtectedMember\n device_path = '/dev/{0}'.format(disk.name)\n disk.aliases = name_alias_mapping.get(device_path, [device_path])\n disk.save()\n for partition in disk.partitions:\n # noinspection PyProtectedMember\n partition_device = alias_name_mapping.get(partition._data['path'])\n if partition.aliases is None:\n if partition_device is None:\n partition.aliases = []\n partition.save()\n continue\n partition.aliases = name_alias_mapping.get(partition_device, [])\n partition.save()\n\n DiskController.sync_with_reality(storagerouter_guid=storagerouter.guid)\n\n # Only support ALBA backend type\n from ovs.dal.lists.backendtypelist import BackendTypeList\n for backend_type in 
BackendTypeList.get_backend_types():\n if backend_type.code != 'alba':\n backend_type.delete()\n\n # Reformat the vpool.metadata information\n from ovs.dal.lists.vpoollist import VPoolList\n for vpool in VPoolList.get_vpools():\n new_metadata = {}\n for metadata_key, value in vpool.metadata.items():\n new_info = {}\n storagerouter_guids = [key for key in vpool.metadata.keys() if not key.startswith('backend')]\n if isinstance(value, dict):\n read_cache = value.get('backend_info', {}).get('fragment_cache_on_read', True)\n write_cache = value.get('backend_info', {}).get('fragment_cache_on_write', False)\n new_info['backend_info'] = {'alba_backend_guid': value.get('backend_guid'),\n 'backend_guid': None,\n 'frag_size': value.get('backend_info', {}).get('frag_size'),\n 'name': value.get('name'),\n 'policies': value.get('backend_info', {}).get('policies'),\n 'preset': value.get('preset'),\n 'sco_size': value.get('backend_info', {}).get('sco_size'),\n 'total_size': value.get('backend_info', {}).get('total_size')}\n new_info['arakoon_config'] = value.get('arakoon_config')\n new_info['connection_info'] = {'host': value.get('connection', {}).get('host', ''),\n 'port': value.get('connection', {}).get('port', ''),\n 'local': value.get('connection', {}).get('local', ''),\n 'client_id': value.get('connection', {}).get('client_id', ''),\n 'client_secret': value.get('connection', {}).get('client_secret', '')}\n if metadata_key == 'backend':\n new_info['caching_info'] = dict((sr_guid, {'fragment_cache_on_read': read_cache, 'fragment_cache_on_write': write_cache}) for sr_guid in storagerouter_guids)\n if metadata_key in storagerouter_guids:\n metadata_key = 'backend_aa_{0}'.format(metadata_key)\n new_metadata[metadata_key] = new_info\n vpool.metadata = new_metadata\n vpool.save()\n\n return OVSMigrator.THIS_VERSION\n", "path": "ovs/dal/migration/ovsmigrator.py"}], "after_files": [{"content": "# Copyright (C) 2016 iNuron NV\n#\n# This file is part of Open vStorage Open Source Edition (OSE),\n# as available from\n#\n# http://www.openvstorage.org and\n# http://www.openvstorage.com.\n#\n# This file is free software; you can redistribute it and/or modify it\n# under the terms of the GNU Affero General Public License v3 (GNU AGPLv3)\n# as published by the Free Software Foundation, in version 3 as it comes\n# in the LICENSE.txt file of the Open vStorage OSE distribution.\n#\n# Open vStorage is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY of any kind.\n\n\"\"\"\nOVS migration module\n\"\"\"\n\nimport hashlib\nimport random\nimport string\n\n\nclass OVSMigrator(object):\n \"\"\"\n Handles all model related migrations\n \"\"\"\n\n identifier = 'ovs'\n THIS_VERSION = 11\n\n def __init__(self):\n \"\"\" Init method \"\"\"\n pass\n\n @staticmethod\n def migrate(previous_version):\n \"\"\"\n Migrates from a given version to the current version. 
It uses 'previous_version' to be smart\n wherever possible, but the code should be able to migrate any version towards the expected version.\n When this is not possible, the code can set a minimum version and raise when it is not met.\n :param previous_version: The previous version from which to start the migration\n :type previous_version: float\n \"\"\"\n\n working_version = previous_version\n\n if working_version == 0:\n # Initial version:\n # * Set the version to THIS RELEASE version\n\n from ovs.dal.hybrids.user import User\n from ovs.dal.hybrids.group import Group\n from ovs.dal.hybrids.role import Role\n from ovs.dal.hybrids.client import Client\n from ovs.dal.hybrids.j_rolegroup import RoleGroup\n from ovs.dal.hybrids.j_roleclient import RoleClient\n from ovs.dal.hybrids.servicetype import ServiceType\n from ovs.dal.hybrids.branding import Branding\n from ovs.dal.lists.backendtypelist import BackendTypeList\n\n # Create groups\n admin_group = Group()\n admin_group.name = 'administrators'\n admin_group.description = 'Administrators'\n admin_group.save()\n viewers_group = Group()\n viewers_group.name = 'viewers'\n viewers_group.description = 'Viewers'\n viewers_group.save()\n\n # Create users\n admin = User()\n admin.username = 'admin'\n admin.password = hashlib.sha256('admin').hexdigest()\n admin.is_active = True\n admin.group = admin_group\n admin.save()\n\n # Create internal OAuth 2 clients\n admin_pw_client = Client()\n admin_pw_client.ovs_type = 'INTERNAL'\n admin_pw_client.grant_type = 'PASSWORD'\n admin_pw_client.user = admin\n admin_pw_client.save()\n admin_cc_client = Client()\n admin_cc_client.ovs_type = 'INTERNAL'\n admin_cc_client.grant_type = 'CLIENT_CREDENTIALS'\n admin_cc_client.client_secret = ''.join(random.choice(string.ascii_letters +\n string.digits +\n '|_=+*#@!/-[]{}<>.?,\\'\";:~')\n for _ in range(128))\n admin_cc_client.user = admin\n admin_cc_client.save()\n\n # Create roles\n read_role = Role()\n read_role.code = 'read'\n read_role.name = 'Read'\n read_role.description = 'Can read objects'\n read_role.save()\n write_role = Role()\n write_role.code = 'write'\n write_role.name = 'Write'\n write_role.description = 'Can write objects'\n write_role.save()\n manage_role = Role()\n manage_role.code = 'manage'\n manage_role.name = 'Manage'\n manage_role.description = 'Can manage the system'\n manage_role.save()\n\n # Attach groups to roles\n mapping = [\n (admin_group, [read_role, write_role, manage_role]),\n (viewers_group, [read_role])\n ]\n for setting in mapping:\n for role in setting[1]:\n rolegroup = RoleGroup()\n rolegroup.group = setting[0]\n rolegroup.role = role\n rolegroup.save()\n for user in setting[0].users:\n for role in setting[1]:\n for client in user.clients:\n roleclient = RoleClient()\n roleclient.client = client\n roleclient.role = role\n roleclient.save()\n\n # Add service types\n for service_type_info in [ServiceType.SERVICE_TYPES.MD_SERVER, ServiceType.SERVICE_TYPES.ALBA_PROXY, ServiceType.SERVICE_TYPES.ARAKOON]:\n service_type = ServiceType()\n service_type.name = service_type_info\n service_type.save()\n\n # Branding\n branding = Branding()\n branding.name = 'Default'\n branding.description = 'Default bootstrap theme'\n branding.css = 'bootstrap-default.min.css'\n branding.productname = 'Open vStorage'\n branding.is_default = True\n branding.save()\n slate = Branding()\n slate.name = 'Slate'\n slate.description = 'Dark bootstrap theme'\n slate.css = 'bootstrap-slate.min.css'\n slate.productname = 'Open vStorage'\n slate.is_default = False\n 
slate.save()\n\n # From here on, all actual migration should happen to get to the expected state for THIS RELEASE\n elif working_version < OVSMigrator.THIS_VERSION:\n # Migrate unique constraints\n from ovs.dal.helpers import HybridRunner, Descriptor\n from ovs.extensions.storage.persistentfactory import PersistentFactory\n client = PersistentFactory.get_client()\n hybrid_structure = HybridRunner.get_hybrids()\n for class_descriptor in hybrid_structure.values():\n cls = Descriptor().load(class_descriptor).get_object()\n classname = cls.__name__.lower()\n unique_key = 'ovs_unique_{0}_{{0}}_'.format(classname)\n uniques = []\n # noinspection PyProtectedMember\n for prop in cls._properties:\n if prop.unique is True and len([k for k in client.prefix(unique_key.format(prop.name))]) == 0:\n uniques.append(prop.name)\n if len(uniques) > 0:\n prefix = 'ovs_data_{0}_'.format(classname)\n for key in client.prefix(prefix):\n data = client.get(key)\n for property_name in uniques:\n ukey = '{0}{1}'.format(unique_key.format(property_name), hashlib.sha1(str(data[property_name])).hexdigest())\n client.set(ukey, key)\n\n # Complete rework of the way we detect devices to assign roles or use as ASD\n # Allow loop-, raid-, nvme-, ??-devices and logical volumes as ASD (https://github.com/openvstorage/framework/issues/792)\n from ovs.dal.lists.storagerouterlist import StorageRouterList\n from ovs.extensions.generic.sshclient import SSHClient, UnableToConnectException\n from ovs.lib.disk import DiskController\n\n for storagerouter in StorageRouterList.get_storagerouters():\n try:\n client = SSHClient(storagerouter, username='root')\n except UnableToConnectException:\n raise\n\n # Retrieve all symlinks for all devices\n # Example of name_alias_mapping:\n # {'/dev/md0': ['/dev/disk/by-id/md-uuid-ad2de634:26d97253:5eda0a23:96986b76', '/dev/disk/by-id/md-name-OVS-1:0'],\n # '/dev/sda': ['/dev/disk/by-path/pci-0000:03:00.0-sas-0x5000c295fe2ff771-lun-0'],\n # '/dev/sda1': ['/dev/disk/by-uuid/e3e0bc62-4edc-4c6b-a6ce-1f39e8f27e41', '/dev/disk/by-path/pci-0000:03:00.0-sas-0x5000c295fe2ff771-lun-0-part1']}\n name_alias_mapping = {}\n alias_name_mapping = {}\n for path_type in client.dir_list(directory='/dev/disk'):\n if path_type in ['by-uuid', 'by-partuuid']: # UUIDs can change after creating a filesystem on a partition\n continue\n directory = '/dev/disk/{0}'.format(path_type)\n for symlink in client.dir_list(directory=directory):\n symlink_path = '{0}/{1}'.format(directory, symlink)\n link = client.file_read_link(symlink_path)\n if link not in name_alias_mapping:\n name_alias_mapping[link] = []\n name_alias_mapping[link].append(symlink_path)\n alias_name_mapping[symlink_path] = link\n\n for disk in storagerouter.disks:\n if disk.aliases is None:\n # noinspection PyProtectedMember\n device_path = '/dev/{0}'.format(disk.name)\n disk.aliases = name_alias_mapping.get(device_path, [device_path])\n disk.save()\n for partition in disk.partitions:\n # noinspection PyProtectedMember\n partition_device = alias_name_mapping.get(partition._data['path'])\n if partition.aliases is None:\n if partition_device is None:\n partition.aliases = []\n partition.save()\n continue\n partition.aliases = name_alias_mapping.get(partition_device, [])\n partition.save()\n\n DiskController.sync_with_reality(storagerouter_guid=storagerouter.guid)\n\n # Only support ALBA backend type\n from ovs.dal.lists.backendtypelist import BackendTypeList\n for backend_type in BackendTypeList.get_backend_types():\n if backend_type.code != 'alba':\n 
backend_type.delete()\n\n # Reformat the vpool.metadata information\n from ovs.dal.lists.vpoollist import VPoolList\n for vpool in VPoolList.get_vpools():\n new_metadata = {}\n for metadata_key, value in vpool.metadata.items():\n new_info = {}\n storagerouter_guids = [key for key in vpool.metadata.keys() if not key.startswith('backend')]\n if isinstance(value, dict):\n read_cache = value.get('backend_info', {}).get('fragment_cache_on_read', True)\n write_cache = value.get('backend_info', {}).get('fragment_cache_on_write', False)\n new_info['backend_info'] = {'alba_backend_guid': value.get('backend_guid'),\n 'backend_guid': None,\n 'frag_size': value.get('backend_info', {}).get('frag_size'),\n 'name': value.get('name'),\n 'policies': value.get('backend_info', {}).get('policies'),\n 'preset': value.get('preset'),\n 'sco_size': value.get('backend_info', {}).get('sco_size'),\n 'total_size': value.get('backend_info', {}).get('total_size')}\n new_info['arakoon_config'] = value.get('arakoon_config')\n new_info['connection_info'] = {'host': value.get('connection', {}).get('host', ''),\n 'port': value.get('connection', {}).get('port', ''),\n 'local': value.get('connection', {}).get('local', ''),\n 'client_id': value.get('connection', {}).get('client_id', ''),\n 'client_secret': value.get('connection', {}).get('client_secret', '')}\n if metadata_key == 'backend':\n new_info['caching_info'] = dict((sr_guid, {'fragment_cache_on_read': read_cache, 'fragment_cache_on_write': write_cache}) for sr_guid in storagerouter_guids)\n if metadata_key in storagerouter_guids:\n metadata_key = 'backend_aa_{0}'.format(metadata_key)\n new_metadata[metadata_key] = new_info\n vpool.metadata = new_metadata\n vpool.save()\n\n # Removal of READ role\n from ovs.dal.lists.diskpartitionlist import DiskPartitionList\n for partition in DiskPartitionList.get_partitions():\n if 'READ' in partition.roles:\n partition.roles.remove('READ')\n partition.save()\n\n return OVSMigrator.THIS_VERSION\n", "path": "ovs/dal/migration/ovsmigrator.py"}]}
| 3,908 | 276 |
gh_patches_debug_37636
|
rasdani/github-patches
|
git_diff
|
doccano__doccano-1222
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Enhancement request] Meaningful error on labels naming conflict
Feature description
---------
Try renaming a label to an existing name.
You get a 500 error.
Desired: a meaningful error.
Related: #601, #826.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/views/label.py`
Content:
```
1 import json
2
3 from django.db import IntegrityError, transaction
4 from django.shortcuts import get_object_or_404
5 from rest_framework import generics, status
6 from rest_framework.exceptions import ParseError
7 from rest_framework.parsers import MultiPartParser
8 from rest_framework.permissions import IsAuthenticated
9 from rest_framework.response import Response
10 from rest_framework.views import APIView
11
12 from ..models import Label, Project
13 from ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin
14 from ..serializers import LabelSerializer
15
16
17 class LabelList(generics.ListCreateAPIView):
18 serializer_class = LabelSerializer
19 pagination_class = None
20 permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]
21
22 def get_queryset(self):
23 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
24 return project.labels
25
26 def perform_create(self, serializer):
27 project = get_object_or_404(Project, pk=self.kwargs['project_id'])
28 serializer.save(project=project)
29
30
31 class LabelDetail(generics.RetrieveUpdateDestroyAPIView):
32 queryset = Label.objects.all()
33 serializer_class = LabelSerializer
34 lookup_url_kwarg = 'label_id'
35 permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]
36
37
38 class LabelUploadAPI(APIView):
39 parser_classes = (MultiPartParser,)
40 permission_classes = [IsAuthenticated & IsProjectAdmin]
41
42 @transaction.atomic
43 def post(self, request, *args, **kwargs):
44 if 'file' not in request.data:
45 raise ParseError('Empty content')
46 labels = json.load(request.data['file'])
47 project = get_object_or_404(Project, pk=kwargs['project_id'])
48 try:
49 for label in labels:
50 serializer = LabelSerializer(data=label)
51 serializer.is_valid(raise_exception=True)
52 serializer.save(project=project)
53 return Response(status=status.HTTP_201_CREATED)
54 except IntegrityError:
55 content = {'error': 'IntegrityError: you cannot create a label with same name or shortkey.'}
56 return Response(content, status=status.HTTP_400_BAD_REQUEST)
57
```
Path: `app/api/exceptions.py`
Content:
```
1 from rest_framework import status
2 from rest_framework.exceptions import (APIException, PermissionDenied,
3 ValidationError)
4
5
6 class FileParseException(APIException):
7 status_code = status.HTTP_400_BAD_REQUEST
8 default_detail = 'Invalid file format, line {}: {}'
9 default_code = 'invalid'
10
11 def __init__(self, line_num, line, code=None):
12 detail = self.default_detail.format(line_num, line)
13 super().__init__(detail, code)
14
15
16 class AutoLabelingException(APIException):
17 status_code = status.HTTP_400_BAD_REQUEST
18 default_detail = 'Auto labeling not allowed for the document with labels.'
19
20
21 class AutoLabeliingPermissionDenied(PermissionDenied):
22 default_detail = 'You do not have permission to perform auto labeling.' \
23 'Please ask the project administrators to add you.'
24
25
26 class URLConnectionError(ValidationError):
27 default_detail = 'Failed to establish a connection. Please check the URL or network.'
28
29
30 class AWSTokenError(ValidationError):
31 default_detail = 'The security token included in the request is invalid.'
32
33
34 class SampleDataException(ValidationError):
35 default_detail = 'The response is empty. Maybe the sample data is not appropriate.' \
36 'Please specify another sample data which returns at least one label.'
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/api/exceptions.py b/app/api/exceptions.py
--- a/app/api/exceptions.py
+++ b/app/api/exceptions.py
@@ -34,3 +34,8 @@
class SampleDataException(ValidationError):
default_detail = 'The response is empty. Maybe the sample data is not appropriate.' \
'Please specify another sample data which returns at least one label.'
+
+
+class LabelValidationError(APIException):
+ status_code = status.HTTP_400_BAD_REQUEST
+ default_detail = 'You cannot create a label with same name or shortcut key.'
diff --git a/app/api/views/label.py b/app/api/views/label.py
--- a/app/api/views/label.py
+++ b/app/api/views/label.py
@@ -9,6 +9,7 @@
from rest_framework.response import Response
from rest_framework.views import APIView
+from ..exceptions import LabelValidationError
from ..models import Label, Project
from ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin
from ..serializers import LabelSerializer
@@ -27,6 +28,11 @@
project = get_object_or_404(Project, pk=self.kwargs['project_id'])
serializer.save(project=project)
+ def delete(self, request, *args, **kwargs):
+ delete_ids = request.data['ids']
+ Label.objects.filter(pk__in=delete_ids).delete()
+ return Response(status=status.HTTP_204_NO_CONTENT)
+
class LabelDetail(generics.RetrieveUpdateDestroyAPIView):
queryset = Label.objects.all()
@@ -43,14 +49,14 @@
def post(self, request, *args, **kwargs):
if 'file' not in request.data:
raise ParseError('Empty content')
- labels = json.load(request.data['file'])
project = get_object_or_404(Project, pk=kwargs['project_id'])
try:
- for label in labels:
- serializer = LabelSerializer(data=label)
- serializer.is_valid(raise_exception=True)
- serializer.save(project=project)
+ labels = json.load(request.data['file'])
+ serializer = LabelSerializer(data=labels, many=True)
+ serializer.is_valid(raise_exception=True)
+ serializer.save(project=project)
return Response(status=status.HTTP_201_CREATED)
+ except json.decoder.JSONDecodeError:
+ raise ParseError('The file format is invalid.')
except IntegrityError:
- content = {'error': 'IntegrityError: you cannot create a label with same name or shortkey.'}
- return Response(content, status=status.HTTP_400_BAD_REQUEST)
+ raise LabelValidationError
|
{"golden_diff": "diff --git a/app/api/exceptions.py b/app/api/exceptions.py\n--- a/app/api/exceptions.py\n+++ b/app/api/exceptions.py\n@@ -34,3 +34,8 @@\n class SampleDataException(ValidationError):\n default_detail = 'The response is empty. Maybe the sample data is not appropriate.' \\\n 'Please specify another sample data which returns at least one label.'\n+\n+\n+class LabelValidationError(APIException):\n+ status_code = status.HTTP_400_BAD_REQUEST\n+ default_detail = 'You cannot create a label with same name or shortcut key.'\ndiff --git a/app/api/views/label.py b/app/api/views/label.py\n--- a/app/api/views/label.py\n+++ b/app/api/views/label.py\n@@ -9,6 +9,7 @@\n from rest_framework.response import Response\n from rest_framework.views import APIView\n \n+from ..exceptions import LabelValidationError\n from ..models import Label, Project\n from ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin\n from ..serializers import LabelSerializer\n@@ -27,6 +28,11 @@\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n serializer.save(project=project)\n \n+ def delete(self, request, *args, **kwargs):\n+ delete_ids = request.data['ids']\n+ Label.objects.filter(pk__in=delete_ids).delete()\n+ return Response(status=status.HTTP_204_NO_CONTENT)\n+\n \n class LabelDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Label.objects.all()\n@@ -43,14 +49,14 @@\n def post(self, request, *args, **kwargs):\n if 'file' not in request.data:\n raise ParseError('Empty content')\n- labels = json.load(request.data['file'])\n project = get_object_or_404(Project, pk=kwargs['project_id'])\n try:\n- for label in labels:\n- serializer = LabelSerializer(data=label)\n- serializer.is_valid(raise_exception=True)\n- serializer.save(project=project)\n+ labels = json.load(request.data['file'])\n+ serializer = LabelSerializer(data=labels, many=True)\n+ serializer.is_valid(raise_exception=True)\n+ serializer.save(project=project)\n return Response(status=status.HTTP_201_CREATED)\n+ except json.decoder.JSONDecodeError:\n+ raise ParseError('The file format is invalid.')\n except IntegrityError:\n- content = {'error': 'IntegrityError: you cannot create a label with same name or shortkey.'}\n- return Response(content, status=status.HTTP_400_BAD_REQUEST)\n+ raise LabelValidationError\n", "issue": "[Enhancement request] Meaningful error on labels naming conflict\nFeature description\r\n---------\r\nTry rename a label to an existing name.\r\n\r\nYou get a 500 error.\r\n\r\nDesired: a meaningful error.\r\n\r\nRelated: #601, #826.\n", "before_files": [{"content": "import json\n\nfrom django.db import IntegrityError, transaction\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework import generics, status\nfrom rest_framework.exceptions import ParseError\nfrom rest_framework.parsers import MultiPartParser\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom ..models import Label, Project\nfrom ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin\nfrom ..serializers import LabelSerializer\n\n\nclass LabelList(generics.ListCreateAPIView):\n serializer_class = LabelSerializer\n pagination_class = None\n permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]\n\n def get_queryset(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n return project.labels\n\n def perform_create(self, serializer):\n project = get_object_or_404(Project, 
pk=self.kwargs['project_id'])\n serializer.save(project=project)\n\n\nclass LabelDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Label.objects.all()\n serializer_class = LabelSerializer\n lookup_url_kwarg = 'label_id'\n permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]\n\n\nclass LabelUploadAPI(APIView):\n parser_classes = (MultiPartParser,)\n permission_classes = [IsAuthenticated & IsProjectAdmin]\n\n @transaction.atomic\n def post(self, request, *args, **kwargs):\n if 'file' not in request.data:\n raise ParseError('Empty content')\n labels = json.load(request.data['file'])\n project = get_object_or_404(Project, pk=kwargs['project_id'])\n try:\n for label in labels:\n serializer = LabelSerializer(data=label)\n serializer.is_valid(raise_exception=True)\n serializer.save(project=project)\n return Response(status=status.HTTP_201_CREATED)\n except IntegrityError:\n content = {'error': 'IntegrityError: you cannot create a label with same name or shortkey.'}\n return Response(content, status=status.HTTP_400_BAD_REQUEST)\n", "path": "app/api/views/label.py"}, {"content": "from rest_framework import status\nfrom rest_framework.exceptions import (APIException, PermissionDenied,\n ValidationError)\n\n\nclass FileParseException(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'Invalid file format, line {}: {}'\n default_code = 'invalid'\n\n def __init__(self, line_num, line, code=None):\n detail = self.default_detail.format(line_num, line)\n super().__init__(detail, code)\n\n\nclass AutoLabelingException(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'Auto labeling not allowed for the document with labels.'\n\n\nclass AutoLabeliingPermissionDenied(PermissionDenied):\n default_detail = 'You do not have permission to perform auto labeling.' \\\n 'Please ask the project administrators to add you.'\n\n\nclass URLConnectionError(ValidationError):\n default_detail = 'Failed to establish a connection. Please check the URL or network.'\n\n\nclass AWSTokenError(ValidationError):\n default_detail = 'The security token included in the request is invalid.'\n\n\nclass SampleDataException(ValidationError):\n default_detail = 'The response is empty. Maybe the sample data is not appropriate.' 
\\\n 'Please specify another sample data which returns at least one label.'\n", "path": "app/api/exceptions.py"}], "after_files": [{"content": "import json\n\nfrom django.db import IntegrityError, transaction\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework import generics, status\nfrom rest_framework.exceptions import ParseError\nfrom rest_framework.parsers import MultiPartParser\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom ..exceptions import LabelValidationError\nfrom ..models import Label, Project\nfrom ..permissions import IsInProjectReadOnlyOrAdmin, IsProjectAdmin\nfrom ..serializers import LabelSerializer\n\n\nclass LabelList(generics.ListCreateAPIView):\n serializer_class = LabelSerializer\n pagination_class = None\n permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]\n\n def get_queryset(self):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n return project.labels\n\n def perform_create(self, serializer):\n project = get_object_or_404(Project, pk=self.kwargs['project_id'])\n serializer.save(project=project)\n\n def delete(self, request, *args, **kwargs):\n delete_ids = request.data['ids']\n Label.objects.filter(pk__in=delete_ids).delete()\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n\nclass LabelDetail(generics.RetrieveUpdateDestroyAPIView):\n queryset = Label.objects.all()\n serializer_class = LabelSerializer\n lookup_url_kwarg = 'label_id'\n permission_classes = [IsAuthenticated & IsInProjectReadOnlyOrAdmin]\n\n\nclass LabelUploadAPI(APIView):\n parser_classes = (MultiPartParser,)\n permission_classes = [IsAuthenticated & IsProjectAdmin]\n\n @transaction.atomic\n def post(self, request, *args, **kwargs):\n if 'file' not in request.data:\n raise ParseError('Empty content')\n project = get_object_or_404(Project, pk=kwargs['project_id'])\n try:\n labels = json.load(request.data['file'])\n serializer = LabelSerializer(data=labels, many=True)\n serializer.is_valid(raise_exception=True)\n serializer.save(project=project)\n return Response(status=status.HTTP_201_CREATED)\n except json.decoder.JSONDecodeError:\n raise ParseError('The file format is invalid.')\n except IntegrityError:\n raise LabelValidationError\n", "path": "app/api/views/label.py"}, {"content": "from rest_framework import status\nfrom rest_framework.exceptions import (APIException, PermissionDenied,\n ValidationError)\n\n\nclass FileParseException(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'Invalid file format, line {}: {}'\n default_code = 'invalid'\n\n def __init__(self, line_num, line, code=None):\n detail = self.default_detail.format(line_num, line)\n super().__init__(detail, code)\n\n\nclass AutoLabelingException(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'Auto labeling not allowed for the document with labels.'\n\n\nclass AutoLabeliingPermissionDenied(PermissionDenied):\n default_detail = 'You do not have permission to perform auto labeling.' \\\n 'Please ask the project administrators to add you.'\n\n\nclass URLConnectionError(ValidationError):\n default_detail = 'Failed to establish a connection. Please check the URL or network.'\n\n\nclass AWSTokenError(ValidationError):\n default_detail = 'The security token included in the request is invalid.'\n\n\nclass SampleDataException(ValidationError):\n default_detail = 'The response is empty. 
Maybe the sample data is not appropriate.' \\\n 'Please specify another sample data which returns at least one label.'\n\n\nclass LabelValidationError(APIException):\n status_code = status.HTTP_400_BAD_REQUEST\n default_detail = 'You cannot create a label with same name or shortcut key.'\n", "path": "app/api/exceptions.py"}]}
| 1,221 | 577 |
gh_patches_debug_37307
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-464
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Scanning IAM policy only takes First SID in json rather than looping through
**Describe the bug**
It seems that when specifying more than one SID in a JSON policy, the check does not loop through each statement; rather, it just looks at the first one and ends.
**To Reproduce**
Steps to reproduce the behavior:
1. Create policy with more than one SID
`{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "SqsAllow",
"Effect": "Allow",
"Action": [
"sqs:GetQueueAttributes",
"sqs:GetQueueUrl",
"sqs:ListDeadLetterSourceQueues",
"sqs:ListQueues",
"sqs:ReceiveMessage",
"sqs:SendMessage",
"sqs:SendMessageBatch"
],
"Resource": "*"
},
{
"Sid": "ALL",
"Effect": "Allow",
"Action": [ "*"
],
"Resource": ["*"]
},`
2. Run Checkov against policy
**Expected behavior**
I would expect the scan to check each JSON statement within the policy rather than only the first one
**Desktop (please complete the following information):**
- OS: Mac
- Checkov Version: 1.0.442
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3 import json
4
5
6 class IAMStarActionPolicyDocument(BaseResourceCheck):
7
8 def __init__(self):
9 name = "Ensure no IAM policies documents allow \"*\" as a statement's actions"
10 id = "CKV_AWS_63"
11 supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']
12 categories = [CheckCategories.IAM]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def scan_resource_conf(self, conf):
16 if 'policy' in conf.keys():
17 try:
18 policy_block = json.loads(conf['policy'][0])
19 if 'Statement' in policy_block.keys():
20 if 'Action' in policy_block['Statement'][0] and \
21 policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \
22 policy_block['Statement'][0]['Action'][0] == "*":
23 return CheckResult.FAILED
24 except: # nosec
25 pass
26 return CheckResult.PASSED
27
28
29 check = IAMStarActionPolicyDocument()
30
```
Path: `checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3 import json
4
5
6 class IAMAdminPolicyDocument(BaseResourceCheck):
7
8 def __init__(self):
9 name = "Ensure IAM policies that allow full \"*-*\" administrative privileges are not created"
10 id = "CKV_AWS_62"
11 supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']
12 categories = [CheckCategories.IAM]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def scan_resource_conf(self, conf):
16 if 'policy' in conf.keys():
17 try:
18 policy_block = json.loads(conf['policy'][0])
19 if 'Statement' in policy_block.keys():
20 if 'Action' in policy_block['Statement'][0] and \
21 policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \
22 policy_block['Statement'][0]['Action'][0] == "*" and \
23 'Resource' in policy_block['Statement'][0] and \
24 policy_block['Statement'][0]['Resource'] == '*':
25 return CheckResult.FAILED
26 except: # nosec
27 pass
28 return CheckResult.PASSED
29
30
31 check = IAMAdminPolicyDocument()
32
```
Path: `checkov/terraform/checks/data/aws/StarActionPolicyDocument.py`
Content:
```
1 from checkov.terraform.checks.data.base_check import BaseDataCheck
2 from checkov.common.models.enums import CheckResult, CheckCategories
3
4
5 class StarActionPolicyDocument(BaseDataCheck):
6 def __init__(self):
7 name = "Ensure no IAM policies documents allow \"*\" as a statement's actions"
8 id = "CKV_AWS_49"
9 supported_data = ['aws_iam_policy_document']
10 categories = [CheckCategories.IAM]
11 super().__init__(name=name, id=id, categories=categories, supported_data=supported_data)
12
13 def scan_data_conf(self, conf):
14 """
15 validates iam policy document
16 https://learn.hashicorp.com/terraform/aws/iam-policy
17 :param conf: aws_kms_key configuration
18 :return: <CheckResult>
19 """
20 key = 'statement'
21 if key in conf.keys():
22 for statement in conf['statement']:
23 if 'actions' in statement and '*' in statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':
24 return CheckResult.FAILED
25 return CheckResult.PASSED
26
27
28 check = StarActionPolicyDocument()
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py b/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py
--- a/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py
+++ b/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py
@@ -19,7 +19,7 @@
"""
key = 'statement'
if key in conf.keys():
- for statement in conf['statement']:
+ for statement in conf[key]:
if 'actions' in statement and '*' in statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':
return CheckResult.FAILED
return CheckResult.PASSED
diff --git a/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py b/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py
--- a/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py
+++ b/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py
@@ -17,13 +17,13 @@
try:
policy_block = json.loads(conf['policy'][0])
if 'Statement' in policy_block.keys():
- if 'Action' in policy_block['Statement'][0] and \
- policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \
- policy_block['Statement'][0]['Action'][0] == "*" and \
- 'Resource' in policy_block['Statement'][0] and \
- policy_block['Statement'][0]['Resource'] == '*':
+ for statement in policy_block['Statement']:
+ if 'Action' in statement and \
+ statement.get('Effect', ['Allow']) == 'Allow' and \
+ '*' in statement.get('Action', ['']) and \
+ '*' in statement.get('Resource', ['']):
return CheckResult.FAILED
- except: # nosec
+ except: # nosec
pass
return CheckResult.PASSED
diff --git a/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py b/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py
--- a/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py
+++ b/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py
@@ -17,9 +17,10 @@
try:
policy_block = json.loads(conf['policy'][0])
if 'Statement' in policy_block.keys():
- if 'Action' in policy_block['Statement'][0] and \
- policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \
- policy_block['Statement'][0]['Action'][0] == "*":
+ for statement in policy_block['Statement']:
+ if 'Action' in statement and \
+ statement.get('Effect', ['Allow']) == 'Allow' and \
+ '*' in statement.get('Action', ['']):
return CheckResult.FAILED
except: # nosec
pass
|
{"golden_diff": "diff --git a/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py b/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py\n--- a/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py\n+++ b/checkov/terraform/checks/data/aws/StarActionPolicyDocument.py\n@@ -19,7 +19,7 @@\n \"\"\"\n key = 'statement'\n if key in conf.keys():\n- for statement in conf['statement']:\n+ for statement in conf[key]:\n if 'actions' in statement and '*' in statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':\n return CheckResult.FAILED\n return CheckResult.PASSED\ndiff --git a/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py b/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py\n--- a/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py\n+++ b/checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py\n@@ -17,13 +17,13 @@\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n- if 'Action' in policy_block['Statement'][0] and \\\n- policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \\\n- policy_block['Statement'][0]['Action'][0] == \"*\" and \\\n- 'Resource' in policy_block['Statement'][0] and \\\n- policy_block['Statement'][0]['Resource'] == '*':\n+ for statement in policy_block['Statement']:\n+ if 'Action' in statement and \\\n+ statement.get('Effect', ['Allow']) == 'Allow' and \\\n+ '*' in statement.get('Action', ['']) and \\\n+ '*' in statement.get('Resource', ['']):\n return CheckResult.FAILED\n- except: # nosec\n+ except: # nosec\n pass\n return CheckResult.PASSED\n \ndiff --git a/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py b/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py\n--- a/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py\n+++ b/checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py\n@@ -17,9 +17,10 @@\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n- if 'Action' in policy_block['Statement'][0] and \\\n- policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \\\n- policy_block['Statement'][0]['Action'][0] == \"*\":\n+ for statement in policy_block['Statement']:\n+ if 'Action' in statement and \\\n+ statement.get('Effect', ['Allow']) == 'Allow' and \\\n+ '*' in statement.get('Action', ['']):\n return CheckResult.FAILED\n except: # nosec\n pass\n", "issue": "Scanning IAM policy only takes First SID in json rather than looping through\n**Describe the bug**\r\nIt seems when specifying more than one SID in a json, the policies do not loop through each one rather it just looks at the first one and ends. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Create policy with more than one SID\r\n`{\r\n \"Version\": \"2012-10-17\",\r\n \"Statement\": [\r\n {\r\n \"Sid\": \"SqsAllow\",\r\n \"Effect\": \"Allow\",\r\n \"Action\": [\r\n \"sqs:GetQueueAttributes\",\r\n \"sqs:GetQueueUrl\",\r\n \"sqs:ListDeadLetterSourceQueues\",\r\n \"sqs:ListQueues\",\r\n \"sqs:ReceiveMessage\",\r\n \"sqs:SendMessage\",\r\n \"sqs:SendMessageBatch\"\r\n ],\r\n \"Resource\": \"*\"\r\n },\r\n {\r\n \"Sid\": \"ALL\",\r\n \"Effect\": \"Allow\",\r\n \"Action\": [ \"*\"\r\n ],\r\n \"Resource\": [\"*\"]\r\n },`\r\n2. 
Run Checkov against policy\r\n\r\n\r\n**Expected behavior**\r\nI would expect the scan to check each json within the policy rather than the first one\r\n\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Mac\r\n - Checkov Version: 1.0.442\r\n\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMStarActionPolicyDocument(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure no IAM policies documents allow \\\"*\\\" as a statement's actions\"\n id = \"CKV_AWS_63\"\n supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'policy' in conf.keys():\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n if 'Action' in policy_block['Statement'][0] and \\\n policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \\\n policy_block['Statement'][0]['Action'][0] == \"*\":\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMStarActionPolicyDocument()\n", "path": "checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py"}, {"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMAdminPolicyDocument(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure IAM policies that allow full \\\"*-*\\\" administrative privileges are not created\"\n id = \"CKV_AWS_62\"\n supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'policy' in conf.keys():\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n if 'Action' in policy_block['Statement'][0] and \\\n policy_block['Statement'][0].get('Effect', ['Allow']) == 'Allow' and \\\n policy_block['Statement'][0]['Action'][0] == \"*\" and \\\n 'Resource' in policy_block['Statement'][0] and \\\n policy_block['Statement'][0]['Resource'] == '*':\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMAdminPolicyDocument()\n", "path": "checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py"}, {"content": "from checkov.terraform.checks.data.base_check import BaseDataCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\n\n\nclass StarActionPolicyDocument(BaseDataCheck):\n def __init__(self):\n name = \"Ensure no IAM policies documents allow \\\"*\\\" as a statement's actions\"\n id = \"CKV_AWS_49\"\n supported_data = ['aws_iam_policy_document']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_data=supported_data)\n\n def scan_data_conf(self, conf):\n \"\"\"\n validates iam policy document\n https://learn.hashicorp.com/terraform/aws/iam-policy\n :param conf: aws_kms_key configuration\n :return: <CheckResult>\n \"\"\"\n key = 'statement'\n if key in conf.keys():\n for statement in 
conf['statement']:\n if 'actions' in statement and '*' in statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n\ncheck = StarActionPolicyDocument()\n", "path": "checkov/terraform/checks/data/aws/StarActionPolicyDocument.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMStarActionPolicyDocument(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure no IAM policies documents allow \\\"*\\\" as a statement's actions\"\n id = \"CKV_AWS_63\"\n supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'policy' in conf.keys():\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n for statement in policy_block['Statement']:\n if 'Action' in statement and \\\n statement.get('Effect', ['Allow']) == 'Allow' and \\\n '*' in statement.get('Action', ['']):\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMStarActionPolicyDocument()\n", "path": "checkov/terraform/checks/resource/aws/IAMStarActionPolicyDocument.py"}, {"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nimport json\n\n\nclass IAMAdminPolicyDocument(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure IAM policies that allow full \\\"*-*\\\" administrative privileges are not created\"\n id = \"CKV_AWS_62\"\n supported_resources = ['aws_iam_role_policy', 'aws_iam_user_policy', 'aws_iam_group_policy', 'aws_iam_policy']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'policy' in conf.keys():\n try:\n policy_block = json.loads(conf['policy'][0])\n if 'Statement' in policy_block.keys():\n for statement in policy_block['Statement']:\n if 'Action' in statement and \\\n statement.get('Effect', ['Allow']) == 'Allow' and \\\n '*' in statement.get('Action', ['']) and \\\n '*' in statement.get('Resource', ['']):\n return CheckResult.FAILED\n except: # nosec\n pass\n return CheckResult.PASSED\n\n\ncheck = IAMAdminPolicyDocument()\n", "path": "checkov/terraform/checks/resource/aws/IAMAdminPolicyDocument.py"}, {"content": "from checkov.terraform.checks.data.base_check import BaseDataCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\n\n\nclass StarActionPolicyDocument(BaseDataCheck):\n def __init__(self):\n name = \"Ensure no IAM policies documents allow \\\"*\\\" as a statement's actions\"\n id = \"CKV_AWS_49\"\n supported_data = ['aws_iam_policy_document']\n categories = [CheckCategories.IAM]\n super().__init__(name=name, id=id, categories=categories, supported_data=supported_data)\n\n def scan_data_conf(self, conf):\n \"\"\"\n validates iam policy document\n https://learn.hashicorp.com/terraform/aws/iam-policy\n :param conf: aws_kms_key configuration\n :return: <CheckResult>\n \"\"\"\n key = 'statement'\n if key in conf.keys():\n for statement in conf[key]:\n if 'actions' in statement and '*' in 
statement['actions'][0] and statement.get('effect', ['Allow'])[0] == 'Allow':\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n\ncheck = StarActionPolicyDocument()\n", "path": "checkov/terraform/checks/data/aws/StarActionPolicyDocument.py"}]}
| 1,585 | 675 |
gh_patches_debug_20479
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-3235
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The ticks for the HGS overlay on map plots are white and invisible by default
Also the HPC ticks are on all four axes.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/visualization/wcsaxes_compat.py`
Content:
```
1 """
2 This module provides functions to make WCSAxes work in SunPy.
3 """
4 import matplotlib.pyplot as plt
5
6 import astropy.units as u
7 from astropy.visualization import wcsaxes
8
9 # Force is put here to enable disabling all checks in this module.
10 # It should only be used by tests and other such hacks.
11 _FORCE_NO_WCSAXES = False
12
13 __all__ = ["is_wcsaxes", "gca_wcs", "get_world_transform",
14 "default_wcs_grid", "wcsaxes_heliographic_overlay"]
15
16
17 def is_wcsaxes(axes):
18 """
19 Tests a `matplotlib.axes.Axes` object to see if it is an instance of
20 `~astropy.visualization.wcsaxes.WCSAxes`.
21
22 Parameters
23 ----------
24 axes : `matplotlib.axes`
25 Axes to test.
26
27 Returns
28 -------
29 `bool`
30 Result of the test.
31 """
32 if not _FORCE_NO_WCSAXES:
33 return isinstance(axes, wcsaxes.WCSAxes)
34 else:
35 return False
36
37
38 def gca_wcs(wcs, fig=None, slices=None):
39 """
40 Get the current axes, and return a `~astropy.visualization.wcsaxes.WCSAxes`
41 if possible.
42
43 Parameters
44 ----------
45 wcs : `astropy.wcs.WCS`
46 A `~astropy.wcs.WCS` object used to create a new axes.
47 fig : `matplotlib.figure.Figure`
48 The figure in which to check for the axes.
49 slices : `tuple`
50 ``slices`` is passed to `~astropy.visualization.wcsaxes.WCSAxes` to describe
51 which two dimensions of the `~astropy.wcs.WCS` object are being plotted.
52 This slices the multidimensional wcs object in the way it needs to be sliced.
53
54 Returns
55 -------
56 `matplotlib.axes.Axes` or `~astropy.visualization.wcsaxes.WCSAxes`
57 The current axes, or a new one if created.
58 """
59 if not fig:
60 fig = plt.gcf()
61
62 if not len(fig.get_axes()):
63 if not _FORCE_NO_WCSAXES:
64 ax = plt.gca(projection=wcs, slices=slices)
65 else:
66 ax = plt.gca()
67 else:
68 ax = plt.gca()
69
70 return ax
71
72
73 def get_world_transform(axes):
74 """
75 Get the transformation to world coordinates.
76
77 If the axes is a `~astropy.visualization.wcsaxes.WCSAxes` instance this
78 returns the transform to the "world" coordinates, otherwise it returns
79 the transform to the matplotlib data coordinates, which are assumed to be in
80 world coordinates.
81
82 Parameters
83 ----------
84 axes : `~astropy.visualization.wcsaxes.WCSAxes` or `~matplotlib.axes.Axes`
85 The axes to get the transform from.
86
87 Returns
88 -------
89 `~matplotlib.transforms.CompositeGenericTransform`
90 The transformation object.
91 """
92 if is_wcsaxes(axes):
93 transform = axes.get_transform('world')
94 else:
95 transform = axes.transData
96
97 return transform
98
99
100 def default_wcs_grid(axes):
101 """
102 Apply some default `~astropy.visualization.wcsaxes.WCSAxes` grid
103 formatting.
104
105 Parameters
106 ----------
107 axes : `~astropy.visualization.wcsaxes.WCSAxes`
108 The `~astropy.visualization.wcsaxes.WCSAxes` object to draw the world
109 coordinate grid on.
110 """
111 axes.coords.grid(color='white', alpha=0.6, linestyle='dotted',
112 linewidth=0.5)
113
114
115 @u.quantity_input
116 def wcsaxes_heliographic_overlay(axes, grid_spacing: u.deg = 10*u.deg, **kwargs):
117 """
118 Create a heliographic overlay using
119 `~astropy.visualization.wcsaxes.WCSAxes`.
120
121 Will draw a grid and label the top axes.
122
123 Parameters
124 ----------
125 axes : `~astropy.visualization.wcsaxes.WCSAxes`
126 The `~astropy.visualization.wcsaxes.WCSAxes` object to create the HGS overlay on.
127 grid_spacing: `~astropy.units.Quantity`
128 Spacing for longitude and latitude grid in degrees.
129
130 Returns
131 -------
132 `~astropy.visualization.wcsaxes.WCSAxes`
133 The overlay object.
134
135 Notes
136 -----
137 Keywords are passed to `~astropy.visualization.wcsaxes.coordinates_map.CoordinatesMap.grid`.
138 """
139 # Unpack spacing
140 if isinstance(grid_spacing, u.Quantity) and grid_spacing.size == 1:
141 lon_space = lat_space = grid_spacing
142 elif grid_spacing.size == 2:
143 lon_space, lat_space = grid_spacing
144 else:
145 raise ValueError("grid_spacing must be a Quantity of length one or two.")
146
147 overlay = axes.get_coords_overlay('heliographic_stonyhurst')
148
149 lon = overlay[0]
150 lat = overlay[1]
151
152 lon.coord_wrap = 180
153 lon.set_major_formatter('dd')
154
155 lon.set_axislabel('Solar Longitude', minpad=0.8)
156 lat.set_axislabel('Solar Latitude', minpad=0.9)
157
158 lon.set_ticks_position('tr')
159 lat.set_ticks_position('tr')
160
161 grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}
162 grid_kw.update(kwargs)
163
164 lon.set_ticks(spacing=lon_space, color=grid_kw['color'])
165 lat.set_ticks(spacing=lat_space, color=grid_kw['color'])
166
167 overlay.grid(**grid_kw)
168
169 if axes.title:
170 x, y = axes.title.get_position()
171 axes.title.set_position([x, y + 0.08])
172
173 return overlay
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sunpy/visualization/wcsaxes_compat.py b/sunpy/visualization/wcsaxes_compat.py
--- a/sunpy/visualization/wcsaxes_compat.py
+++ b/sunpy/visualization/wcsaxes_compat.py
@@ -144,6 +144,12 @@
else:
raise ValueError("grid_spacing must be a Quantity of length one or two.")
+ # Set the native coordinates to be bottom and left only so they don't share
+ # axes with the overlay.
+ c1, c2 = axes.coords
+ c1.set_ticks_position('bl')
+ c2.set_ticks_position('bl')
+
overlay = axes.get_coords_overlay('heliographic_stonyhurst')
lon = overlay[0]
@@ -161,8 +167,10 @@
grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}
grid_kw.update(kwargs)
- lon.set_ticks(spacing=lon_space, color=grid_kw['color'])
- lat.set_ticks(spacing=lat_space, color=grid_kw['color'])
+ # Don't plot white ticks by default (only if explicitly asked)
+ tick_color = grid_kw['color'] if 'color' in kwargs else 'k'
+ lon.set_ticks(spacing=lon_space, color=tick_color)
+ lat.set_ticks(spacing=lat_space, color=tick_color)
overlay.grid(**grid_kw)
|
{"golden_diff": "diff --git a/sunpy/visualization/wcsaxes_compat.py b/sunpy/visualization/wcsaxes_compat.py\n--- a/sunpy/visualization/wcsaxes_compat.py\n+++ b/sunpy/visualization/wcsaxes_compat.py\n@@ -144,6 +144,12 @@\n else:\n raise ValueError(\"grid_spacing must be a Quantity of length one or two.\")\n \n+ # Set the native coordinates to be bottom and left only so they don't share\n+ # axes with the overlay.\n+ c1, c2 = axes.coords\n+ c1.set_ticks_position('bl')\n+ c2.set_ticks_position('bl')\n+\n overlay = axes.get_coords_overlay('heliographic_stonyhurst')\n \n lon = overlay[0]\n@@ -161,8 +167,10 @@\n grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}\n grid_kw.update(kwargs)\n \n- lon.set_ticks(spacing=lon_space, color=grid_kw['color'])\n- lat.set_ticks(spacing=lat_space, color=grid_kw['color'])\n+ # Don't plot white ticks by default (only if explicitly asked)\n+ tick_color = grid_kw['color'] if 'color' in kwargs else 'k'\n+ lon.set_ticks(spacing=lon_space, color=tick_color)\n+ lat.set_ticks(spacing=lat_space, color=tick_color)\n \n overlay.grid(**grid_kw)\n", "issue": "The ticks for the HGS overlay on map plots are white and invisible by default\nAlso the HPC ticks are on all four axes.\n", "before_files": [{"content": "\"\"\"\nThis module provides functions to make WCSAxes work in SunPy.\n\"\"\"\nimport matplotlib.pyplot as plt\n\nimport astropy.units as u\nfrom astropy.visualization import wcsaxes\n\n# Force is put here to enable disabling all checks in this module.\n# It should only be used by tests and other such hacks.\n_FORCE_NO_WCSAXES = False\n\n__all__ = [\"is_wcsaxes\", \"gca_wcs\", \"get_world_transform\",\n \"default_wcs_grid\", \"wcsaxes_heliographic_overlay\"]\n\n\ndef is_wcsaxes(axes):\n \"\"\"\n Tests a `matplotlib.axes.Axes` object to see if it is an instance of\n `~astropy.visualization.wcsaxes.WCSAxes`.\n\n Parameters\n ----------\n axes : `matplotlib.axes`\n Axes to test.\n\n Returns\n -------\n `bool`\n Result of the test.\n \"\"\"\n if not _FORCE_NO_WCSAXES:\n return isinstance(axes, wcsaxes.WCSAxes)\n else:\n return False\n\n\ndef gca_wcs(wcs, fig=None, slices=None):\n \"\"\"\n Get the current axes, and return a `~astropy.visualization.wcsaxes.WCSAxes`\n if possible.\n\n Parameters\n ----------\n wcs : `astropy.wcs.WCS`\n A `~astropy.wcs.WCS` object used to create a new axes.\n fig : `matplotlib.figure.Figure`\n The figure in which to check for the axes.\n slices : `tuple`\n ``slices`` is passed to `~astropy.visualization.wcsaxes.WCSAxes` to describe\n which two dimensions of the `~astropy.wcs.WCS` object are being plotted.\n This slices the multidimensional wcs object in the way it needs to be sliced.\n\n Returns\n -------\n `matplotlib.axes.Axes` or `~astropy.visualization.wcsaxes.WCSAxes`\n The current axes, or a new one if created.\n \"\"\"\n if not fig:\n fig = plt.gcf()\n\n if not len(fig.get_axes()):\n if not _FORCE_NO_WCSAXES:\n ax = plt.gca(projection=wcs, slices=slices)\n else:\n ax = plt.gca()\n else:\n ax = plt.gca()\n\n return ax\n\n\ndef get_world_transform(axes):\n \"\"\"\n Get the transformation to world coordinates.\n\n If the axes is a `~astropy.visualization.wcsaxes.WCSAxes` instance this\n returns the transform to the \"world\" coordinates, otherwise it returns\n the transform to the matplotlib data coordinates, which are assumed to be in\n world coordinates.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes` or `~matplotlib.axes.Axes`\n The axes to get the transform from.\n\n Returns\n -------\n 
`~matplotlib.transforms.CompositeGenericTransform`\n The transformation object.\n \"\"\"\n if is_wcsaxes(axes):\n transform = axes.get_transform('world')\n else:\n transform = axes.transData\n\n return transform\n\n\ndef default_wcs_grid(axes):\n \"\"\"\n Apply some default `~astropy.visualization.wcsaxes.WCSAxes` grid\n formatting.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes`\n The `~astropy.visualization.wcsaxes.WCSAxes` object to draw the world\n coordinate grid on.\n \"\"\"\n axes.coords.grid(color='white', alpha=0.6, linestyle='dotted',\n linewidth=0.5)\n\n\[email protected]_input\ndef wcsaxes_heliographic_overlay(axes, grid_spacing: u.deg = 10*u.deg, **kwargs):\n \"\"\"\n Create a heliographic overlay using\n `~astropy.visualization.wcsaxes.WCSAxes`.\n\n Will draw a grid and label the top axes.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes`\n The `~astropy.visualization.wcsaxes.WCSAxes` object to create the HGS overlay on.\n grid_spacing: `~astropy.units.Quantity`\n Spacing for longitude and latitude grid in degrees.\n\n Returns\n -------\n `~astropy.visualization.wcsaxes.WCSAxes`\n The overlay object.\n\n Notes\n -----\n Keywords are passed to `~astropy.visualization.wcsaxes.coordinates_map.CoordinatesMap.grid`.\n \"\"\"\n # Unpack spacing\n if isinstance(grid_spacing, u.Quantity) and grid_spacing.size == 1:\n lon_space = lat_space = grid_spacing\n elif grid_spacing.size == 2:\n lon_space, lat_space = grid_spacing\n else:\n raise ValueError(\"grid_spacing must be a Quantity of length one or two.\")\n\n overlay = axes.get_coords_overlay('heliographic_stonyhurst')\n\n lon = overlay[0]\n lat = overlay[1]\n\n lon.coord_wrap = 180\n lon.set_major_formatter('dd')\n\n lon.set_axislabel('Solar Longitude', minpad=0.8)\n lat.set_axislabel('Solar Latitude', minpad=0.9)\n\n lon.set_ticks_position('tr')\n lat.set_ticks_position('tr')\n\n grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}\n grid_kw.update(kwargs)\n\n lon.set_ticks(spacing=lon_space, color=grid_kw['color'])\n lat.set_ticks(spacing=lat_space, color=grid_kw['color'])\n\n overlay.grid(**grid_kw)\n\n if axes.title:\n x, y = axes.title.get_position()\n axes.title.set_position([x, y + 0.08])\n\n return overlay\n", "path": "sunpy/visualization/wcsaxes_compat.py"}], "after_files": [{"content": "\"\"\"\nThis module provides functions to make WCSAxes work in SunPy.\n\"\"\"\nimport matplotlib.pyplot as plt\n\nimport astropy.units as u\nfrom astropy.visualization import wcsaxes\n\n# Force is put here to enable disabling all checks in this module.\n# It should only be used by tests and other such hacks.\n_FORCE_NO_WCSAXES = False\n\n__all__ = [\"is_wcsaxes\", \"gca_wcs\", \"get_world_transform\",\n \"default_wcs_grid\", \"wcsaxes_heliographic_overlay\"]\n\n\ndef is_wcsaxes(axes):\n \"\"\"\n Tests a `matplotlib.axes.Axes` object to see if it is an instance of\n `~astropy.visualization.wcsaxes.WCSAxes`.\n\n Parameters\n ----------\n axes : `matplotlib.axes`\n Axes to test.\n\n Returns\n -------\n `bool`\n Result of the test.\n \"\"\"\n if not _FORCE_NO_WCSAXES:\n return isinstance(axes, wcsaxes.WCSAxes)\n else:\n return False\n\n\ndef gca_wcs(wcs, fig=None, slices=None):\n \"\"\"\n Get the current axes, and return a `~astropy.visualization.wcsaxes.WCSAxes`\n if possible.\n\n Parameters\n ----------\n wcs : `astropy.wcs.WCS`\n A `~astropy.wcs.WCS` object used to create a new axes.\n fig : `matplotlib.figure.Figure`\n The figure in which to check for the axes.\n slices : 
`tuple`\n ``slices`` is passed to `~astropy.visualization.wcsaxes.WCSAxes` to describe\n which two dimensions of the `~astropy.wcs.WCS` object are being plotted.\n This slices the multidimensional wcs object in the way it needs to be sliced.\n\n Returns\n -------\n `matplotlib.axes.Axes` or `~astropy.visualization.wcsaxes.WCSAxes`\n The current axes, or a new one if created.\n \"\"\"\n if not fig:\n fig = plt.gcf()\n\n if not len(fig.get_axes()):\n if not _FORCE_NO_WCSAXES:\n ax = plt.gca(projection=wcs, slices=slices)\n else:\n ax = plt.gca()\n else:\n ax = plt.gca()\n\n return ax\n\n\ndef get_world_transform(axes):\n \"\"\"\n Get the transformation to world coordinates.\n\n If the axes is a `~astropy.visualization.wcsaxes.WCSAxes` instance this\n returns the transform to the \"world\" coordinates, otherwise it returns\n the transform to the matplotlib data coordinates, which are assumed to be in\n world coordinates.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes` or `~matplotlib.axes.Axes`\n The axes to get the transform from.\n\n Returns\n -------\n `~matplotlib.transforms.CompositeGenericTransform`\n The transformation object.\n \"\"\"\n if is_wcsaxes(axes):\n transform = axes.get_transform('world')\n else:\n transform = axes.transData\n\n return transform\n\n\ndef default_wcs_grid(axes):\n \"\"\"\n Apply some default `~astropy.visualization.wcsaxes.WCSAxes` grid\n formatting.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes`\n The `~astropy.visualization.wcsaxes.WCSAxes` object to draw the world\n coordinate grid on.\n \"\"\"\n axes.coords.grid(color='white', alpha=0.6, linestyle='dotted',\n linewidth=0.5)\n\n\[email protected]_input\ndef wcsaxes_heliographic_overlay(axes, grid_spacing: u.deg = 10*u.deg, **kwargs):\n \"\"\"\n Create a heliographic overlay using\n `~astropy.visualization.wcsaxes.WCSAxes`.\n\n Will draw a grid and label the top axes.\n\n Parameters\n ----------\n axes : `~astropy.visualization.wcsaxes.WCSAxes`\n The `~astropy.visualization.wcsaxes.WCSAxes` object to create the HGS overlay on.\n grid_spacing: `~astropy.units.Quantity`\n Spacing for longitude and latitude grid in degrees.\n\n Returns\n -------\n `~astropy.visualization.wcsaxes.WCSAxes`\n The overlay object.\n\n Notes\n -----\n Keywords are passed to `~astropy.visualization.wcsaxes.coordinates_map.CoordinatesMap.grid`.\n \"\"\"\n # Unpack spacing\n if isinstance(grid_spacing, u.Quantity) and grid_spacing.size == 1:\n lon_space = lat_space = grid_spacing\n elif grid_spacing.size == 2:\n lon_space, lat_space = grid_spacing\n else:\n raise ValueError(\"grid_spacing must be a Quantity of length one or two.\")\n\n # Set the native coordinates to be bottom and left only so they don't share\n # axes with the overlay.\n c1, c2 = axes.coords\n c1.set_ticks_position('bl')\n c2.set_ticks_position('bl')\n\n overlay = axes.get_coords_overlay('heliographic_stonyhurst')\n\n lon = overlay[0]\n lat = overlay[1]\n\n lon.coord_wrap = 180\n lon.set_major_formatter('dd')\n\n lon.set_axislabel('Solar Longitude', minpad=0.8)\n lat.set_axislabel('Solar Latitude', minpad=0.9)\n\n lon.set_ticks_position('tr')\n lat.set_ticks_position('tr')\n\n grid_kw = {'color': 'white', 'zorder': 100, 'alpha': 0.5}\n grid_kw.update(kwargs)\n\n # Don't plot white ticks by default (only if explicitly asked)\n tick_color = grid_kw['color'] if 'color' in kwargs else 'k'\n lon.set_ticks(spacing=lon_space, color=tick_color)\n lat.set_ticks(spacing=lat_space, color=tick_color)\n\n 
overlay.grid(**grid_kw)\n\n if axes.title:\n x, y = axes.title.get_position()\n axes.title.set_position([x, y + 0.08])\n\n return overlay\n", "path": "sunpy/visualization/wcsaxes_compat.py"}]}
| 1,949 | 330 |
gh_patches_debug_39145
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-5027
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
test_pipeline_images.py fails with "TypeError: Skipped expected string as 'msg' parameter, got 'bool' instead."
See e.g. https://github.com/scrapy/scrapy/pull/5019/checks?check_run_id=2012658916
This should be related to the skip attribute, though I'm not sure why it started happening now.
--- END ISSUE ---
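The error in the title comes from pytest itself: recent pytest releases validate that the skip reason is a string, so a boolean reaching the skip machinery produces exactly this TypeError. A hypothetical minimal reproduction (not taken from the scrapy test suite):
```python
import pytest


def test_bool_skip_reason():
    # Recent pytest versions require a string reason; passing a bool raises
    # "Skipped expected string as 'msg' parameter, got 'bool' instead."
    pytest.skip(True)
```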
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/pipelines/images.py`
Content:
```
1 """
2 Images Pipeline
3
4 See documentation in topics/media-pipeline.rst
5 """
6 import functools
7 import hashlib
8 from contextlib import suppress
9 from io import BytesIO
10
11 from itemadapter import ItemAdapter
12 from PIL import Image
13
14 from scrapy.exceptions import DropItem
15 from scrapy.http import Request
16 from scrapy.pipelines.files import FileException, FilesPipeline
17 # TODO: from scrapy.pipelines.media import MediaPipeline
18 from scrapy.settings import Settings
19 from scrapy.utils.misc import md5sum
20 from scrapy.utils.python import to_bytes
21
22
23 class NoimagesDrop(DropItem):
24 """Product with no images exception"""
25
26
27 class ImageException(FileException):
28 """General image error exception"""
29
30
31 class ImagesPipeline(FilesPipeline):
32 """Abstract pipeline that implement the image thumbnail generation logic
33
34 """
35
36 MEDIA_NAME = 'image'
37
38 # Uppercase attributes kept for backward compatibility with code that subclasses
39 # ImagesPipeline. They may be overridden by settings.
40 MIN_WIDTH = 0
41 MIN_HEIGHT = 0
42 EXPIRES = 90
43 THUMBS = {}
44 DEFAULT_IMAGES_URLS_FIELD = 'image_urls'
45 DEFAULT_IMAGES_RESULT_FIELD = 'images'
46
47 def __init__(self, store_uri, download_func=None, settings=None):
48 super().__init__(store_uri, settings=settings, download_func=download_func)
49
50 if isinstance(settings, dict) or settings is None:
51 settings = Settings(settings)
52
53 resolve = functools.partial(self._key_for_pipe,
54 base_class_name="ImagesPipeline",
55 settings=settings)
56 self.expires = settings.getint(
57 resolve("IMAGES_EXPIRES"), self.EXPIRES
58 )
59
60 if not hasattr(self, "IMAGES_RESULT_FIELD"):
61 self.IMAGES_RESULT_FIELD = self.DEFAULT_IMAGES_RESULT_FIELD
62 if not hasattr(self, "IMAGES_URLS_FIELD"):
63 self.IMAGES_URLS_FIELD = self.DEFAULT_IMAGES_URLS_FIELD
64
65 self.images_urls_field = settings.get(
66 resolve('IMAGES_URLS_FIELD'),
67 self.IMAGES_URLS_FIELD
68 )
69 self.images_result_field = settings.get(
70 resolve('IMAGES_RESULT_FIELD'),
71 self.IMAGES_RESULT_FIELD
72 )
73 self.min_width = settings.getint(
74 resolve('IMAGES_MIN_WIDTH'), self.MIN_WIDTH
75 )
76 self.min_height = settings.getint(
77 resolve('IMAGES_MIN_HEIGHT'), self.MIN_HEIGHT
78 )
79 self.thumbs = settings.get(
80 resolve('IMAGES_THUMBS'), self.THUMBS
81 )
82
83 @classmethod
84 def from_settings(cls, settings):
85 s3store = cls.STORE_SCHEMES['s3']
86 s3store.AWS_ACCESS_KEY_ID = settings['AWS_ACCESS_KEY_ID']
87 s3store.AWS_SECRET_ACCESS_KEY = settings['AWS_SECRET_ACCESS_KEY']
88 s3store.AWS_ENDPOINT_URL = settings['AWS_ENDPOINT_URL']
89 s3store.AWS_REGION_NAME = settings['AWS_REGION_NAME']
90 s3store.AWS_USE_SSL = settings['AWS_USE_SSL']
91 s3store.AWS_VERIFY = settings['AWS_VERIFY']
92 s3store.POLICY = settings['IMAGES_STORE_S3_ACL']
93
94 gcs_store = cls.STORE_SCHEMES['gs']
95 gcs_store.GCS_PROJECT_ID = settings['GCS_PROJECT_ID']
96 gcs_store.POLICY = settings['IMAGES_STORE_GCS_ACL'] or None
97
98 ftp_store = cls.STORE_SCHEMES['ftp']
99 ftp_store.FTP_USERNAME = settings['FTP_USER']
100 ftp_store.FTP_PASSWORD = settings['FTP_PASSWORD']
101 ftp_store.USE_ACTIVE_MODE = settings.getbool('FEED_STORAGE_FTP_ACTIVE')
102
103 store_uri = settings['IMAGES_STORE']
104 return cls(store_uri, settings=settings)
105
106 def file_downloaded(self, response, request, info, *, item=None):
107 return self.image_downloaded(response, request, info, item=item)
108
109 def image_downloaded(self, response, request, info, *, item=None):
110 checksum = None
111 for path, image, buf in self.get_images(response, request, info, item=item):
112 if checksum is None:
113 buf.seek(0)
114 checksum = md5sum(buf)
115 width, height = image.size
116 self.store.persist_file(
117 path, buf, info,
118 meta={'width': width, 'height': height},
119 headers={'Content-Type': 'image/jpeg'})
120 return checksum
121
122 def get_images(self, response, request, info, *, item=None):
123 path = self.file_path(request, response=response, info=info, item=item)
124 orig_image = Image.open(BytesIO(response.body))
125
126 width, height = orig_image.size
127 if width < self.min_width or height < self.min_height:
128 raise ImageException("Image too small "
129 f"({width}x{height} < "
130 f"{self.min_width}x{self.min_height})")
131
132 image, buf = self.convert_image(orig_image)
133 yield path, image, buf
134
135 for thumb_id, size in self.thumbs.items():
136 thumb_path = self.thumb_path(request, thumb_id, response=response, info=info)
137 thumb_image, thumb_buf = self.convert_image(image, size)
138 yield thumb_path, thumb_image, thumb_buf
139
140 def convert_image(self, image, size=None):
141 if image.format == 'PNG' and image.mode == 'RGBA':
142 background = Image.new('RGBA', image.size, (255, 255, 255))
143 background.paste(image, image)
144 image = background.convert('RGB')
145 elif image.mode == 'P':
146 image = image.convert("RGBA")
147 background = Image.new('RGBA', image.size, (255, 255, 255))
148 background.paste(image, image)
149 image = background.convert('RGB')
150 elif image.mode != 'RGB':
151 image = image.convert('RGB')
152
153 if size:
154 image = image.copy()
155 image.thumbnail(size, Image.ANTIALIAS)
156
157 buf = BytesIO()
158 image.save(buf, 'JPEG')
159 return image, buf
160
161 def get_media_requests(self, item, info):
162 urls = ItemAdapter(item).get(self.images_urls_field, [])
163 return [Request(u) for u in urls]
164
165 def item_completed(self, results, item, info):
166 with suppress(KeyError):
167 ItemAdapter(item)[self.images_result_field] = [x for ok, x in results if ok]
168 return item
169
170 def file_path(self, request, response=None, info=None, *, item=None):
171 image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
172 return f'full/{image_guid}.jpg'
173
174 def thumb_path(self, request, thumb_id, response=None, info=None):
175 thumb_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
176 return f'thumbs/{thumb_id}/{thumb_guid}.jpg'
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/pipelines/images.py b/scrapy/pipelines/images.py
--- a/scrapy/pipelines/images.py
+++ b/scrapy/pipelines/images.py
@@ -9,9 +9,8 @@
from io import BytesIO
from itemadapter import ItemAdapter
-from PIL import Image
-from scrapy.exceptions import DropItem
+from scrapy.exceptions import DropItem, NotConfigured
from scrapy.http import Request
from scrapy.pipelines.files import FileException, FilesPipeline
# TODO: from scrapy.pipelines.media import MediaPipeline
@@ -45,6 +44,14 @@
DEFAULT_IMAGES_RESULT_FIELD = 'images'
def __init__(self, store_uri, download_func=None, settings=None):
+ try:
+ from PIL import Image
+ self._Image = Image
+ except ImportError:
+ raise NotConfigured(
+ 'ImagesPipeline requires installing Pillow 4.0.0 or later'
+ )
+
super().__init__(store_uri, settings=settings, download_func=download_func)
if isinstance(settings, dict) or settings is None:
@@ -121,7 +128,7 @@
def get_images(self, response, request, info, *, item=None):
path = self.file_path(request, response=response, info=info, item=item)
- orig_image = Image.open(BytesIO(response.body))
+ orig_image = self._Image.open(BytesIO(response.body))
width, height = orig_image.size
if width < self.min_width or height < self.min_height:
@@ -139,12 +146,12 @@
def convert_image(self, image, size=None):
if image.format == 'PNG' and image.mode == 'RGBA':
- background = Image.new('RGBA', image.size, (255, 255, 255))
+ background = self._Image.new('RGBA', image.size, (255, 255, 255))
background.paste(image, image)
image = background.convert('RGB')
elif image.mode == 'P':
image = image.convert("RGBA")
- background = Image.new('RGBA', image.size, (255, 255, 255))
+ background = self._Image.new('RGBA', image.size, (255, 255, 255))
background.paste(image, image)
image = background.convert('RGB')
elif image.mode != 'RGB':
@@ -152,7 +159,7 @@
if size:
image = image.copy()
- image.thumbnail(size, Image.ANTIALIAS)
+ image.thumbnail(size, self._Image.ANTIALIAS)
buf = BytesIO()
image.save(buf, 'JPEG')
|
{"golden_diff": "diff --git a/scrapy/pipelines/images.py b/scrapy/pipelines/images.py\n--- a/scrapy/pipelines/images.py\n+++ b/scrapy/pipelines/images.py\n@@ -9,9 +9,8 @@\n from io import BytesIO\n \n from itemadapter import ItemAdapter\n-from PIL import Image\n \n-from scrapy.exceptions import DropItem\n+from scrapy.exceptions import DropItem, NotConfigured\n from scrapy.http import Request\n from scrapy.pipelines.files import FileException, FilesPipeline\n # TODO: from scrapy.pipelines.media import MediaPipeline\n@@ -45,6 +44,14 @@\n DEFAULT_IMAGES_RESULT_FIELD = 'images'\n \n def __init__(self, store_uri, download_func=None, settings=None):\n+ try:\n+ from PIL import Image\n+ self._Image = Image\n+ except ImportError:\n+ raise NotConfigured(\n+ 'ImagesPipeline requires installing Pillow 4.0.0 or later'\n+ )\n+\n super().__init__(store_uri, settings=settings, download_func=download_func)\n \n if isinstance(settings, dict) or settings is None:\n@@ -121,7 +128,7 @@\n \n def get_images(self, response, request, info, *, item=None):\n path = self.file_path(request, response=response, info=info, item=item)\n- orig_image = Image.open(BytesIO(response.body))\n+ orig_image = self._Image.open(BytesIO(response.body))\n \n width, height = orig_image.size\n if width < self.min_width or height < self.min_height:\n@@ -139,12 +146,12 @@\n \n def convert_image(self, image, size=None):\n if image.format == 'PNG' and image.mode == 'RGBA':\n- background = Image.new('RGBA', image.size, (255, 255, 255))\n+ background = self._Image.new('RGBA', image.size, (255, 255, 255))\n background.paste(image, image)\n image = background.convert('RGB')\n elif image.mode == 'P':\n image = image.convert(\"RGBA\")\n- background = Image.new('RGBA', image.size, (255, 255, 255))\n+ background = self._Image.new('RGBA', image.size, (255, 255, 255))\n background.paste(image, image)\n image = background.convert('RGB')\n elif image.mode != 'RGB':\n@@ -152,7 +159,7 @@\n \n if size:\n image = image.copy()\n- image.thumbnail(size, Image.ANTIALIAS)\n+ image.thumbnail(size, self._Image.ANTIALIAS)\n \n buf = BytesIO()\n image.save(buf, 'JPEG')\n", "issue": "test_pipeline_images.py fails with \"TypeError: Skipped expected string as 'msg' parameter, got 'bool' instead.\"\nSee e.g. https://github.com/scrapy/scrapy/pull/5019/checks?check_run_id=2012658916\r\n\r\nThis should be related to the skip attribute, though I'm not sure why did it start happening now.\n", "before_files": [{"content": "\"\"\"\nImages Pipeline\n\nSee documentation in topics/media-pipeline.rst\n\"\"\"\nimport functools\nimport hashlib\nfrom contextlib import suppress\nfrom io import BytesIO\n\nfrom itemadapter import ItemAdapter\nfrom PIL import Image\n\nfrom scrapy.exceptions import DropItem\nfrom scrapy.http import Request\nfrom scrapy.pipelines.files import FileException, FilesPipeline\n# TODO: from scrapy.pipelines.media import MediaPipeline\nfrom scrapy.settings import Settings\nfrom scrapy.utils.misc import md5sum\nfrom scrapy.utils.python import to_bytes\n\n\nclass NoimagesDrop(DropItem):\n \"\"\"Product with no images exception\"\"\"\n\n\nclass ImageException(FileException):\n \"\"\"General image error exception\"\"\"\n\n\nclass ImagesPipeline(FilesPipeline):\n \"\"\"Abstract pipeline that implement the image thumbnail generation logic\n\n \"\"\"\n\n MEDIA_NAME = 'image'\n\n # Uppercase attributes kept for backward compatibility with code that subclasses\n # ImagesPipeline. 
They may be overridden by settings.\n MIN_WIDTH = 0\n MIN_HEIGHT = 0\n EXPIRES = 90\n THUMBS = {}\n DEFAULT_IMAGES_URLS_FIELD = 'image_urls'\n DEFAULT_IMAGES_RESULT_FIELD = 'images'\n\n def __init__(self, store_uri, download_func=None, settings=None):\n super().__init__(store_uri, settings=settings, download_func=download_func)\n\n if isinstance(settings, dict) or settings is None:\n settings = Settings(settings)\n\n resolve = functools.partial(self._key_for_pipe,\n base_class_name=\"ImagesPipeline\",\n settings=settings)\n self.expires = settings.getint(\n resolve(\"IMAGES_EXPIRES\"), self.EXPIRES\n )\n\n if not hasattr(self, \"IMAGES_RESULT_FIELD\"):\n self.IMAGES_RESULT_FIELD = self.DEFAULT_IMAGES_RESULT_FIELD\n if not hasattr(self, \"IMAGES_URLS_FIELD\"):\n self.IMAGES_URLS_FIELD = self.DEFAULT_IMAGES_URLS_FIELD\n\n self.images_urls_field = settings.get(\n resolve('IMAGES_URLS_FIELD'),\n self.IMAGES_URLS_FIELD\n )\n self.images_result_field = settings.get(\n resolve('IMAGES_RESULT_FIELD'),\n self.IMAGES_RESULT_FIELD\n )\n self.min_width = settings.getint(\n resolve('IMAGES_MIN_WIDTH'), self.MIN_WIDTH\n )\n self.min_height = settings.getint(\n resolve('IMAGES_MIN_HEIGHT'), self.MIN_HEIGHT\n )\n self.thumbs = settings.get(\n resolve('IMAGES_THUMBS'), self.THUMBS\n )\n\n @classmethod\n def from_settings(cls, settings):\n s3store = cls.STORE_SCHEMES['s3']\n s3store.AWS_ACCESS_KEY_ID = settings['AWS_ACCESS_KEY_ID']\n s3store.AWS_SECRET_ACCESS_KEY = settings['AWS_SECRET_ACCESS_KEY']\n s3store.AWS_ENDPOINT_URL = settings['AWS_ENDPOINT_URL']\n s3store.AWS_REGION_NAME = settings['AWS_REGION_NAME']\n s3store.AWS_USE_SSL = settings['AWS_USE_SSL']\n s3store.AWS_VERIFY = settings['AWS_VERIFY']\n s3store.POLICY = settings['IMAGES_STORE_S3_ACL']\n\n gcs_store = cls.STORE_SCHEMES['gs']\n gcs_store.GCS_PROJECT_ID = settings['GCS_PROJECT_ID']\n gcs_store.POLICY = settings['IMAGES_STORE_GCS_ACL'] or None\n\n ftp_store = cls.STORE_SCHEMES['ftp']\n ftp_store.FTP_USERNAME = settings['FTP_USER']\n ftp_store.FTP_PASSWORD = settings['FTP_PASSWORD']\n ftp_store.USE_ACTIVE_MODE = settings.getbool('FEED_STORAGE_FTP_ACTIVE')\n\n store_uri = settings['IMAGES_STORE']\n return cls(store_uri, settings=settings)\n\n def file_downloaded(self, response, request, info, *, item=None):\n return self.image_downloaded(response, request, info, item=item)\n\n def image_downloaded(self, response, request, info, *, item=None):\n checksum = None\n for path, image, buf in self.get_images(response, request, info, item=item):\n if checksum is None:\n buf.seek(0)\n checksum = md5sum(buf)\n width, height = image.size\n self.store.persist_file(\n path, buf, info,\n meta={'width': width, 'height': height},\n headers={'Content-Type': 'image/jpeg'})\n return checksum\n\n def get_images(self, response, request, info, *, item=None):\n path = self.file_path(request, response=response, info=info, item=item)\n orig_image = Image.open(BytesIO(response.body))\n\n width, height = orig_image.size\n if width < self.min_width or height < self.min_height:\n raise ImageException(\"Image too small \"\n f\"({width}x{height} < \"\n f\"{self.min_width}x{self.min_height})\")\n\n image, buf = self.convert_image(orig_image)\n yield path, image, buf\n\n for thumb_id, size in self.thumbs.items():\n thumb_path = self.thumb_path(request, thumb_id, response=response, info=info)\n thumb_image, thumb_buf = self.convert_image(image, size)\n yield thumb_path, thumb_image, thumb_buf\n\n def convert_image(self, image, size=None):\n if image.format == 'PNG' and 
image.mode == 'RGBA':\n background = Image.new('RGBA', image.size, (255, 255, 255))\n background.paste(image, image)\n image = background.convert('RGB')\n elif image.mode == 'P':\n image = image.convert(\"RGBA\")\n background = Image.new('RGBA', image.size, (255, 255, 255))\n background.paste(image, image)\n image = background.convert('RGB')\n elif image.mode != 'RGB':\n image = image.convert('RGB')\n\n if size:\n image = image.copy()\n image.thumbnail(size, Image.ANTIALIAS)\n\n buf = BytesIO()\n image.save(buf, 'JPEG')\n return image, buf\n\n def get_media_requests(self, item, info):\n urls = ItemAdapter(item).get(self.images_urls_field, [])\n return [Request(u) for u in urls]\n\n def item_completed(self, results, item, info):\n with suppress(KeyError):\n ItemAdapter(item)[self.images_result_field] = [x for ok, x in results if ok]\n return item\n\n def file_path(self, request, response=None, info=None, *, item=None):\n image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()\n return f'full/{image_guid}.jpg'\n\n def thumb_path(self, request, thumb_id, response=None, info=None):\n thumb_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()\n return f'thumbs/{thumb_id}/{thumb_guid}.jpg'\n", "path": "scrapy/pipelines/images.py"}], "after_files": [{"content": "\"\"\"\nImages Pipeline\n\nSee documentation in topics/media-pipeline.rst\n\"\"\"\nimport functools\nimport hashlib\nfrom contextlib import suppress\nfrom io import BytesIO\n\nfrom itemadapter import ItemAdapter\n\nfrom scrapy.exceptions import DropItem, NotConfigured\nfrom scrapy.http import Request\nfrom scrapy.pipelines.files import FileException, FilesPipeline\n# TODO: from scrapy.pipelines.media import MediaPipeline\nfrom scrapy.settings import Settings\nfrom scrapy.utils.misc import md5sum\nfrom scrapy.utils.python import to_bytes\n\n\nclass NoimagesDrop(DropItem):\n \"\"\"Product with no images exception\"\"\"\n\n\nclass ImageException(FileException):\n \"\"\"General image error exception\"\"\"\n\n\nclass ImagesPipeline(FilesPipeline):\n \"\"\"Abstract pipeline that implement the image thumbnail generation logic\n\n \"\"\"\n\n MEDIA_NAME = 'image'\n\n # Uppercase attributes kept for backward compatibility with code that subclasses\n # ImagesPipeline. 
They may be overridden by settings.\n MIN_WIDTH = 0\n MIN_HEIGHT = 0\n EXPIRES = 90\n THUMBS = {}\n DEFAULT_IMAGES_URLS_FIELD = 'image_urls'\n DEFAULT_IMAGES_RESULT_FIELD = 'images'\n\n def __init__(self, store_uri, download_func=None, settings=None):\n try:\n from PIL import Image\n self._Image = Image\n except ImportError:\n raise NotConfigured(\n 'ImagesPipeline requires installing Pillow 4.0.0 or later'\n )\n\n super().__init__(store_uri, settings=settings, download_func=download_func)\n\n if isinstance(settings, dict) or settings is None:\n settings = Settings(settings)\n\n resolve = functools.partial(self._key_for_pipe,\n base_class_name=\"ImagesPipeline\",\n settings=settings)\n self.expires = settings.getint(\n resolve(\"IMAGES_EXPIRES\"), self.EXPIRES\n )\n\n if not hasattr(self, \"IMAGES_RESULT_FIELD\"):\n self.IMAGES_RESULT_FIELD = self.DEFAULT_IMAGES_RESULT_FIELD\n if not hasattr(self, \"IMAGES_URLS_FIELD\"):\n self.IMAGES_URLS_FIELD = self.DEFAULT_IMAGES_URLS_FIELD\n\n self.images_urls_field = settings.get(\n resolve('IMAGES_URLS_FIELD'),\n self.IMAGES_URLS_FIELD\n )\n self.images_result_field = settings.get(\n resolve('IMAGES_RESULT_FIELD'),\n self.IMAGES_RESULT_FIELD\n )\n self.min_width = settings.getint(\n resolve('IMAGES_MIN_WIDTH'), self.MIN_WIDTH\n )\n self.min_height = settings.getint(\n resolve('IMAGES_MIN_HEIGHT'), self.MIN_HEIGHT\n )\n self.thumbs = settings.get(\n resolve('IMAGES_THUMBS'), self.THUMBS\n )\n\n @classmethod\n def from_settings(cls, settings):\n s3store = cls.STORE_SCHEMES['s3']\n s3store.AWS_ACCESS_KEY_ID = settings['AWS_ACCESS_KEY_ID']\n s3store.AWS_SECRET_ACCESS_KEY = settings['AWS_SECRET_ACCESS_KEY']\n s3store.AWS_ENDPOINT_URL = settings['AWS_ENDPOINT_URL']\n s3store.AWS_REGION_NAME = settings['AWS_REGION_NAME']\n s3store.AWS_USE_SSL = settings['AWS_USE_SSL']\n s3store.AWS_VERIFY = settings['AWS_VERIFY']\n s3store.POLICY = settings['IMAGES_STORE_S3_ACL']\n\n gcs_store = cls.STORE_SCHEMES['gs']\n gcs_store.GCS_PROJECT_ID = settings['GCS_PROJECT_ID']\n gcs_store.POLICY = settings['IMAGES_STORE_GCS_ACL'] or None\n\n ftp_store = cls.STORE_SCHEMES['ftp']\n ftp_store.FTP_USERNAME = settings['FTP_USER']\n ftp_store.FTP_PASSWORD = settings['FTP_PASSWORD']\n ftp_store.USE_ACTIVE_MODE = settings.getbool('FEED_STORAGE_FTP_ACTIVE')\n\n store_uri = settings['IMAGES_STORE']\n return cls(store_uri, settings=settings)\n\n def file_downloaded(self, response, request, info, *, item=None):\n return self.image_downloaded(response, request, info, item=item)\n\n def image_downloaded(self, response, request, info, *, item=None):\n checksum = None\n for path, image, buf in self.get_images(response, request, info, item=item):\n if checksum is None:\n buf.seek(0)\n checksum = md5sum(buf)\n width, height = image.size\n self.store.persist_file(\n path, buf, info,\n meta={'width': width, 'height': height},\n headers={'Content-Type': 'image/jpeg'})\n return checksum\n\n def get_images(self, response, request, info, *, item=None):\n path = self.file_path(request, response=response, info=info, item=item)\n orig_image = self._Image.open(BytesIO(response.body))\n\n width, height = orig_image.size\n if width < self.min_width or height < self.min_height:\n raise ImageException(\"Image too small \"\n f\"({width}x{height} < \"\n f\"{self.min_width}x{self.min_height})\")\n\n image, buf = self.convert_image(orig_image)\n yield path, image, buf\n\n for thumb_id, size in self.thumbs.items():\n thumb_path = self.thumb_path(request, thumb_id, response=response, info=info)\n thumb_image, 
thumb_buf = self.convert_image(image, size)\n yield thumb_path, thumb_image, thumb_buf\n\n def convert_image(self, image, size=None):\n if image.format == 'PNG' and image.mode == 'RGBA':\n background = self._Image.new('RGBA', image.size, (255, 255, 255))\n background.paste(image, image)\n image = background.convert('RGB')\n elif image.mode == 'P':\n image = image.convert(\"RGBA\")\n background = self._Image.new('RGBA', image.size, (255, 255, 255))\n background.paste(image, image)\n image = background.convert('RGB')\n elif image.mode != 'RGB':\n image = image.convert('RGB')\n\n if size:\n image = image.copy()\n image.thumbnail(size, self._Image.ANTIALIAS)\n\n buf = BytesIO()\n image.save(buf, 'JPEG')\n return image, buf\n\n def get_media_requests(self, item, info):\n urls = ItemAdapter(item).get(self.images_urls_field, [])\n return [Request(u) for u in urls]\n\n def item_completed(self, results, item, info):\n with suppress(KeyError):\n ItemAdapter(item)[self.images_result_field] = [x for ok, x in results if ok]\n return item\n\n def file_path(self, request, response=None, info=None, *, item=None):\n image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()\n return f'full/{image_guid}.jpg'\n\n def thumb_path(self, request, thumb_id, response=None, info=None):\n thumb_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()\n return f'thumbs/{thumb_id}/{thumb_guid}.jpg'\n", "path": "scrapy/pipelines/images.py"}]}
| 2,240 | 611 |
gh_patches_debug_23676
|
rasdani/github-patches
|
git_diff
|
benoitc__gunicorn-929
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error handling all requests on master - attempt to read attribute of NoneType
I'm trying to run Gunicorn on the master branch (latest commit is 4c601ce447fafbeed27f0f0a238e0e48c928b6f9). But every incoming request generates this error:
```
[2014-10-27 20:36:55 +0000] [22663] [ERROR] Error handling request
Traceback (most recent call last):
File "/mnt/runscope/.virtualenvs/embedcurl/src/gunicorn/gunicorn/workers/async.py", line 41, in handle
proxy_protocol_info = req.proxy_protocol_info
AttributeError: 'NoneType' object has no attribute 'proxy_protocol_info'
```
This is my gunicorn command line:
```
/usr/local/runscope/.virtualenvs/embedcurl/bin/gunicorn \
--name embedcurl -k gevent --workers=2 --bind 0.0.0.0:3002 \
--error-logfile /var/log/runscope/embedcurl.error.log \
--access-logfile /var/log/runscope/embedcurl.access.log \
--access-logformat '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s %(T)s %(D)s "%(f)s" "%(a)s"' \
--max-requests 50000 \
--max-requests-jitter 500 \
--statsd-prefix service.embedcurl.test004. \
-D "embedcurl:app" --pid /var/run/embedcurl.pid
```
I think the problem comes from commit adf353f213d12994cc36ecbbcb6a084baf1fda12. See the file https://github.com/benoitc/gunicorn/blob/adf353f213d12994cc36ecbbcb6a084baf1fda12/gunicorn/workers/async.py -- line 41 reads from `req.proxy_protocol_info` but `req` is always None when that line runs. I'm not familiar with `proxy_protocol_info` myself so I'm not quite sure what the fix is.
Error handling all requests on master - attempt to read attribute of NoneType
I'm trying to run Gunicorn on the master branch (latest commit is 4c601ce447fafbeed27f0f0a238e0e48c928b6f9). But every incoming request generates this error:
```
[2014-10-27 20:36:55 +0000] [22663] [ERROR] Error handling request
Traceback (most recent call last):
File "/mnt/runscope/.virtualenvs/embedcurl/src/gunicorn/gunicorn/workers/async.py", line 41, in handle
proxy_protocol_info = req.proxy_protocol_info
AttributeError: 'NoneType' object has no attribute 'proxy_protocol_info'
```
This is my gunicorn command line:
```
/usr/local/runscope/.virtualenvs/embedcurl/bin/gunicorn \
--name embedcurl -k gevent --workers=2 --bind 0.0.0.0:3002 \
--error-logfile /var/log/runscope/embedcurl.error.log \
--access-logfile /var/log/runscope/embedcurl.access.log \
--access-logformat '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s %(T)s %(D)s "%(f)s" "%(a)s"' \
--max-requests 50000 \
--max-requests-jitter 500 \
--statsd-prefix service.embedcurl.test004. \
-D "embedcurl:app" --pid /var/run/embedcurl.pid
```
I think the problem comes from commit adf353f213d12994cc36ecbbcb6a084baf1fda12. See the file https://github.com/benoitc/gunicorn/blob/adf353f213d12994cc36ecbbcb6a084baf1fda12/gunicorn/workers/async.py -- line 41 reads from `req.proxy_protocol_info` but `req` is always None when that line runs. I'm not familiar with `proxy_protocol_info` myself so I'm not quite sure what the fix is.
--- END ISSUE ---
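The fix therefore has to avoid reading `proxy_protocol_info` from `req` before the first request has been parsed. A sketch of the guarded keepalive loop (this mirrors the patch below and reuses the surrounding objects from `gunicorn/workers/async.py`, so it is not standalone code):
```python
# Inside AsyncWorker.handle(); parser, six, listener_name, client and addr
# come from the enclosing method.
proxy_protocol_info = {}                # req is still None at this point
while True:
    req = None
    with self.timeout_ctx():
        req = six.next(parser)
    if not req:
        break
    if req.proxy_protocol_info:
        # The first request on the connection carries the PROXY header data.
        proxy_protocol_info = req.proxy_protocol_info
    else:
        # Subsequent keepalive requests inherit it.
        req.proxy_protocol_info = proxy_protocol_info
    self.handle_request(listener_name, req, client, addr)
```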
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gunicorn/workers/async.py`
Content:
```
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 from datetime import datetime
7 import errno
8 import socket
9 import ssl
10 import sys
11
12 import gunicorn.http as http
13 import gunicorn.http.wsgi as wsgi
14 import gunicorn.util as util
15 import gunicorn.workers.base as base
16 from gunicorn import six
17
18 ALREADY_HANDLED = object()
19
20
21 class AsyncWorker(base.Worker):
22
23 def __init__(self, *args, **kwargs):
24 super(AsyncWorker, self).__init__(*args, **kwargs)
25 self.worker_connections = self.cfg.worker_connections
26
27 def timeout_ctx(self):
28 raise NotImplementedError()
29
30 def handle(self, listener, client, addr):
31 req = None
32 try:
33 parser = http.RequestParser(self.cfg, client)
34 try:
35 listener_name = listener.getsockname()
36 if not self.cfg.keepalive:
37 req = six.next(parser)
38 self.handle_request(listener_name, req, client, addr)
39 else:
40 # keepalive loop
41 proxy_protocol_info = req.proxy_protocol_info
42 while True:
43 req = None
44 with self.timeout_ctx():
45 req = six.next(parser)
46 if not req:
47 break
48 req.proxy_protocol_info = proxy_protocol_info
49 self.handle_request(listener_name, req, client, addr)
50 except http.errors.NoMoreData as e:
51 self.log.debug("Ignored premature client disconnection. %s", e)
52 except StopIteration as e:
53 self.log.debug("Closing connection. %s", e)
54 except ssl.SSLError:
55 exc_info = sys.exc_info()
56 # pass to next try-except level
57 six.reraise(exc_info[0], exc_info[1], exc_info[2])
58 except socket.error:
59 exc_info = sys.exc_info()
60 # pass to next try-except level
61 six.reraise(exc_info[0], exc_info[1], exc_info[2])
62 except Exception as e:
63 self.handle_error(req, client, addr, e)
64 except ssl.SSLError as e:
65 if e.args[0] == ssl.SSL_ERROR_EOF:
66 self.log.debug("ssl connection closed")
67 client.close()
68 else:
69 self.log.debug("Error processing SSL request.")
70 self.handle_error(req, client, addr, e)
71 except socket.error as e:
72 if e.args[0] not in (errno.EPIPE, errno.ECONNRESET):
73 self.log.exception("Socket error processing request.")
74 else:
75 if e.args[0] == errno.ECONNRESET:
76 self.log.debug("Ignoring connection reset")
77 else:
78 self.log.debug("Ignoring EPIPE")
79 except Exception as e:
80 self.handle_error(req, client, addr, e)
81 finally:
82 util.close(client)
83
84 def handle_request(self, listener_name, req, sock, addr):
85 request_start = datetime.now()
86 environ = {}
87 resp = None
88 try:
89 self.cfg.pre_request(self, req)
90 resp, environ = wsgi.create(req, sock, addr,
91 listener_name, self.cfg)
92 environ["wsgi.multithread"] = True
93 self.nr += 1
94 if self.alive and self.nr >= self.max_requests:
95 self.log.info("Autorestarting worker after current request.")
96 resp.force_close()
97 self.alive = False
98
99 if not self.cfg.keepalive:
100 resp.force_close()
101
102 respiter = self.wsgi(environ, resp.start_response)
103 if respiter == ALREADY_HANDLED:
104 return False
105 try:
106 if isinstance(respiter, environ['wsgi.file_wrapper']):
107 resp.write_file(respiter)
108 else:
109 for item in respiter:
110 resp.write(item)
111 resp.close()
112 request_time = datetime.now() - request_start
113 self.log.access(resp, req, environ, request_time)
114 finally:
115 if hasattr(respiter, "close"):
116 respiter.close()
117 if resp.should_close():
118 raise StopIteration()
119 except StopIteration:
120 raise
121 except Exception:
122 if resp and resp.headers_sent:
123 # If the requests have already been sent, we should close the
124 # connection to indicate the error.
125 self.log.exception("Error handling request")
126 try:
127 sock.shutdown(socket.SHUT_RDWR)
128 sock.close()
129 except socket.error:
130 pass
131 raise StopIteration()
132 raise
133 finally:
134 try:
135 self.cfg.post_request(self, req, environ, resp)
136 except Exception:
137 self.log.exception("Exception in post_request hook")
138 return True
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gunicorn/workers/async.py b/gunicorn/workers/async.py
--- a/gunicorn/workers/async.py
+++ b/gunicorn/workers/async.py
@@ -38,14 +38,17 @@
self.handle_request(listener_name, req, client, addr)
else:
# keepalive loop
- proxy_protocol_info = req.proxy_protocol_info
+ proxy_protocol_info = {}
while True:
req = None
with self.timeout_ctx():
req = six.next(parser)
if not req:
break
- req.proxy_protocol_info = proxy_protocol_info
+ if req.proxy_protocol_info:
+ proxy_protocol_info = req.proxy_protocol_info
+ else:
+ req.proxy_protocol_info = proxy_protocol_info
self.handle_request(listener_name, req, client, addr)
except http.errors.NoMoreData as e:
self.log.debug("Ignored premature client disconnection. %s", e)
|
{"golden_diff": "diff --git a/gunicorn/workers/async.py b/gunicorn/workers/async.py\n--- a/gunicorn/workers/async.py\n+++ b/gunicorn/workers/async.py\n@@ -38,14 +38,17 @@\n self.handle_request(listener_name, req, client, addr)\n else:\n # keepalive loop\n- proxy_protocol_info = req.proxy_protocol_info\n+ proxy_protocol_info = {}\n while True:\n req = None\n with self.timeout_ctx():\n req = six.next(parser)\n if not req:\n break\n- req.proxy_protocol_info = proxy_protocol_info\n+ if req.proxy_protocol_info:\n+ proxy_protocol_info = req.proxy_protocol_info\n+ else:\n+ req.proxy_protocol_info = proxy_protocol_info\n self.handle_request(listener_name, req, client, addr)\n except http.errors.NoMoreData as e:\n self.log.debug(\"Ignored premature client disconnection. %s\", e)\n", "issue": "Error handling all requests on master - attempt to read attribute of NoneType\nI'm trying to run Gunicorn on the master branch (latest commit is 4c601ce447fafbeed27f0f0a238e0e48c928b6f9). But every incoming request generates this error:\n\n```\n[2014-10-27 20:36:55 +0000] [22663] [ERROR] Error handling request\nTraceback (most recent call last):\n File \"/mnt/runscope/.virtualenvs/embedcurl/src/gunicorn/gunicorn/workers/async.py\", line 41, in handle\n proxy_protocol_info = req.proxy_protocol_info\nAttributeError: 'NoneType' object has no attribute 'proxy_protocol_info'\n```\n\nThis is my gunicorn command line:\n\n```\n/usr/local/runscope/.virtualenvs/embedcurl/bin/gunicorn \\\n --name embedcurl -k gevent --workers=2 --bind 0.0.0.0:3002 \\\n --error-logfile /var/log/runscope/embedcurl.error.log \\\n --access-logfile /var/log/runscope/embedcurl.access.log \\\n --access-logformat '%(h)s %(l)s %(u)s %(t)s \"%(r)s\" %(s)s %(b)s %(T)s %(D)s \"%(f)s\" \"%(a)s\"' \\\n --max-requests 50000 \\\n --max-requests-jitter 500 \\\n --statsd-prefix service.embedcurl.test004. \\\n -D \"embedcurl:app\" --pid /var/run/embedcurl.pid\n```\n\nI think the problem comes from commit adf353f213d12994cc36ecbbcb6a084baf1fda12. See the file https://github.com/benoitc/gunicorn/blob/adf353f213d12994cc36ecbbcb6a084baf1fda12/gunicorn/workers/async.py -- line 41 reads from `req.proxy_protocol_info` but `req` is always None when that line runs. I'm not familiar with `proxy_protocol_info` myself so I'm not quite sure what the fix is.\n\nError handling all requests on master - attempt to read attribute of NoneType\nI'm trying to run Gunicorn on the master branch (latest commit is 4c601ce447fafbeed27f0f0a238e0e48c928b6f9). But every incoming request generates this error:\n\n```\n[2014-10-27 20:36:55 +0000] [22663] [ERROR] Error handling request\nTraceback (most recent call last):\n File \"/mnt/runscope/.virtualenvs/embedcurl/src/gunicorn/gunicorn/workers/async.py\", line 41, in handle\n proxy_protocol_info = req.proxy_protocol_info\nAttributeError: 'NoneType' object has no attribute 'proxy_protocol_info'\n```\n\nThis is my gunicorn command line:\n\n```\n/usr/local/runscope/.virtualenvs/embedcurl/bin/gunicorn \\\n --name embedcurl -k gevent --workers=2 --bind 0.0.0.0:3002 \\\n --error-logfile /var/log/runscope/embedcurl.error.log \\\n --access-logfile /var/log/runscope/embedcurl.access.log \\\n --access-logformat '%(h)s %(l)s %(u)s %(t)s \"%(r)s\" %(s)s %(b)s %(T)s %(D)s \"%(f)s\" \"%(a)s\"' \\\n --max-requests 50000 \\\n --max-requests-jitter 500 \\\n --statsd-prefix service.embedcurl.test004. \\\n -D \"embedcurl:app\" --pid /var/run/embedcurl.pid\n```\n\nI think the problem comes from commit adf353f213d12994cc36ecbbcb6a084baf1fda12. 
See the file https://github.com/benoitc/gunicorn/blob/adf353f213d12994cc36ecbbcb6a084baf1fda12/gunicorn/workers/async.py -- line 41 reads from `req.proxy_protocol_info` but `req` is always None when that line runs. I'm not familiar with `proxy_protocol_info` myself so I'm not quite sure what the fix is.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nfrom datetime import datetime\nimport errno\nimport socket\nimport ssl\nimport sys\n\nimport gunicorn.http as http\nimport gunicorn.http.wsgi as wsgi\nimport gunicorn.util as util\nimport gunicorn.workers.base as base\nfrom gunicorn import six\n\nALREADY_HANDLED = object()\n\n\nclass AsyncWorker(base.Worker):\n\n def __init__(self, *args, **kwargs):\n super(AsyncWorker, self).__init__(*args, **kwargs)\n self.worker_connections = self.cfg.worker_connections\n\n def timeout_ctx(self):\n raise NotImplementedError()\n\n def handle(self, listener, client, addr):\n req = None\n try:\n parser = http.RequestParser(self.cfg, client)\n try:\n listener_name = listener.getsockname()\n if not self.cfg.keepalive:\n req = six.next(parser)\n self.handle_request(listener_name, req, client, addr)\n else:\n # keepalive loop\n proxy_protocol_info = req.proxy_protocol_info\n while True:\n req = None\n with self.timeout_ctx():\n req = six.next(parser)\n if not req:\n break\n req.proxy_protocol_info = proxy_protocol_info\n self.handle_request(listener_name, req, client, addr)\n except http.errors.NoMoreData as e:\n self.log.debug(\"Ignored premature client disconnection. %s\", e)\n except StopIteration as e:\n self.log.debug(\"Closing connection. %s\", e)\n except ssl.SSLError:\n exc_info = sys.exc_info()\n # pass to next try-except level\n six.reraise(exc_info[0], exc_info[1], exc_info[2])\n except socket.error:\n exc_info = sys.exc_info()\n # pass to next try-except level\n six.reraise(exc_info[0], exc_info[1], exc_info[2])\n except Exception as e:\n self.handle_error(req, client, addr, e)\n except ssl.SSLError as e:\n if e.args[0] == ssl.SSL_ERROR_EOF:\n self.log.debug(\"ssl connection closed\")\n client.close()\n else:\n self.log.debug(\"Error processing SSL request.\")\n self.handle_error(req, client, addr, e)\n except socket.error as e:\n if e.args[0] not in (errno.EPIPE, errno.ECONNRESET):\n self.log.exception(\"Socket error processing request.\")\n else:\n if e.args[0] == errno.ECONNRESET:\n self.log.debug(\"Ignoring connection reset\")\n else:\n self.log.debug(\"Ignoring EPIPE\")\n except Exception as e:\n self.handle_error(req, client, addr, e)\n finally:\n util.close(client)\n\n def handle_request(self, listener_name, req, sock, addr):\n request_start = datetime.now()\n environ = {}\n resp = None\n try:\n self.cfg.pre_request(self, req)\n resp, environ = wsgi.create(req, sock, addr,\n listener_name, self.cfg)\n environ[\"wsgi.multithread\"] = True\n self.nr += 1\n if self.alive and self.nr >= self.max_requests:\n self.log.info(\"Autorestarting worker after current request.\")\n resp.force_close()\n self.alive = False\n\n if not self.cfg.keepalive:\n resp.force_close()\n\n respiter = self.wsgi(environ, resp.start_response)\n if respiter == ALREADY_HANDLED:\n return False\n try:\n if isinstance(respiter, environ['wsgi.file_wrapper']):\n resp.write_file(respiter)\n else:\n for item in respiter:\n resp.write(item)\n resp.close()\n request_time = datetime.now() - request_start\n self.log.access(resp, req, environ, request_time)\n finally:\n 
if hasattr(respiter, \"close\"):\n respiter.close()\n if resp.should_close():\n raise StopIteration()\n except StopIteration:\n raise\n except Exception:\n if resp and resp.headers_sent:\n # If the requests have already been sent, we should close the\n # connection to indicate the error.\n self.log.exception(\"Error handling request\")\n try:\n sock.shutdown(socket.SHUT_RDWR)\n sock.close()\n except socket.error:\n pass\n raise StopIteration()\n raise\n finally:\n try:\n self.cfg.post_request(self, req, environ, resp)\n except Exception:\n self.log.exception(\"Exception in post_request hook\")\n return True\n", "path": "gunicorn/workers/async.py"}], "after_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nfrom datetime import datetime\nimport errno\nimport socket\nimport ssl\nimport sys\n\nimport gunicorn.http as http\nimport gunicorn.http.wsgi as wsgi\nimport gunicorn.util as util\nimport gunicorn.workers.base as base\nfrom gunicorn import six\n\nALREADY_HANDLED = object()\n\n\nclass AsyncWorker(base.Worker):\n\n def __init__(self, *args, **kwargs):\n super(AsyncWorker, self).__init__(*args, **kwargs)\n self.worker_connections = self.cfg.worker_connections\n\n def timeout_ctx(self):\n raise NotImplementedError()\n\n def handle(self, listener, client, addr):\n req = None\n try:\n parser = http.RequestParser(self.cfg, client)\n try:\n listener_name = listener.getsockname()\n if not self.cfg.keepalive:\n req = six.next(parser)\n self.handle_request(listener_name, req, client, addr)\n else:\n # keepalive loop\n proxy_protocol_info = {}\n while True:\n req = None\n with self.timeout_ctx():\n req = six.next(parser)\n if not req:\n break\n if req.proxy_protocol_info:\n proxy_protocol_info = req.proxy_protocol_info\n else:\n req.proxy_protocol_info = proxy_protocol_info\n self.handle_request(listener_name, req, client, addr)\n except http.errors.NoMoreData as e:\n self.log.debug(\"Ignored premature client disconnection. %s\", e)\n except StopIteration as e:\n self.log.debug(\"Closing connection. 
%s\", e)\n except ssl.SSLError:\n exc_info = sys.exc_info()\n # pass to next try-except level\n six.reraise(exc_info[0], exc_info[1], exc_info[2])\n except socket.error:\n exc_info = sys.exc_info()\n # pass to next try-except level\n six.reraise(exc_info[0], exc_info[1], exc_info[2])\n except Exception as e:\n self.handle_error(req, client, addr, e)\n except ssl.SSLError as e:\n if e.args[0] == ssl.SSL_ERROR_EOF:\n self.log.debug(\"ssl connection closed\")\n client.close()\n else:\n self.log.debug(\"Error processing SSL request.\")\n self.handle_error(req, client, addr, e)\n except socket.error as e:\n if e.args[0] not in (errno.EPIPE, errno.ECONNRESET):\n self.log.exception(\"Socket error processing request.\")\n else:\n if e.args[0] == errno.ECONNRESET:\n self.log.debug(\"Ignoring connection reset\")\n else:\n self.log.debug(\"Ignoring EPIPE\")\n except Exception as e:\n self.handle_error(req, client, addr, e)\n finally:\n util.close(client)\n\n def handle_request(self, listener_name, req, sock, addr):\n request_start = datetime.now()\n environ = {}\n resp = None\n try:\n self.cfg.pre_request(self, req)\n resp, environ = wsgi.create(req, sock, addr,\n listener_name, self.cfg)\n environ[\"wsgi.multithread\"] = True\n self.nr += 1\n if self.alive and self.nr >= self.max_requests:\n self.log.info(\"Autorestarting worker after current request.\")\n resp.force_close()\n self.alive = False\n\n if not self.cfg.keepalive:\n resp.force_close()\n\n respiter = self.wsgi(environ, resp.start_response)\n if respiter == ALREADY_HANDLED:\n return False\n try:\n if isinstance(respiter, environ['wsgi.file_wrapper']):\n resp.write_file(respiter)\n else:\n for item in respiter:\n resp.write(item)\n resp.close()\n request_time = datetime.now() - request_start\n self.log.access(resp, req, environ, request_time)\n finally:\n if hasattr(respiter, \"close\"):\n respiter.close()\n if resp.should_close():\n raise StopIteration()\n except StopIteration:\n raise\n except Exception:\n if resp and resp.headers_sent:\n # If the requests have already been sent, we should close the\n # connection to indicate the error.\n self.log.exception(\"Error handling request\")\n try:\n sock.shutdown(socket.SHUT_RDWR)\n sock.close()\n except socket.error:\n pass\n raise StopIteration()\n raise\n finally:\n try:\n self.cfg.post_request(self, req, environ, resp)\n except Exception:\n self.log.exception(\"Exception in post_request hook\")\n return True\n", "path": "gunicorn/workers/async.py"}]}
| 2,588 | 209 |
gh_patches_debug_12976
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-2042
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
urllib3 logo is unreadable in docs in dark mode
This is a recent Furo addition, you can see it in this pull request build: https://urllib3--2026.org.readthedocs.build/en/2026/index.html. Here's what I see (with Firefox on macOS with dark mode enabled):
<img width="237" alt="urllib3 logo in dark mode in docs" src="https://user-images.githubusercontent.com/42327/96408490-ad2c8300-11f4-11eb-8054-661fb38a6c23.png">
I'm not sure what the correct fix is here. The obvious one would be to force a white background. I guess we could also... add a dark mode urllib3 logo, by switching black letters to white?
(The rest of the content looks good, even if the contrast seems low to me.)
--- END ISSUE ---
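Furo exposes separate logo slots for its light and dark palettes, so instead of forcing a white background the docs can register two logo variants. A sketch of the relevant `docs/conf.py` fragment (file names are placeholders; this is essentially what the patch below does):
```python
# docs/conf.py -- Furo picks the logo matching the active color scheme.
html_theme = "furo"
html_static_path = ["_static"]          # both SVGs live in docs/_static/
html_theme_options = {
    "light_logo": "banner.svg",         # dark lettering for the light theme
    "dark_logo": "dark-logo.svg",       # light lettering for the dark theme
}
```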
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 import os
2 import sys
3 from datetime import date
4
5 # If extensions (or modules to document with autodoc) are in another directory,
6 # add these directories to sys.path here. If the directory is relative to the
7 # documentation root, use os.path.abspath to make it absolute, like shown here.
8
9 root_path = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
10 sys.path.insert(0, root_path)
11
12 # Mock some expensive/platform-specific modules so build will work.
13 # (https://read-the-docs.readthedocs.io/en/latest/faq.html#\
14 # i-get-import-errors-on-libraries-that-depend-on-c-modules)
15 from unittest import mock
16
17
18 class MockModule(mock.Mock):
19 @classmethod
20 def __getattr__(cls, name):
21 return MockModule()
22
23
24 MOCK_MODULES = ("ntlm",)
25
26 sys.modules.update((mod_name, MockModule()) for mod_name in MOCK_MODULES)
27
28
29 import urllib3
30
31 # -- General configuration -----------------------------------------------------
32
33
34 # Add any Sphinx extension module names here, as strings. They can be extensions
35 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
36 extensions = [
37 "sphinx.ext.autodoc",
38 "sphinx.ext.doctest",
39 "sphinx.ext.intersphinx",
40 ]
41
42 # Test code blocks only when explicitly specified
43 doctest_test_doctest_blocks = ""
44
45 # Add any paths that contain templates here, relative to this directory.
46 templates_path = ["_templates"]
47
48 # The suffix of source filenames.
49 source_suffix = ".rst"
50
51 # The master toctree document.
52 master_doc = "index"
53
54 # General information about the project.
55 project = "urllib3"
56 copyright = f"{date.today().year}, Andrey Petrov"
57
58 # The short X.Y version.
59 version = urllib3.__version__
60 # The full version, including alpha/beta/rc tags.
61 release = version
62
63 # List of patterns, relative to source directory, that match files and
64 # directories to ignore when looking for source files.
65 exclude_patterns = ["_build"]
66
67 # The name of the Pygments (syntax highlighting) style to use.
68 pygments_style = "friendly"
69
70 # The theme to use for HTML and HTML Help pages. See the documentation for
71 # a list of builtin themes.
72 html_theme = "furo"
73 html_favicon = "images/favicon.png"
74 html_logo = "images/banner.svg"
75
76 html_theme_options = {
77 "announcement": """
78 <a style=\"text-decoration: none; color: white;\"
79 href=\"https://opencollective.com/urllib3\">
80 <img src=\"/en/latest/_static/favicon.png\"/> Sponsor urllib3 v2.0 on Open Collective
81 </a>
82 """,
83 "sidebar_hide_name": True,
84 }
85
86 intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -73,8 +73,8 @@
# a list of builtin themes.
html_theme = "furo"
html_favicon = "images/favicon.png"
-html_logo = "images/banner.svg"
+html_static_path = ["_static"]
html_theme_options = {
"announcement": """
<a style=\"text-decoration: none; color: white;\"
@@ -83,6 +83,8 @@
</a>
""",
"sidebar_hide_name": True,
+ "light_logo": "banner.svg",
+ "dark_logo": "dark-logo.svg",
}
intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -73,8 +73,8 @@\n # a list of builtin themes.\n html_theme = \"furo\"\n html_favicon = \"images/favicon.png\"\n-html_logo = \"images/banner.svg\"\n \n+html_static_path = [\"_static\"]\n html_theme_options = {\n \"announcement\": \"\"\"\n <a style=\\\"text-decoration: none; color: white;\\\" \n@@ -83,6 +83,8 @@\n </a>\n \"\"\",\n \"sidebar_hide_name\": True,\n+ \"light_logo\": \"banner.svg\",\n+ \"dark_logo\": \"dark-logo.svg\",\n }\n \n intersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n", "issue": "urllib3 logo is unreadable in docs in dark mode\nThis is a recent Furo addition, you can see it in this pull request build: https://urllib3--2026.org.readthedocs.build/en/2026/index.html. Here's what I see (with Firefox on macOS with dark mode enabled):\r\n\r\n<img width=\"237\" alt=\"urllib3 logo in dark mode in docs\" src=\"https://user-images.githubusercontent.com/42327/96408490-ad2c8300-11f4-11eb-8054-661fb38a6c23.png\">\r\n\r\nI'm not sure what the correct fix is here. The obvious one would be to force a white background. I guess we could also... add a dark mode urllib3 logo, by switching black letters to white?\r\n\r\n(The rest of the content looks good, even if the contrast seems low to me.)\n", "before_files": [{"content": "import os\nimport sys\nfrom datetime import date\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nroot_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))\nsys.path.insert(0, root_path)\n\n# Mock some expensive/platform-specific modules so build will work.\n# (https://read-the-docs.readthedocs.io/en/latest/faq.html#\\\n# i-get-import-errors-on-libraries-that-depend-on-c-modules)\nfrom unittest import mock\n\n\nclass MockModule(mock.Mock):\n @classmethod\n def __getattr__(cls, name):\n return MockModule()\n\n\nMOCK_MODULES = (\"ntlm\",)\n\nsys.modules.update((mod_name, MockModule()) for mod_name in MOCK_MODULES)\n\n\nimport urllib3\n\n# -- General configuration -----------------------------------------------------\n\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Test code blocks only when explicitly specified\ndoctest_test_doctest_blocks = \"\"\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix of source filenames.\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"urllib3\"\ncopyright = f\"{date.today().year}, Andrey Petrov\"\n\n# The short X.Y version.\nversion = urllib3.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\"_build\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"friendly\"\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\nhtml_theme = \"furo\"\nhtml_favicon = \"images/favicon.png\"\nhtml_logo = \"images/banner.svg\"\n\nhtml_theme_options = {\n \"announcement\": \"\"\"\n <a style=\\\"text-decoration: none; color: white;\\\" \n href=\\\"https://opencollective.com/urllib3\\\">\n <img src=\\\"/en/latest/_static/favicon.png\\\"/> Sponsor urllib3 v2.0 on Open Collective\n </a>\n \"\"\",\n \"sidebar_hide_name\": True,\n}\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport os\nimport sys\nfrom datetime import date\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nroot_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))\nsys.path.insert(0, root_path)\n\n# Mock some expensive/platform-specific modules so build will work.\n# (https://read-the-docs.readthedocs.io/en/latest/faq.html#\\\n# i-get-import-errors-on-libraries-that-depend-on-c-modules)\nimport mock\n\n\nclass MockModule(mock.Mock):\n @classmethod\n def __getattr__(cls, name):\n return MockModule()\n\n\nMOCK_MODULES = (\"ntlm\",)\n\nsys.modules.update((mod_name, MockModule()) for mod_name in MOCK_MODULES)\n\n\nimport urllib3\n\n# -- General configuration -----------------------------------------------------\n\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Test code blocks only when explicitly specified\ndoctest_test_doctest_blocks = \"\"\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix of source filenames.\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"urllib3\"\ncopyright = \"{year}, Andrey Petrov\".format(year=date.today().year)\n\n# The short X.Y version.\nversion = urllib3.__version__\n# The full version, including alpha/beta/rc tags.\nrelease = version\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\"_build\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"friendly\"\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"furo\"\nhtml_favicon = \"images/favicon.png\"\n\nhtml_static_path = [\"_static\"]\nhtml_theme_options = {\n \"announcement\": \"\"\"\n <a style=\\\"text-decoration: none; color: white;\\\" \n href=\\\"https://opencollective.com/urllib3\\\">\n <img src=\\\"/en/latest/_static/favicon.png\\\"/> Sponsor urllib3 v2.0 on Open Collective\n </a>\n \"\"\",\n \"sidebar_hide_name\": True,\n \"light_logo\": \"banner.svg\",\n \"dark_logo\": \"dark-logo.svg\",\n}\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n", "path": "docs/conf.py"}]}
| 1,242 | 171 |
gh_patches_debug_31822
|
rasdani/github-patches
|
git_diff
|
TencentBlueKing__bk-user-805
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Startup command: gunicorn should support printing a stack trace before exiting
When gunicorn exits abruptly, the approach from https://stackoverflow.com/questions/57167240/is-it-possible-to-get-a-stack-trace-when-a-gunicorn-worker-hits-a-timeout can be used for debugging: print the stack trace before exiting, so the location of the problematic code can be inferred.
--- END ISSUE ---
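The StackOverflow answer referenced above amounts to adding gunicorn server hooks that dump the Python stack before a worker dies. A minimal sketch of such a config (the hook names `worker_abort`/`worker_int` are real gunicorn hooks; the logging details and file name are illustrative):
```python
# gunicorn.conf.py -- load with: gunicorn -c gunicorn.conf.py wsgi:application
import faulthandler
import sys
import traceback


def worker_abort(worker):
    # gunicorn sends SIGABRT to a worker that hits the timeout; dump all
    # thread stacks before the process is killed.
    worker.log.warning("worker %s aborted, dumping stack traces", worker.pid)
    faulthandler.dump_traceback(file=sys.stderr, all_threads=True)


def worker_int(worker):
    # Also log stacks when a worker exits on SIGINT/SIGQUIT.
    for thread_id, frame in sys._current_frames().items():
        worker.log.warning(
            "thread %s:\n%s", thread_id, "".join(traceback.format_stack(frame))
        )
```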
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/login/wsgi.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Tencent is pleased to support the open source community by making 蓝鲸智云PaaS平台社区版 (BlueKing PaaS
4 Community Edition) available.
5 Copyright (C) 2017-2018 THL A29 Limited, a Tencent company. All rights reserved.
6 Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
7 You may obtain a copy of the License at http://opensource.org/licenses/MIT
8 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9 an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10 specific language governing permissions and limitations under the License.
11 """
12
13 import os
14
15 from dj_static import Cling
16 from django.core.wsgi import get_wsgi_application
17
18 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bklogin.config.prod")
19
20 application = Cling(get_wsgi_application())
21
```
Path: `src/saas/wsgi.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 TencentBlueKing is pleased to support the open source community by making 蓝鲸智云-用户管理(Bk-User) available.
4 Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
5 Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at http://opensource.org/licenses/MIT
7 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
8 an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
9 specific language governing permissions and limitations under the License.
10 """
11 import os
12
13 from django.core.wsgi import get_wsgi_application
14
15 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bkuser_shell.config.overlays.prod")
16
17 application = get_wsgi_application()
18
```
Path: `src/api/wsgi.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 TencentBlueKing is pleased to support the open source community by making 蓝鲸智云-用户管理(Bk-User) available.
4 Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
5 Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at http://opensource.org/licenses/MIT
7 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
8 an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
9 specific language governing permissions and limitations under the License.
10 """
11 import os
12
13 from django.core.wsgi import get_wsgi_application
14
15 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bkuser_core.config.overlays.prod")
16
17 application = get_wsgi_application()
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/api/wsgi.py b/src/api/wsgi.py
--- a/src/api/wsgi.py
+++ b/src/api/wsgi.py
@@ -8,10 +8,13 @@
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
+import faulthandler
import os
from django.core.wsgi import get_wsgi_application
+faulthandler.enable()
+
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bkuser_core.config.overlays.prod")
application = get_wsgi_application()
diff --git a/src/login/wsgi.py b/src/login/wsgi.py
--- a/src/login/wsgi.py
+++ b/src/login/wsgi.py
@@ -10,11 +10,14 @@
specific language governing permissions and limitations under the License.
"""
+import faulthandler
import os
from dj_static import Cling
from django.core.wsgi import get_wsgi_application
+faulthandler.enable()
+
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bklogin.config.prod")
application = Cling(get_wsgi_application())
diff --git a/src/saas/wsgi.py b/src/saas/wsgi.py
--- a/src/saas/wsgi.py
+++ b/src/saas/wsgi.py
@@ -8,10 +8,13 @@
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
+import faulthandler
import os
from django.core.wsgi import get_wsgi_application
+faulthandler.enable()
+
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "bkuser_shell.config.overlays.prod")
application = get_wsgi_application()
|
{"golden_diff": "diff --git a/src/api/wsgi.py b/src/api/wsgi.py\n--- a/src/api/wsgi.py\n+++ b/src/api/wsgi.py\n@@ -8,10 +8,13 @@\n an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\n specific language governing permissions and limitations under the License.\n \"\"\"\n+import faulthandler\n import os\n \n from django.core.wsgi import get_wsgi_application\n \n+faulthandler.enable()\n+\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_core.config.overlays.prod\")\n \n application = get_wsgi_application()\ndiff --git a/src/login/wsgi.py b/src/login/wsgi.py\n--- a/src/login/wsgi.py\n+++ b/src/login/wsgi.py\n@@ -10,11 +10,14 @@\n specific language governing permissions and limitations under the License.\n \"\"\"\n \n+import faulthandler\n import os\n \n from dj_static import Cling\n from django.core.wsgi import get_wsgi_application\n \n+faulthandler.enable()\n+\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bklogin.config.prod\")\n \n application = Cling(get_wsgi_application())\ndiff --git a/src/saas/wsgi.py b/src/saas/wsgi.py\n--- a/src/saas/wsgi.py\n+++ b/src/saas/wsgi.py\n@@ -8,10 +8,13 @@\n an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\n specific language governing permissions and limitations under the License.\n \"\"\"\n+import faulthandler\n import os\n \n from django.core.wsgi import get_wsgi_application\n \n+faulthandler.enable()\n+\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_shell.config.overlays.prod\")\n \n application = get_wsgi_application()\n", "issue": "\u542f\u52a8\u547d\u4ee4: gunicorn \u652f\u6301\u9000\u51fa\u524d\u6253\u5370\u5806\u6808\n\u9047\u5230gunicorn \u76f4\u63a5\u9000\u51fa\u7684\u60c5\u51b5, \u53ef\u4ee5\u4f7f\u7528 https://stackoverflow.com/questions/57167240/is-it-possible-to-get-a-stack-trace-when-a-gunicorn-worker-hits-a-timeout \u65b9\u5f0f\u8c03\u8bd5, \u6253\u5370\u9000\u51fa\u524d\u5806\u6808, \u4ece\u800c\u63a8\u65ad\u95ee\u9898\u4ee3\u7801\u4f4d\u7f6e\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencent is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91PaaS\u5e73\u53f0\u793e\u533a\u7248 (BlueKing PaaS\nCommunity Edition) available.\nCopyright (C) 2017-2018 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\n\nimport os\n\nfrom dj_static import Cling\nfrom django.core.wsgi import get_wsgi_application\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bklogin.config.prod\")\n\napplication = Cling(get_wsgi_application())\n", "path": "src/login/wsgi.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. 
All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nimport os\n\nfrom django.core.wsgi import get_wsgi_application\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_shell.config.overlays.prod\")\n\napplication = get_wsgi_application()\n", "path": "src/saas/wsgi.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nimport os\n\nfrom django.core.wsgi import get_wsgi_application\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_core.config.overlays.prod\")\n\napplication = get_wsgi_application()\n", "path": "src/api/wsgi.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencent is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91PaaS\u5e73\u53f0\u793e\u533a\u7248 (BlueKing PaaS\nCommunity Edition) available.\nCopyright (C) 2017-2018 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\n\nimport faulthandler\nimport os\n\nfrom dj_static import Cling\nfrom django.core.wsgi import get_wsgi_application\n\nfaulthandler.enable()\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bklogin.config.prod\")\n\napplication = Cling(get_wsgi_application())\n", "path": "src/login/wsgi.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. 
All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nimport faulthandler\nimport os\n\nfrom django.core.wsgi import get_wsgi_application\n\nfaulthandler.enable()\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_shell.config.overlays.prod\")\n\napplication = get_wsgi_application()\n", "path": "src/saas/wsgi.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nimport faulthandler\nimport os\n\nfrom django.core.wsgi import get_wsgi_application\n\nfaulthandler.enable()\n\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"bkuser_core.config.overlays.prod\")\n\napplication = get_wsgi_application()\n", "path": "src/api/wsgi.py"}]}
| 1,089 | 399 |
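
The issue above describes a technique rather than a single code change: enable Python's faulthandler so a gunicorn worker prints its stack before it is killed. As a standalone illustration (separate from the dataset record), one way to apply the approach referenced in the issue is a gunicorn config hook; the file name `gunicorn.conf.py`, the workers/timeout values, and the launch command below are assumptions, not part of the bk-user repository:

```python
# gunicorn.conf.py -- illustrative sketch only, not a file from the bk-user repository
import faulthandler

# Dump Python tracebacks on fatal signals. Gunicorn aborts a timed-out worker
# with SIGABRT, so with the handler enabled the worker prints its stack
# before it is killed.
faulthandler.enable()

workers = 2    # assumed worker count
timeout = 30   # assumed worker timeout in seconds


def worker_abort(worker):
    # Gunicorn server hook, called in the worker after it receives SIGABRT
    # (normally because it exceeded `timeout`); dump the current stack so the
    # offending code path shows up in the logs.
    faulthandler.dump_traceback()
```

Launched with something like `gunicorn -c gunicorn.conf.py wsgi:application` (command line assumed), a worker that hangs past the timeout then logs its Python stack instead of disappearing silently, which is the behaviour the golden diff achieves by calling `faulthandler.enable()` directly in each `wsgi.py`.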
gh_patches_debug_17087
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-17675
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
median
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/stat.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import (
5 to_ivy_arrays_and_back,
6 )
7
8
9 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle")
10 @to_ivy_arrays_and_back
11 def mean(input, axis=None, keepdim=False, out=None):
12 ret = ivy.mean(input, axis=axis, keepdims=keepdim, out=out)
13 ret = ivy.expand_dims(ret, axis=-1) if ret.ndim == 0 else ret
14 return ret
15
16
17 @with_unsupported_dtypes({"2.5.0 and below": ("complex", "int8")}, "paddle")
18 @to_ivy_arrays_and_back
19 def numel(x, name=None):
20 prod = ivy.prod(x.size, dtype=ivy.int64)
21 try:
22 length = len(x)
23 except (ValueError, TypeError):
24 length = 1 # if 0 dimensional tensor with 1 element
25 return ivy.array([prod if prod > 0 else ivy.array(length, dtype=ivy.int64)])
26
27
28 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
29 @to_ivy_arrays_and_back
30 def nanquantile(a, q, axis=None, keepdims=False, interpolation="linear", out=None):
31 return ivy.nanquantile(
32 a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out
33 )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ivy/functional/frontends/paddle/tensor/stat.py b/ivy/functional/frontends/paddle/tensor/stat.py
--- a/ivy/functional/frontends/paddle/tensor/stat.py
+++ b/ivy/functional/frontends/paddle/tensor/stat.py
@@ -1,6 +1,6 @@
# global
import ivy
-from ivy.func_wrapper import with_unsupported_dtypes
+from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
from ivy.functional.frontends.paddle.func_wrapper import (
to_ivy_arrays_and_back,
)
@@ -31,3 +31,17 @@
return ivy.nanquantile(
a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out
)
+
+
+@with_supported_dtypes(
+ {"2.5.0 and below": ("bool", "float16", "float32", "float64", "int32", "int64")},
+ "paddle",
+)
+@to_ivy_arrays_and_back
+def median(x, axis=None, keepdim=False, name=None):
+ x = (
+ ivy.astype(x, ivy.float64)
+ if ivy.dtype(x) == "float64"
+ else ivy.astype(x, ivy.float32)
+ )
+ return ivy.median(x, axis=axis, keepdims=keepdim)
|
{"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/stat.py b/ivy/functional/frontends/paddle/tensor/stat.py\n--- a/ivy/functional/frontends/paddle/tensor/stat.py\n+++ b/ivy/functional/frontends/paddle/tensor/stat.py\n@@ -1,6 +1,6 @@\n # global\n import ivy\n-from ivy.func_wrapper import with_unsupported_dtypes\n+from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\n from ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n )\n@@ -31,3 +31,17 @@\n return ivy.nanquantile(\n a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out\n )\n+\n+\n+@with_supported_dtypes(\n+ {\"2.5.0 and below\": (\"bool\", \"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n+ \"paddle\",\n+)\n+@to_ivy_arrays_and_back\n+def median(x, axis=None, keepdim=False, name=None):\n+ x = (\n+ ivy.astype(x, ivy.float64)\n+ if ivy.dtype(x) == \"float64\"\n+ else ivy.astype(x, ivy.float32)\n+ )\n+ return ivy.median(x, axis=axis, keepdims=keepdim)\n", "issue": "median\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef mean(input, axis=None, keepdim=False, out=None):\n ret = ivy.mean(input, axis=axis, keepdims=keepdim, out=out)\n ret = ivy.expand_dims(ret, axis=-1) if ret.ndim == 0 else ret\n return ret\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"complex\", \"int8\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef numel(x, name=None):\n prod = ivy.prod(x.size, dtype=ivy.int64)\n try:\n length = len(x)\n except (ValueError, TypeError):\n length = 1 # if 0 dimensional tensor with 1 element\n return ivy.array([prod if prod > 0 else ivy.array(length, dtype=ivy.int64)])\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef nanquantile(a, q, axis=None, keepdims=False, interpolation=\"linear\", out=None):\n return ivy.nanquantile(\n a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out\n )\n", "path": "ivy/functional/frontends/paddle/tensor/stat.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef mean(input, axis=None, keepdim=False, out=None):\n ret = ivy.mean(input, axis=axis, keepdims=keepdim, out=out)\n ret = ivy.expand_dims(ret, axis=-1) if ret.ndim == 0 else ret\n return ret\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"complex\", \"int8\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef numel(x, name=None):\n prod = ivy.prod(x.size, dtype=ivy.int64)\n try:\n length = len(x)\n except (ValueError, TypeError):\n length = 1 # if 0 dimensional tensor with 1 element\n return ivy.array([prod if prod > 0 else ivy.array(length, dtype=ivy.int64)])\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef nanquantile(a, q, axis=None, keepdims=False, interpolation=\"linear\", out=None):\n return ivy.nanquantile(\n a, q, axis=axis, keepdims=keepdims, interpolation=interpolation, out=out\n )\n\n\n@with_supported_dtypes(\n 
{\"2.5.0 and below\": (\"bool\", \"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef median(x, axis=None, keepdim=False, name=None):\n x = (\n ivy.astype(x, ivy.float64)\n if ivy.dtype(x) == \"float64\"\n else ivy.astype(x, ivy.float32)\n )\n return ivy.median(x, axis=axis, keepdims=keepdim)\n", "path": "ivy/functional/frontends/paddle/tensor/stat.py"}]}
| 683 | 321 |
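
The golden diff above adds a paddle-frontend `median` that upcasts the input to float32/float64 before delegating to `ivy.median`. A hypothetical usage sketch (the backend choice and input values are assumptions added here; the import path follows the file touched by the diff):

```python
import ivy
from ivy.functional.frontends.paddle.tensor.stat import median

ivy.set_backend("numpy")  # assumed backend for the example

x = ivy.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

print(median(x))                        # median over all elements
print(median(x, axis=1, keepdim=True))  # per-row medians, keeping the reduced axis
```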
gh_patches_debug_2051
|
rasdani/github-patches
|
git_diff
|
microsoft__playwright-python-13
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG]: page.getAttribute returns None
Actual:
```py
import asyncio
from playwright_web import chromium
async def run():
browser = await chromium.launch(headless=False)
context = await browser.newContext(viewport=0) # 0 stands for no viewport
page = await context.newPage()
await page.setContent(""""
<input id="kekstar"/>
""")
await page.fill("#kekstar", "Foobar")
print(await page.getAttribute("#kekstar", 'value'))
await browser.close()
asyncio.get_event_loop().run_until_complete(run())
```
Expected: Returns Foobar
On Try Playwright, it works: https://try.playwright.tech/?s=dzmwi
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `playwright_web/frame.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 from playwright_web.connection import Channel, ChannelOwner, ConnectionScope, from_channel, from_nullable_channel
17 from playwright_web.element_handle import ElementHandle, convertSelectOptionValues, ValuesToSelect
18 from playwright_web.helper import ConsoleMessageLocation, FilePayload, SelectOption, is_function_body, locals_to_params
19 from playwright_web.js_handle import JSHandle, parse_result, serialize_argument
20 from playwright_web.network import Request, Response, Route
21 from typing import Any, Awaitable, Dict, List, Optional, Union
22
23 class Frame(ChannelOwner):
24
25 def __init__(self, scope: ConnectionScope, guid: str, initializer: Dict) -> None:
26 super().__init__(scope, guid, initializer)
27 self._parent_frame = from_nullable_channel(initializer['parentFrame'])
28 if self._parent_frame:
29 self._parent_frame._child_frames.append(self)
30 self._name = initializer['name']
31 self._url = initializer['url']
32 self._detached = False
33 self._child_frames: List[Frame] = list()
34 self._page: Optional['Page']
35
36 async def goto(self,
37 url: str,
38 timeout: int = None,
39 waitUntil: str = None, # Literal['load', 'domcontentloaded', 'networkidle'] = None,
40 referer: str = None) -> Optional[Response]:
41 return from_nullable_channel(await self._channel.send('goto', locals_to_params(locals())))
42
43 async def waitForNavigation(self,
44 timeout: int = None,
45 waitUntil: str = None, # Literal['load', 'domcontentloaded', 'networkidle'] = None,
46 url: str = None # TODO: add url, callback
47 ) -> Optional[Response]:
48 return from_nullable_channel(await self._channel.send('waitForNavigation', locals_to_params(locals())))
49
50 async def waitForLoadState(self,
51 state: str = 'load',
52 timeout: int = None) -> None:
53 await self._channel.send('waitForLoadState', locals_to_params(locals()))
54
55 async def frameElement(self) -> ElementHandle:
56 return from_channel(await self._channel.send('frameElement'))
57
58 async def evaluate(self, expression: str, arg: Any = None, force_expr: bool = False) -> Any:
59 if not is_function_body(expression):
60 force_expr = True
61 return parse_result(await self._channel.send('evaluateExpression', dict(expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))
62
63 async def evaluateHandle(self, expression: str, arg: Any = None, force_expr: bool = False) -> JSHandle:
64 if not is_function_body(expression):
65 force_expr = True
66 return from_channel(await self._channel.send('evaluateExpressionHandle', dict(expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))
67
68 async def querySelector(self, selector: str) -> Optional[ElementHandle]:
69 return from_nullable_channel(await self._channel.send('querySelector', dict(selector=selector)))
70
71 async def waitForSelector(self,
72 selector: str,
73 timeout: int = None,
74 state: str = None, # Literal['attached', 'detached', 'visible', 'hidden'] = None
75 ) -> Optional[ElementHandle]:
76 return from_nullable_channel(await self._channel.send('waitForSelector', locals_to_params(locals())))
77
78 async def dispatchEvent(self,
79 selector: str,
80 type: str,
81 eventInit: Dict = None,
82 timeout: int = None) -> None:
83 await self._channel.send('dispatchEvent', dict(selector=selector, type=type, eventInit=eventInit))
84
85 async def evalOnSelector(self, selector: str, expression: str, arg: Any = None, force_expr: bool = False) -> Any:
86 return parse_result(await self._channel.send('evalOnSelector', dict(selector=selector, expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))
87
88 async def evalOnSelectorAll(self, selector: str, expression: str, arg: Any = None, force_expr: bool = False) -> Any:
89 return parse_result(await self._channel.send('evalOnSelectorAll', dict(selector=selector, expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))
90
91 async def content(self) -> str:
92 return await self._channel.send('content')
93
94 async def setContent(self,
95 html: str, timeout: int = None,
96 waitUntil: str = None, # Literal['load', 'domcontentloaded', 'networkidle'] = None
97 ) -> None:
98 await self._channel.send('setContent', locals_to_params(locals()))
99
100 @property
101 def name(self) -> str:
102 return self._name or ''
103
104 @property
105 def url(self) -> str:
106 return self._url or ''
107
108 @property
109 def parentFrame(self) -> Optional['Frame']:
110 return self._parent_frame
111
112 @property
113 def childFrames(self) -> List['Frame']:
114 return self._child_frames.copy()
115
116 def isDetached(self) -> bool:
117 return self._detached
118
119 async def addScriptTag(self,
120 url: str = None,
121 path: str = None,
122 content: str = None) -> ElementHandle:
123 return from_channel(await self._channel.send('addScriptTag', locals_to_params(locals())))
124
125 async def addStyleTag(self,
126 url: str = None,
127 path: str = None,
128 content: str = None) -> ElementHandle:
129 return from_channel(await self._channel.send('addStyleTag', locals_to_params(locals())))
130
131 async def click(self,
132 selector: str,
133 modifiers: List[str] = None, # Literal['Alt', 'Control', 'Meta', 'Shift']] = None,
134 position: Dict = None,
135 delay: int = None,
136 button: str = None, # Literal['left', 'right', 'middle'] = None,
137 clickCount: int = None,
138 timeout: int = None,
139 force: bool = None,
140 noWaitAfter: bool = None) -> None:
141 await self._channel.send('click', locals_to_params(locals()))
142
143 async def dblclick(self,
144 selector: str,
145 modifiers: List[str] = None, # Literal['Alt', 'Control', 'Meta', 'Shift']] = None,
146 position: Dict = None,
147 delay: int = None,
148 button: str = None, # Literal['left', 'right', 'middle'] = None,
149 timeout: int = None,
150 force: bool = None) -> None:
151 await self._channel.send('dblclick', locals_to_params(locals()))
152
153 async def fill(self,
154 selector: str,
155 value: str,
156 timeout: int = None,
157 noWaitAfter: bool = None) -> None:
158 await self._channel.send('fill', locals_to_params(locals()))
159
160 async def focus(self,
161 selector: str,
162 timeout: int = None) -> None:
163 await self._channel.send('focus', locals_to_params(locals()))
164
165 async def textContent(self,
166 selector: str,
167 timeout: int = None) -> str:
168 return await self._channel.send('textContent', locals_to_params(locals()))
169
170 async def innerText(self,
171 selector: str,
172 timeout: int = None) -> str:
173 return await self._channel.send('innerText', locals_to_params(locals()))
174
175 async def innerHTML(self,
176 selector: str,
177 timeout: int = None) -> str:
178 return await self._channel.send('innerHTML', locals_to_params(locals()))
179
180 async def getAttribute(self,
181 selector: str,
182 name: str,
183 timeout: int = None) -> str:
184 await self._channel.send('getAttribute', locals_to_params(locals()))
185
186 async def hover(self,
187 selector: str,
188 modifiers: List[str] = None, # Literal['Alt', 'Control', 'Meta', 'Shift']] = None,
189 position: Dict = None,
190 timeout: int = None,
191 force: bool = None) -> None:
192 await self._channel.send('hover', locals_to_params(locals()))
193
194 async def selectOption(self,
195 selector: str,
196 values: ValuesToSelect,
197 timeout: int = None,
198 noWaitAfter: bool = None) -> None:
199 await self._channel.send('selectOption', dict(selector=selector, values=convertSelectOptionValues(values), timeout=timeout, noWaitAfter=noWaitAfter))
200
201 async def setInputFiles(self,
202 selector: str,
203 files: Union[str, FilePayload, List[str], List[FilePayload]],
204 timeout: int = None,
205 noWaitAfter: bool = None) -> None:
206 await self._channel.send('setInputFiles', locals_to_params(locals()))
207
208 async def type(self,
209 selector: str,
210 text: str,
211 delay: int = None,
212 timeout: int = None,
213 noWaitAfter: bool = None) -> None:
214 await self._channel.send('type', locals_to_params(locals()))
215
216 async def press(self,
217 selector: str,
218 key: str,
219 delay: int = None,
220 timeout: int = None,
221 noWaitAfter: bool = None) -> None:
222 await self._channel.send('press', locals_to_params(locals()))
223
224 async def check(self,
225 selector: str,
226 timeout: int = None,
227 force: bool = None,
228 noWaitAfter: bool = None) -> None:
229 await self._channel.send('check', locals_to_params(locals()))
230
231 async def uncheck(self,
232 selector: str,
233 timeout: int = None,
234 force: bool = None,
235 noWaitAfter: bool = None) -> None:
236 await self._channel.send('uncheck', locals_to_params(locals()))
237
238 async def waitForTimeout(self, timeout: int) -> Awaitable[None]:
239 return self._scope._loop.create_task(asyncio.sleep(timeout / 1000))
240
241 async def waitForFunction(self,
242 expression: str,
243 arg: Any = None,
244 force_expr: bool = False,
245 timeout: int = None,
246 polling: Union[int, str] = None # Union[int, Literal["raf"]]
247 ) -> JSHandle:
248 if not is_function_body(expression):
249 force_expr = True
250 params = locals_to_params(locals())
251 params['isFunction'] = not(force_expr)
252 params['arg'] = serialize_argument(arg)
253 return from_channel(await self._channel.send('waitForFunction', params))
254
255 async def title(self) -> str:
256 return await self._channel.send('title')
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/playwright_web/frame.py b/playwright_web/frame.py
--- a/playwright_web/frame.py
+++ b/playwright_web/frame.py
@@ -181,7 +181,7 @@
selector: str,
name: str,
timeout: int = None) -> str:
- await self._channel.send('getAttribute', locals_to_params(locals()))
+ return await self._channel.send('getAttribute', locals_to_params(locals()))
async def hover(self,
selector: str,
|
{"golden_diff": "diff --git a/playwright_web/frame.py b/playwright_web/frame.py\n--- a/playwright_web/frame.py\n+++ b/playwright_web/frame.py\n@@ -181,7 +181,7 @@\n selector: str,\n name: str,\n timeout: int = None) -> str:\n- await self._channel.send('getAttribute', locals_to_params(locals()))\n+ return await self._channel.send('getAttribute', locals_to_params(locals()))\n \n async def hover(self,\n selector: str,\n", "issue": "[BUG]: page.getAttribute returns None\nActual:\r\n\r\n```py\r\nimport asyncio\r\nfrom playwright_web import chromium\r\n\r\n\r\nasync def run():\r\n browser = await chromium.launch(headless=False)\r\n context = await browser.newContext(viewport=0) # 0 stands for no viewport\r\n page = await context.newPage()\r\n\r\n await page.setContent(\"\"\"\"\r\n <input id=\"kekstar\"/>\r\n \"\"\")\r\n\r\n await page.fill(\"#kekstar\", \"Foobar\")\r\n\r\n print(await page.getAttribute(\"#kekstar\", 'value'))\r\n\r\n await browser.close()\r\n\r\n\r\nasyncio.get_event_loop().run_until_complete(run())\r\n\r\n```\r\n\r\nExpected: Returns Foobar\r\n\r\nOn Try Playwright, it works: https://try.playwright.tech/?s=dzmwi\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nfrom playwright_web.connection import Channel, ChannelOwner, ConnectionScope, from_channel, from_nullable_channel\nfrom playwright_web.element_handle import ElementHandle, convertSelectOptionValues, ValuesToSelect\nfrom playwright_web.helper import ConsoleMessageLocation, FilePayload, SelectOption, is_function_body, locals_to_params\nfrom playwright_web.js_handle import JSHandle, parse_result, serialize_argument\nfrom playwright_web.network import Request, Response, Route\nfrom typing import Any, Awaitable, Dict, List, Optional, Union\n\nclass Frame(ChannelOwner):\n\n def __init__(self, scope: ConnectionScope, guid: str, initializer: Dict) -> None:\n super().__init__(scope, guid, initializer)\n self._parent_frame = from_nullable_channel(initializer['parentFrame'])\n if self._parent_frame:\n self._parent_frame._child_frames.append(self)\n self._name = initializer['name']\n self._url = initializer['url']\n self._detached = False\n self._child_frames: List[Frame] = list()\n self._page: Optional['Page']\n\n async def goto(self,\n url: str,\n timeout: int = None,\n waitUntil: str = None, # Literal['load', 'domcontentloaded', 'networkidle'] = None,\n referer: str = None) -> Optional[Response]:\n return from_nullable_channel(await self._channel.send('goto', locals_to_params(locals())))\n\n async def waitForNavigation(self,\n timeout: int = None,\n waitUntil: str = None, # Literal['load', 'domcontentloaded', 'networkidle'] = None,\n url: str = None # TODO: add url, callback\n ) -> Optional[Response]:\n return from_nullable_channel(await self._channel.send('waitForNavigation', locals_to_params(locals())))\n\n async def waitForLoadState(self,\n state: str = 'load',\n timeout: int = None) -> None:\n await self._channel.send('waitForLoadState', 
locals_to_params(locals()))\n\n async def frameElement(self) -> ElementHandle:\n return from_channel(await self._channel.send('frameElement'))\n\n async def evaluate(self, expression: str, arg: Any = None, force_expr: bool = False) -> Any:\n if not is_function_body(expression):\n force_expr = True\n return parse_result(await self._channel.send('evaluateExpression', dict(expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))\n\n async def evaluateHandle(self, expression: str, arg: Any = None, force_expr: bool = False) -> JSHandle:\n if not is_function_body(expression):\n force_expr = True\n return from_channel(await self._channel.send('evaluateExpressionHandle', dict(expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))\n\n async def querySelector(self, selector: str) -> Optional[ElementHandle]:\n return from_nullable_channel(await self._channel.send('querySelector', dict(selector=selector)))\n\n async def waitForSelector(self,\n selector: str,\n timeout: int = None,\n state: str = None, # Literal['attached', 'detached', 'visible', 'hidden'] = None\n ) -> Optional[ElementHandle]:\n return from_nullable_channel(await self._channel.send('waitForSelector', locals_to_params(locals())))\n\n async def dispatchEvent(self,\n selector: str,\n type: str,\n eventInit: Dict = None,\n timeout: int = None) -> None:\n await self._channel.send('dispatchEvent', dict(selector=selector, type=type, eventInit=eventInit))\n\n async def evalOnSelector(self, selector: str, expression: str, arg: Any = None, force_expr: bool = False) -> Any:\n return parse_result(await self._channel.send('evalOnSelector', dict(selector=selector, expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))\n\n async def evalOnSelectorAll(self, selector: str, expression: str, arg: Any = None, force_expr: bool = False) -> Any:\n return parse_result(await self._channel.send('evalOnSelectorAll', dict(selector=selector, expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))\n\n async def content(self) -> str:\n return await self._channel.send('content')\n\n async def setContent(self,\n html: str, timeout: int = None,\n waitUntil: str = None, # Literal['load', 'domcontentloaded', 'networkidle'] = None\n ) -> None:\n await self._channel.send('setContent', locals_to_params(locals()))\n\n @property\n def name(self) -> str:\n return self._name or ''\n\n @property\n def url(self) -> str:\n return self._url or ''\n\n @property\n def parentFrame(self) -> Optional['Frame']:\n return self._parent_frame\n\n @property\n def childFrames(self) -> List['Frame']:\n return self._child_frames.copy()\n\n def isDetached(self) -> bool:\n return self._detached\n\n async def addScriptTag(self,\n url: str = None,\n path: str = None,\n content: str = None) -> ElementHandle:\n return from_channel(await self._channel.send('addScriptTag', locals_to_params(locals())))\n\n async def addStyleTag(self,\n url: str = None,\n path: str = None,\n content: str = None) -> ElementHandle:\n return from_channel(await self._channel.send('addStyleTag', locals_to_params(locals())))\n\n async def click(self,\n selector: str,\n modifiers: List[str] = None, # Literal['Alt', 'Control', 'Meta', 'Shift']] = None,\n position: Dict = None,\n delay: int = None,\n button: str = None, # Literal['left', 'right', 'middle'] = None,\n clickCount: int = None,\n timeout: int = None,\n force: bool = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('click', 
locals_to_params(locals()))\n\n async def dblclick(self,\n selector: str,\n modifiers: List[str] = None, # Literal['Alt', 'Control', 'Meta', 'Shift']] = None,\n position: Dict = None,\n delay: int = None,\n button: str = None, # Literal['left', 'right', 'middle'] = None,\n timeout: int = None,\n force: bool = None) -> None:\n await self._channel.send('dblclick', locals_to_params(locals()))\n\n async def fill(self,\n selector: str,\n value: str,\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('fill', locals_to_params(locals()))\n\n async def focus(self,\n selector: str,\n timeout: int = None) -> None:\n await self._channel.send('focus', locals_to_params(locals()))\n\n async def textContent(self,\n selector: str,\n timeout: int = None) -> str:\n return await self._channel.send('textContent', locals_to_params(locals()))\n\n async def innerText(self,\n selector: str,\n timeout: int = None) -> str:\n return await self._channel.send('innerText', locals_to_params(locals()))\n\n async def innerHTML(self,\n selector: str,\n timeout: int = None) -> str:\n return await self._channel.send('innerHTML', locals_to_params(locals()))\n\n async def getAttribute(self,\n selector: str,\n name: str,\n timeout: int = None) -> str:\n await self._channel.send('getAttribute', locals_to_params(locals()))\n\n async def hover(self,\n selector: str,\n modifiers: List[str] = None, # Literal['Alt', 'Control', 'Meta', 'Shift']] = None,\n position: Dict = None,\n timeout: int = None,\n force: bool = None) -> None:\n await self._channel.send('hover', locals_to_params(locals()))\n\n async def selectOption(self,\n selector: str,\n values: ValuesToSelect,\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('selectOption', dict(selector=selector, values=convertSelectOptionValues(values), timeout=timeout, noWaitAfter=noWaitAfter))\n\n async def setInputFiles(self,\n selector: str,\n files: Union[str, FilePayload, List[str], List[FilePayload]],\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('setInputFiles', locals_to_params(locals()))\n\n async def type(self,\n selector: str,\n text: str,\n delay: int = None,\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('type', locals_to_params(locals()))\n\n async def press(self,\n selector: str,\n key: str,\n delay: int = None,\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('press', locals_to_params(locals()))\n\n async def check(self,\n selector: str,\n timeout: int = None,\n force: bool = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('check', locals_to_params(locals()))\n\n async def uncheck(self,\n selector: str,\n timeout: int = None,\n force: bool = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('uncheck', locals_to_params(locals()))\n\n async def waitForTimeout(self, timeout: int) -> Awaitable[None]:\n return self._scope._loop.create_task(asyncio.sleep(timeout / 1000))\n\n async def waitForFunction(self,\n expression: str,\n arg: Any = None,\n force_expr: bool = False,\n timeout: int = None,\n polling: Union[int, str] = None # Union[int, Literal[\"raf\"]]\n ) -> JSHandle:\n if not is_function_body(expression):\n force_expr = True\n params = locals_to_params(locals())\n params['isFunction'] = not(force_expr)\n params['arg'] = serialize_argument(arg)\n return from_channel(await self._channel.send('waitForFunction', params))\n\n async def title(self) 
-> str:\n return await self._channel.send('title')\n", "path": "playwright_web/frame.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nfrom playwright_web.connection import Channel, ChannelOwner, ConnectionScope, from_channel, from_nullable_channel\nfrom playwright_web.element_handle import ElementHandle, convertSelectOptionValues, ValuesToSelect\nfrom playwright_web.helper import ConsoleMessageLocation, FilePayload, SelectOption, is_function_body, locals_to_params\nfrom playwright_web.js_handle import JSHandle, parse_result, serialize_argument\nfrom playwright_web.network import Request, Response, Route\nfrom typing import Any, Awaitable, Dict, List, Optional, Union\n\nclass Frame(ChannelOwner):\n\n def __init__(self, scope: ConnectionScope, guid: str, initializer: Dict) -> None:\n super().__init__(scope, guid, initializer)\n self._parent_frame = from_nullable_channel(initializer['parentFrame'])\n if self._parent_frame:\n self._parent_frame._child_frames.append(self)\n self._name = initializer['name']\n self._url = initializer['url']\n self._detached = False\n self._child_frames: List[Frame] = list()\n self._page: Optional['Page']\n\n async def goto(self,\n url: str,\n timeout: int = None,\n waitUntil: str = None, # Literal['load', 'domcontentloaded', 'networkidle'] = None,\n referer: str = None) -> Optional[Response]:\n return from_nullable_channel(await self._channel.send('goto', locals_to_params(locals())))\n\n async def waitForNavigation(self,\n timeout: int = None,\n waitUntil: str = None, # Literal['load', 'domcontentloaded', 'networkidle'] = None,\n url: str = None # TODO: add url, callback\n ) -> Optional[Response]:\n return from_nullable_channel(await self._channel.send('waitForNavigation', locals_to_params(locals())))\n\n async def waitForLoadState(self,\n state: str = 'load',\n timeout: int = None) -> None:\n await self._channel.send('waitForLoadState', locals_to_params(locals()))\n\n async def frameElement(self) -> ElementHandle:\n return from_channel(await self._channel.send('frameElement'))\n\n async def evaluate(self, expression: str, arg: Any = None, force_expr: bool = False) -> Any:\n if not is_function_body(expression):\n force_expr = True\n return parse_result(await self._channel.send('evaluateExpression', dict(expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))\n\n async def evaluateHandle(self, expression: str, arg: Any = None, force_expr: bool = False) -> JSHandle:\n if not is_function_body(expression):\n force_expr = True\n return from_channel(await self._channel.send('evaluateExpressionHandle', dict(expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))\n\n async def querySelector(self, selector: str) -> Optional[ElementHandle]:\n return from_nullable_channel(await self._channel.send('querySelector', dict(selector=selector)))\n\n async def waitForSelector(self,\n selector: str,\n timeout: int = None,\n state: str = 
None, # Literal['attached', 'detached', 'visible', 'hidden'] = None\n ) -> Optional[ElementHandle]:\n return from_nullable_channel(await self._channel.send('waitForSelector', locals_to_params(locals())))\n\n async def dispatchEvent(self,\n selector: str,\n type: str,\n eventInit: Dict = None,\n timeout: int = None) -> None:\n await self._channel.send('dispatchEvent', dict(selector=selector, type=type, eventInit=eventInit))\n\n async def evalOnSelector(self, selector: str, expression: str, arg: Any = None, force_expr: bool = False) -> Any:\n return parse_result(await self._channel.send('evalOnSelector', dict(selector=selector, expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))\n\n async def evalOnSelectorAll(self, selector: str, expression: str, arg: Any = None, force_expr: bool = False) -> Any:\n return parse_result(await self._channel.send('evalOnSelectorAll', dict(selector=selector, expression=expression, isFunction=not(force_expr), arg=serialize_argument(arg))))\n\n async def content(self) -> str:\n return await self._channel.send('content')\n\n async def setContent(self,\n html: str, timeout: int = None,\n waitUntil: str = None, # Literal['load', 'domcontentloaded', 'networkidle'] = None\n ) -> None:\n await self._channel.send('setContent', locals_to_params(locals()))\n\n @property\n def name(self) -> str:\n return self._name or ''\n\n @property\n def url(self) -> str:\n return self._url or ''\n\n @property\n def parentFrame(self) -> Optional['Frame']:\n return self._parent_frame\n\n @property\n def childFrames(self) -> List['Frame']:\n return self._child_frames.copy()\n\n def isDetached(self) -> bool:\n return self._detached\n\n async def addScriptTag(self,\n url: str = None,\n path: str = None,\n content: str = None) -> ElementHandle:\n return from_channel(await self._channel.send('addScriptTag', locals_to_params(locals())))\n\n async def addStyleTag(self,\n url: str = None,\n path: str = None,\n content: str = None) -> ElementHandle:\n return from_channel(await self._channel.send('addStyleTag', locals_to_params(locals())))\n\n async def click(self,\n selector: str,\n modifiers: List[str] = None, # Literal['Alt', 'Control', 'Meta', 'Shift']] = None,\n position: Dict = None,\n delay: int = None,\n button: str = None, # Literal['left', 'right', 'middle'] = None,\n clickCount: int = None,\n timeout: int = None,\n force: bool = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('click', locals_to_params(locals()))\n\n async def dblclick(self,\n selector: str,\n modifiers: List[str] = None, # Literal['Alt', 'Control', 'Meta', 'Shift']] = None,\n position: Dict = None,\n delay: int = None,\n button: str = None, # Literal['left', 'right', 'middle'] = None,\n timeout: int = None,\n force: bool = None) -> None:\n await self._channel.send('dblclick', locals_to_params(locals()))\n\n async def fill(self,\n selector: str,\n value: str,\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('fill', locals_to_params(locals()))\n\n async def focus(self,\n selector: str,\n timeout: int = None) -> None:\n await self._channel.send('focus', locals_to_params(locals()))\n\n async def textContent(self,\n selector: str,\n timeout: int = None) -> str:\n return await self._channel.send('textContent', locals_to_params(locals()))\n\n async def innerText(self,\n selector: str,\n timeout: int = None) -> str:\n return await self._channel.send('innerText', locals_to_params(locals()))\n\n async def innerHTML(self,\n selector: str,\n 
timeout: int = None) -> str:\n return await self._channel.send('innerHTML', locals_to_params(locals()))\n\n async def getAttribute(self,\n selector: str,\n name: str,\n timeout: int = None) -> str:\n return await self._channel.send('getAttribute', locals_to_params(locals()))\n\n async def hover(self,\n selector: str,\n modifiers: List[str] = None, # Literal['Alt', 'Control', 'Meta', 'Shift']] = None,\n position: Dict = None,\n timeout: int = None,\n force: bool = None) -> None:\n await self._channel.send('hover', locals_to_params(locals()))\n\n async def selectOption(self,\n selector: str,\n values: ValuesToSelect,\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('selectOption', dict(selector=selector, values=convertSelectOptionValues(values), timeout=timeout, noWaitAfter=noWaitAfter))\n\n async def setInputFiles(self,\n selector: str,\n files: Union[str, FilePayload, List[str], List[FilePayload]],\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('setInputFiles', locals_to_params(locals()))\n\n async def type(self,\n selector: str,\n text: str,\n delay: int = None,\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('type', locals_to_params(locals()))\n\n async def press(self,\n selector: str,\n key: str,\n delay: int = None,\n timeout: int = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('press', locals_to_params(locals()))\n\n async def check(self,\n selector: str,\n timeout: int = None,\n force: bool = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('check', locals_to_params(locals()))\n\n async def uncheck(self,\n selector: str,\n timeout: int = None,\n force: bool = None,\n noWaitAfter: bool = None) -> None:\n await self._channel.send('uncheck', locals_to_params(locals()))\n\n async def waitForTimeout(self, timeout: int) -> Awaitable[None]:\n return self._scope._loop.create_task(asyncio.sleep(timeout / 1000))\n\n async def waitForFunction(self,\n expression: str,\n arg: Any = None,\n force_expr: bool = False,\n timeout: int = None,\n polling: Union[int, str] = None # Union[int, Literal[\"raf\"]]\n ) -> JSHandle:\n if not is_function_body(expression):\n force_expr = True\n params = locals_to_params(locals())\n params['isFunction'] = not(force_expr)\n params['arg'] = serialize_argument(arg)\n return from_channel(await self._channel.send('waitForFunction', params))\n\n async def title(self) -> str:\n return await self._channel.send('title')\n", "path": "playwright_web/frame.py"}]}
| 3,499 | 111 |
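
The reproduction script quoted in the issue above doubles as a check of the fixed behaviour. A lightly adapted sketch follows; the headless launch and the final assert are assumptions added for illustration, while every `playwright_web` call is taken from the repro itself:

```python
import asyncio

from playwright_web import chromium


async def check_get_attribute():
    # Mirrors the reproduction from the issue report.
    browser = await chromium.launch(headless=True)
    context = await browser.newContext(viewport=0)  # 0 stands for no viewport
    page = await context.newPage()
    await page.setContent('<input id="kekstar"/>')
    await page.fill('#kekstar', 'Foobar')
    # With the missing `return` restored in Frame.getAttribute, this is
    # expected (per the issue) to be 'Foobar' rather than None.
    assert await page.getAttribute('#kekstar', 'value') == 'Foobar'
    await browser.close()


asyncio.get_event_loop().run_until_complete(check_get_attribute())
```

Before the patch the coroutine falls through without a `return`, so the comparison sees `None` instead of `'Foobar'`; the golden diff above restores the `return` in `Frame.getAttribute`.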