problem_id (stringlengths 18–22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13–58) | prompt (stringlengths 1.1k–25.4k) | golden_diff (stringlengths 145–5.13k) | verification_info (stringlengths 582–39.1k) | num_tokens (int64, 271–4.1k) | num_tokens_diff (int64, 47–1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_32193 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-3536 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when trying to reset password of other user
## Steps to reproduce
1. Set up another Mathesar user (other than the one you're logged in as).
1. Edit the another user and try to reset their password.
1. Observe this error message:

An API request is made to `/api/ui/v0/users/2/password_reset/` which returns a Django error
> AttributeError at /api/ui/v0/users/2/password_reset/
>
> 'PasswordResetSerializer' object has no attribute 'validate_password'
<details>
<summary>Traceback</summary>
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/api/ui/v0/users/2/password_reset/
Django Version: 4.2.10
Python Version: 3.9.19
Installed Applications:
['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'whitenoise.runserver_nostatic',
'django.contrib.staticfiles',
'rest_framework',
'django_filters',
'django_property_filter',
'drf_spectacular',
'mathesar']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'mathesar.middleware.CursorClosedHandlerMiddleware',
'mathesar.middleware.PasswordChangeNeededMiddleware',
'django_userforeignkey.middleware.UserForeignKeyMiddleware',
'django_request_cache.middleware.RequestCacheMiddleware']
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 466, in handle_exception
response = exception_handler(exc, context)
File "/code/mathesar/exception_handlers.py", line 63, in mathesar_exception_handler
raise exc
File "/usr/local/lib/python3.9/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/code/mathesar/api/ui/viewsets/users.py", line 29, in password_reset
serializer.is_valid(raise_exception=True)
File "/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py", line 235, in is_valid
raise ValidationError(self.errors)
File "/code/mathesar/api/exceptions/mixins.py", line 98, in errors
pretty_errors = self.build_pretty_errors(ugly_errors)
File "/code/mathesar/api/exceptions/mixins.py", line 64, in build_pretty_errors
pretty.extend(self.get_field_error_entries(errors[error_type], field))
File "/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py", line 180, in get_field_error_entries
return [self.get_field_error_entry(error, field) for error in errors]
File "/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py", line 180, in <listcomp>
return [self.get_field_error_entry(error, field) for error in errors]
File "/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py", line 168, in get_field_error_entry
validator = getattr(self, "validate_%s" % field.field_name)
Exception Type: AttributeError at /api/ui/v0/users/2/password_reset/
Exception Value: 'PasswordResetSerializer' object has no attribute 'validate_password'
```
</details>
I can reproduce this on the latest develop branch as well as the most recent release (Mathesar 0.1.6).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/api/ui/serializers/users.py`
Content:
```
1 from django.contrib.auth.password_validation import validate_password
2 from rest_access_policy import FieldAccessMixin, PermittedPkRelatedField
3 from rest_framework import serializers
4
5 from mathesar.api.db.permissions.database import DatabaseAccessPolicy
6 from mathesar.api.db.permissions.schema import SchemaAccessPolicy
7 from mathesar.api.exceptions.mixins import MathesarErrorMessageMixin
8 from mathesar.api.exceptions.validation_exceptions.exceptions import IncorrectOldPassword
9 from mathesar.api.ui.permissions.users import UserAccessPolicy
10 from mathesar.models.base import Database, Schema
11 from mathesar.models.users import User, DatabaseRole, SchemaRole
12
13
14 class NestedDatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
15 class Meta:
16 model = DatabaseRole
17 fields = ['id', 'database', 'role']
18
19
20 class NestedSchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
21 class Meta:
22 model = SchemaRole
23 fields = ['id', 'schema', 'role']
24
25
26 class UserSerializer(MathesarErrorMessageMixin, FieldAccessMixin, serializers.ModelSerializer):
27 database_roles = NestedDatabaseRoleSerializer(many=True, required=False)
28 schema_roles = NestedSchemaRoleSerializer(many=True, required=False)
29 access_policy = UserAccessPolicy
30
31 class Meta:
32 model = User
33 fields = [
34 'id',
35 'full_name',
36 'short_name',
37 'username',
38 'password',
39 'email',
40 'is_superuser',
41 'database_roles',
42 'schema_roles',
43 'display_language'
44 ]
45 extra_kwargs = {
46 'password': {'write_only': True},
47 'database_roles': {'read_only': True},
48 'schema_roles': {'read_only': True}
49 }
50
51 def get_fields(self):
52 fields = super().get_fields()
53 request = self.context.get("request", None)
54 if not hasattr(request, 'parser_context'):
55 return fields
56 kwargs = request.parser_context.get('kwargs')
57 if kwargs:
58 user_pk = kwargs.get('pk')
59 if user_pk:
60 if request.user.id == int(user_pk) or not request.user.is_superuser:
61 fields["is_superuser"].read_only = True
62 return fields
63
64 def create(self, validated_data):
65 password = validated_data.pop('password')
66 user = User(**validated_data)
67 user.password_change_needed = True
68 user.set_password(password)
69 user.save()
70 return user
71
72
73 class ChangePasswordSerializer(MathesarErrorMessageMixin, serializers.Serializer):
74 password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
75 old_password = serializers.CharField(write_only=True, required=True)
76
77 def validate_old_password(self, value):
78 user = self.context['request'].user
79 if user.check_password(value) is True:
80 return value
81 raise IncorrectOldPassword(field='old_password')
82
83 def update(self, instance, validated_data):
84 instance.set_password(validated_data['password'])
85 instance.save()
86 return instance
87
88
89 class PasswordResetSerializer(MathesarErrorMessageMixin, serializers.Serializer):
90 password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
91
92
93 class DatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
94 class Meta:
95 model = DatabaseRole
96 fields = ['id', 'user', 'database', 'role']
97
98 # Restrict the list of databases to which the user has access to create a database role
99 # Refer https://rsinger86.github.io/drf-access-policy/policy_reuse/ for the usage of `PermittedPkRelatedField`
100 database = PermittedPkRelatedField(
101 access_policy=DatabaseAccessPolicy,
102 queryset=Database.current_objects.all()
103 )
104
105
106 class SchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
107 class Meta:
108 model = SchemaRole
109 fields = ['id', 'user', 'schema', 'role']
110
111 schema = PermittedPkRelatedField(
112 access_policy=SchemaAccessPolicy,
113 queryset=Schema.current_objects.all()
114 )
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mathesar/api/ui/serializers/users.py b/mathesar/api/ui/serializers/users.py
--- a/mathesar/api/ui/serializers/users.py
+++ b/mathesar/api/ui/serializers/users.py
@@ -1,4 +1,5 @@
from django.contrib.auth.password_validation import validate_password
+from django.core.exceptions import ValidationError as DjangoValidationError
from rest_access_policy import FieldAccessMixin, PermittedPkRelatedField
from rest_framework import serializers
@@ -71,7 +72,7 @@
class ChangePasswordSerializer(MathesarErrorMessageMixin, serializers.Serializer):
- password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
+ password = serializers.CharField(write_only=True, required=True)
old_password = serializers.CharField(write_only=True, required=True)
def validate_old_password(self, value):
@@ -80,6 +81,13 @@
return value
raise IncorrectOldPassword(field='old_password')
+ def validate_password(self, value):
+ try:
+ validate_password(value)
+ except DjangoValidationError as e:
+ raise e
+ return value
+
def update(self, instance, validated_data):
instance.set_password(validated_data['password'])
instance.save()
@@ -87,7 +95,7 @@
class PasswordResetSerializer(MathesarErrorMessageMixin, serializers.Serializer):
- password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
+ password = serializers.CharField(write_only=True, required=True)
class DatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
| {"golden_diff": "diff --git a/mathesar/api/ui/serializers/users.py b/mathesar/api/ui/serializers/users.py\n--- a/mathesar/api/ui/serializers/users.py\n+++ b/mathesar/api/ui/serializers/users.py\n@@ -1,4 +1,5 @@\n from django.contrib.auth.password_validation import validate_password\n+from django.core.exceptions import ValidationError as DjangoValidationError\n from rest_access_policy import FieldAccessMixin, PermittedPkRelatedField\n from rest_framework import serializers\n \n@@ -71,7 +72,7 @@\n \n \n class ChangePasswordSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n- password = serializers.CharField(write_only=True, required=True, validators=[validate_password])\n+ password = serializers.CharField(write_only=True, required=True)\n old_password = serializers.CharField(write_only=True, required=True)\n \n def validate_old_password(self, value):\n@@ -80,6 +81,13 @@\n return value\n raise IncorrectOldPassword(field='old_password')\n \n+ def validate_password(self, value):\n+ try:\n+ validate_password(value)\n+ except DjangoValidationError as e:\n+ raise e\n+ return value\n+\n def update(self, instance, validated_data):\n instance.set_password(validated_data['password'])\n instance.save()\n@@ -87,7 +95,7 @@\n \n \n class PasswordResetSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n- password = serializers.CharField(write_only=True, required=True, validators=[validate_password])\n+ password = serializers.CharField(write_only=True, required=True)\n \n \n class DatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n", "issue": "Error when trying to reset password of other user\n## Steps to reproduce\n\n1. Set up another Mathesar user (other than the one you're logged in as).\n\n1. Edit the another user and try to reset their password.\n\n1. 
Observe this error message:\n\n \n \n An API request is made to `/api/ui/v0/users/2/password_reset/` which returns a Django error\n\n > AttributeError at /api/ui/v0/users/2/password_reset/\n >\n > 'PasswordResetSerializer' object has no attribute 'validate_password'\n\n <details>\n <summary>Traceback</summary>\n\n ```\n Environment:\n\n\n Request Method: POST\n Request URL: http://localhost:8000/api/ui/v0/users/2/password_reset/\n\n Django Version: 4.2.10\n Python Version: 3.9.19\n Installed Applications:\n ['django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'whitenoise.runserver_nostatic',\n 'django.contrib.staticfiles',\n 'rest_framework',\n 'django_filters',\n 'django_property_filter',\n 'drf_spectacular',\n 'mathesar']\n Installed Middleware:\n ['django.middleware.security.SecurityMiddleware',\n 'whitenoise.middleware.WhiteNoiseMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'mathesar.middleware.CursorClosedHandlerMiddleware',\n 'mathesar.middleware.PasswordChangeNeededMiddleware',\n 'django_userforeignkey.middleware.UserForeignKeyMiddleware',\n 'django_request_cache.middleware.RequestCacheMiddleware']\n\n\n\n Traceback (most recent call last):\n File \"/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py\", line 55, in inner\n response = get_response(request)\n File \"/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py\", line 197, in _get_response\n response = wrapped_callback(request, *callback_args, **callback_kwargs)\n File \"/usr/local/lib/python3.9/site-packages/django/views/decorators/csrf.py\", line 56, in wrapper_view\n return view_func(*args, **kwargs)\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/viewsets.py\", line 125, in view\n return self.dispatch(request, *args, **kwargs)\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 509, in dispatch\n response = self.handle_exception(exc)\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 466, in handle_exception\n response = exception_handler(exc, context)\n File \"/code/mathesar/exception_handlers.py\", line 63, in mathesar_exception_handler\n raise exc\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/views.py\", line 506, in dispatch\n response = handler(request, *args, **kwargs)\n File \"/code/mathesar/api/ui/viewsets/users.py\", line 29, in password_reset\n serializer.is_valid(raise_exception=True)\n File \"/usr/local/lib/python3.9/site-packages/rest_framework/serializers.py\", line 235, in is_valid\n raise ValidationError(self.errors)\n File \"/code/mathesar/api/exceptions/mixins.py\", line 98, in errors\n pretty_errors = self.build_pretty_errors(ugly_errors)\n File \"/code/mathesar/api/exceptions/mixins.py\", line 64, in build_pretty_errors\n pretty.extend(self.get_field_error_entries(errors[error_type], field))\n File \"/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py\", line 180, in get_field_error_entries\n return [self.get_field_error_entry(error, field) for error in errors]\n File 
\"/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py\", line 180, in <listcomp>\n return [self.get_field_error_entry(error, field) for error in errors]\n File \"/usr/local/lib/python3.9/site-packages/rest_framework_friendly_errors/mixins.py\", line 168, in get_field_error_entry\n validator = getattr(self, \"validate_%s\" % field.field_name)\n\n Exception Type: AttributeError at /api/ui/v0/users/2/password_reset/\n Exception Value: 'PasswordResetSerializer' object has no attribute 'validate_password'\n ```\n\n </details>\n\nI can reproduce this on the latest develop branch as well as the most recent release (Mathesar 0.1.6).\n\n\n", "before_files": [{"content": "from django.contrib.auth.password_validation import validate_password\nfrom rest_access_policy import FieldAccessMixin, PermittedPkRelatedField\nfrom rest_framework import serializers\n\nfrom mathesar.api.db.permissions.database import DatabaseAccessPolicy\nfrom mathesar.api.db.permissions.schema import SchemaAccessPolicy\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.api.exceptions.validation_exceptions.exceptions import IncorrectOldPassword\nfrom mathesar.api.ui.permissions.users import UserAccessPolicy\nfrom mathesar.models.base import Database, Schema\nfrom mathesar.models.users import User, DatabaseRole, SchemaRole\n\n\nclass NestedDatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = DatabaseRole\n fields = ['id', 'database', 'role']\n\n\nclass NestedSchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = SchemaRole\n fields = ['id', 'schema', 'role']\n\n\nclass UserSerializer(MathesarErrorMessageMixin, FieldAccessMixin, serializers.ModelSerializer):\n database_roles = NestedDatabaseRoleSerializer(many=True, required=False)\n schema_roles = NestedSchemaRoleSerializer(many=True, required=False)\n access_policy = UserAccessPolicy\n\n class Meta:\n model = User\n fields = [\n 'id',\n 'full_name',\n 'short_name',\n 'username',\n 'password',\n 'email',\n 'is_superuser',\n 'database_roles',\n 'schema_roles',\n 'display_language'\n ]\n extra_kwargs = {\n 'password': {'write_only': True},\n 'database_roles': {'read_only': True},\n 'schema_roles': {'read_only': True}\n }\n\n def get_fields(self):\n fields = super().get_fields()\n request = self.context.get(\"request\", None)\n if not hasattr(request, 'parser_context'):\n return fields\n kwargs = request.parser_context.get('kwargs')\n if kwargs:\n user_pk = kwargs.get('pk')\n if user_pk:\n if request.user.id == int(user_pk) or not request.user.is_superuser:\n fields[\"is_superuser\"].read_only = True\n return fields\n\n def create(self, validated_data):\n password = validated_data.pop('password')\n user = User(**validated_data)\n user.password_change_needed = True\n user.set_password(password)\n user.save()\n return user\n\n\nclass ChangePasswordSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n password = serializers.CharField(write_only=True, required=True, validators=[validate_password])\n old_password = serializers.CharField(write_only=True, required=True)\n\n def validate_old_password(self, value):\n user = self.context['request'].user\n if user.check_password(value) is True:\n return value\n raise IncorrectOldPassword(field='old_password')\n\n def update(self, instance, validated_data):\n instance.set_password(validated_data['password'])\n instance.save()\n return instance\n\n\nclass 
PasswordResetSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n password = serializers.CharField(write_only=True, required=True, validators=[validate_password])\n\n\nclass DatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = DatabaseRole\n fields = ['id', 'user', 'database', 'role']\n\n # Restrict the list of databases to which the user has access to create a database role\n # Refer https://rsinger86.github.io/drf-access-policy/policy_reuse/ for the usage of `PermittedPkRelatedField`\n database = PermittedPkRelatedField(\n access_policy=DatabaseAccessPolicy,\n queryset=Database.current_objects.all()\n )\n\n\nclass SchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = SchemaRole\n fields = ['id', 'user', 'schema', 'role']\n\n schema = PermittedPkRelatedField(\n access_policy=SchemaAccessPolicy,\n queryset=Schema.current_objects.all()\n )\n", "path": "mathesar/api/ui/serializers/users.py"}], "after_files": [{"content": "from django.contrib.auth.password_validation import validate_password\nfrom django.core.exceptions import ValidationError as DjangoValidationError\nfrom rest_access_policy import FieldAccessMixin, PermittedPkRelatedField\nfrom rest_framework import serializers\n\nfrom mathesar.api.db.permissions.database import DatabaseAccessPolicy\nfrom mathesar.api.db.permissions.schema import SchemaAccessPolicy\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.api.exceptions.validation_exceptions.exceptions import IncorrectOldPassword\nfrom mathesar.api.ui.permissions.users import UserAccessPolicy\nfrom mathesar.models.base import Database, Schema\nfrom mathesar.models.users import User, DatabaseRole, SchemaRole\n\n\nclass NestedDatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = DatabaseRole\n fields = ['id', 'database', 'role']\n\n\nclass NestedSchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = SchemaRole\n fields = ['id', 'schema', 'role']\n\n\nclass UserSerializer(MathesarErrorMessageMixin, FieldAccessMixin, serializers.ModelSerializer):\n database_roles = NestedDatabaseRoleSerializer(many=True, required=False)\n schema_roles = NestedSchemaRoleSerializer(many=True, required=False)\n access_policy = UserAccessPolicy\n\n class Meta:\n model = User\n fields = [\n 'id',\n 'full_name',\n 'short_name',\n 'username',\n 'password',\n 'email',\n 'is_superuser',\n 'database_roles',\n 'schema_roles',\n 'display_language'\n ]\n extra_kwargs = {\n 'password': {'write_only': True},\n 'database_roles': {'read_only': True},\n 'schema_roles': {'read_only': True}\n }\n\n def get_fields(self):\n fields = super().get_fields()\n request = self.context.get(\"request\", None)\n if not hasattr(request, 'parser_context'):\n return fields\n kwargs = request.parser_context.get('kwargs')\n if kwargs:\n user_pk = kwargs.get('pk')\n if user_pk:\n if request.user.id == int(user_pk) or not request.user.is_superuser:\n fields[\"is_superuser\"].read_only = True\n return fields\n\n def create(self, validated_data):\n password = validated_data.pop('password')\n user = User(**validated_data)\n user.password_change_needed = True\n user.set_password(password)\n user.save()\n return user\n\n\nclass ChangePasswordSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n password = serializers.CharField(write_only=True, required=True)\n old_password = 
serializers.CharField(write_only=True, required=True)\n\n def validate_old_password(self, value):\n user = self.context['request'].user\n if user.check_password(value) is True:\n return value\n raise IncorrectOldPassword(field='old_password')\n\n def validate_password(self, value):\n try:\n validate_password(value)\n except DjangoValidationError as e:\n raise e\n return value\n\n def update(self, instance, validated_data):\n instance.set_password(validated_data['password'])\n instance.save()\n return instance\n\n\nclass PasswordResetSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n password = serializers.CharField(write_only=True, required=True)\n\n\nclass DatabaseRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = DatabaseRole\n fields = ['id', 'user', 'database', 'role']\n\n # Restrict the list of databases to which the user has access to create a database role\n # Refer https://rsinger86.github.io/drf-access-policy/policy_reuse/ for the usage of `PermittedPkRelatedField`\n database = PermittedPkRelatedField(\n access_policy=DatabaseAccessPolicy,\n queryset=Database.current_objects.all()\n )\n\n\nclass SchemaRoleSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = SchemaRole\n fields = ['id', 'user', 'schema', 'role']\n\n schema = PermittedPkRelatedField(\n access_policy=SchemaAccessPolicy,\n queryset=Schema.current_objects.all()\n )\n", "path": "mathesar/api/ui/serializers/users.py"}]} | 2,445 | 338 |
gh_patches_debug_16329 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3934 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't submit new team after changing image.
Can't believe this didn't come up yet. I noticed this while exploring [create.json.spt](https://github.com/gratipay/gratipay.com/blob/master/www/teams/create.json.spt) which inspires the new [edit.json.spt](https://github.com/gratipay/gratipay.com/pull/3923/files#diff-6).
The way it is written right now, we first write the team details to the db (with a unique generated `slug`) and _then_ try to save the team image. If a user uploads an image of size > 1Mb or an image which is not a jpg or png, the team creation won't be successful as far as the user is concerned and he'll resubmit the team application form with an appropriate image. But when he does again, we would have already created a slug for that team name resulting in a misleading message of `Sorry, there is already a team using <slug>.` when in fact the `slug` was created because we wrote the team details to the db first.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/utils/images.py`
Content:
```
1 import zipfile
2 from cStringIO import StringIO
3
4 import requests
5
6 def imgize(image, image_type):
7 large = None
8 small = None
9 crops = requests.post( 'http://gip.rocks/v1',
10 data=image,
11 headers={'Content-Type': image_type}
12 )
13 if crops.status_code == 200:
14 zf = zipfile.ZipFile(StringIO(crops.content))
15 large = zf.open('160').read()
16 small = zf.open('48').read()
17
18 return crops.status_code, large, small
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gratipay/utils/images.py b/gratipay/utils/images.py
--- a/gratipay/utils/images.py
+++ b/gratipay/utils/images.py
@@ -8,11 +8,22 @@
small = None
crops = requests.post( 'http://gip.rocks/v1',
data=image,
- headers={'Content-Type': image_type}
- )
+ headers={'Content-Type': image_type})
+
if crops.status_code == 200:
zf = zipfile.ZipFile(StringIO(crops.content))
large = zf.open('160').read()
small = zf.open('48').read()
+ return large, small
+ elif crops.status_code == 413:
+ raise ImageTooLarge
+ elif crops.status_code == 415:
+ raise InvalidImageType
+ else:
+ raise UnknownImageError
+
+class ImageTooLarge(Exception): pass
+
+class InvalidImageType(Exception): pass
- return crops.status_code, large, small
\ No newline at end of file
+class UnknownImageError(Exception): pass
| {"golden_diff": "diff --git a/gratipay/utils/images.py b/gratipay/utils/images.py\n--- a/gratipay/utils/images.py\n+++ b/gratipay/utils/images.py\n@@ -8,11 +8,22 @@\n small = None\n crops = requests.post( 'http://gip.rocks/v1',\n data=image,\n- headers={'Content-Type': image_type}\n- )\n+ headers={'Content-Type': image_type})\n+\n if crops.status_code == 200:\n zf = zipfile.ZipFile(StringIO(crops.content))\n large = zf.open('160').read()\n small = zf.open('48').read()\n+ return large, small\n+ elif crops.status_code == 413:\n+ raise ImageTooLarge\n+ elif crops.status_code == 415:\n+ raise InvalidImageType\n+ else:\n+ raise UnknownImageError\n+\n+class ImageTooLarge(Exception): pass\n+\n+class InvalidImageType(Exception): pass\n \n- return crops.status_code, large, small\n\\ No newline at end of file\n+class UnknownImageError(Exception): pass\n", "issue": "Can't submit new team after changing image.\nCan't believe this didn't come up yet. I noticed this while exploring [create.json.spt](https://github.com/gratipay/gratipay.com/blob/master/www/teams/create.json.spt) which inspires the new [edit.json.spt](https://github.com/gratipay/gratipay.com/pull/3923/files#diff-6). \n\nThe way it is written right now, we first write the team details to the db (with a unique generated `slug`) and _then_ try to save the team image. If a user uploads an image of size > 1Mb or an image which is not a jpg or png, the team creation won't be successful as far as the user is concerned and he'll resubmit the team application form with an appropriate image. But when he does again, we would have already created a slug for that team name resulting in a misleading message of `Sorry, there is already a team using <slug>.` when in fact the `slug` was created because we wrote the team details to the db first.\n\n", "before_files": [{"content": "import zipfile\nfrom cStringIO import StringIO\n\nimport requests\n\ndef imgize(image, image_type):\n large = None\n small = None\n crops = requests.post( 'http://gip.rocks/v1',\n data=image,\n headers={'Content-Type': image_type}\n )\n if crops.status_code == 200:\n zf = zipfile.ZipFile(StringIO(crops.content))\n large = zf.open('160').read()\n small = zf.open('48').read()\n\n return crops.status_code, large, small", "path": "gratipay/utils/images.py"}], "after_files": [{"content": "import zipfile\nfrom cStringIO import StringIO\n\nimport requests\n\ndef imgize(image, image_type):\n large = None\n small = None\n crops = requests.post( 'http://gip.rocks/v1',\n data=image,\n headers={'Content-Type': image_type})\n\n if crops.status_code == 200:\n zf = zipfile.ZipFile(StringIO(crops.content))\n large = zf.open('160').read()\n small = zf.open('48').read()\n return large, small\n elif crops.status_code == 413:\n raise ImageTooLarge\n elif crops.status_code == 415:\n raise InvalidImageType\n else:\n raise UnknownImageError\n\nclass ImageTooLarge(Exception): pass\n\nclass InvalidImageType(Exception): pass\n\nclass UnknownImageError(Exception): pass\n", "path": "gratipay/utils/images.py"}]} | 638 | 247 |
gh_patches_debug_24991 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2167 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Follow Request bug: looks as if you already follow a person if you go to profile of requester
When an account with moderated follows has a follow request incoming the following happens:
You see a notification about someone wanting to follow you. You go to their profile to see what they are like and... it looks like you already follow this person? Because the "Follow" button erroneously reads "Unfollow".
Additionally, there is no other indication of a pending follow request while on their profile.
**To Reproduce**
1. Set your account to prompt follow requests
2. Get a follow request
3. Go to the profile of the requester
4. The follow button now reads "Unfollow", implying you follow them already (you don't)
As seen on bookwyrm.social
(anyway thanks for this excellent network of book nerds mouse!!)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/models/relationship.py`
Content:
```
1 """ defines relationships between users """
2 from django.apps import apps
3 from django.core.cache import cache
4 from django.db import models, transaction, IntegrityError
5 from django.db.models import Q
6
7 from bookwyrm import activitypub
8 from .activitypub_mixin import ActivitypubMixin, ActivityMixin
9 from .activitypub_mixin import generate_activity
10 from .base_model import BookWyrmModel
11 from . import fields
12
13
14 class UserRelationship(BookWyrmModel):
15 """many-to-many through table for followers"""
16
17 user_subject = fields.ForeignKey(
18 "User",
19 on_delete=models.PROTECT,
20 related_name="%(class)s_user_subject",
21 activitypub_field="actor",
22 )
23 user_object = fields.ForeignKey(
24 "User",
25 on_delete=models.PROTECT,
26 related_name="%(class)s_user_object",
27 activitypub_field="object",
28 )
29
30 @property
31 def privacy(self):
32 """all relationships are handled directly with the participants"""
33 return "direct"
34
35 @property
36 def recipients(self):
37 """the remote user needs to recieve direct broadcasts"""
38 return [u for u in [self.user_subject, self.user_object] if not u.local]
39
40 def save(self, *args, **kwargs):
41 """clear the template cache"""
42 clear_cache(self.user_subject, self.user_object)
43 super().save(*args, **kwargs)
44
45 def delete(self, *args, **kwargs):
46 """clear the template cache"""
47 clear_cache(self.user_subject, self.user_object)
48 super().delete(*args, **kwargs)
49
50 class Meta:
51 """relationships should be unique"""
52
53 abstract = True
54 constraints = [
55 models.UniqueConstraint(
56 fields=["user_subject", "user_object"], name="%(class)s_unique"
57 ),
58 models.CheckConstraint(
59 check=~models.Q(user_subject=models.F("user_object")),
60 name="%(class)s_no_self",
61 ),
62 ]
63
64 def get_remote_id(self):
65 """use shelf identifier in remote_id"""
66 base_path = self.user_subject.remote_id
67 return f"{base_path}#follows/{self.id}"
68
69
70 class UserFollows(ActivityMixin, UserRelationship):
71 """Following a user"""
72
73 status = "follows"
74
75 def to_activity(self): # pylint: disable=arguments-differ
76 """overrides default to manually set serializer"""
77 return activitypub.Follow(**generate_activity(self))
78
79 def save(self, *args, **kwargs):
80 """really really don't let a user follow someone who blocked them"""
81 # blocking in either direction is a no-go
82 if UserBlocks.objects.filter(
83 Q(
84 user_subject=self.user_subject,
85 user_object=self.user_object,
86 )
87 | Q(
88 user_subject=self.user_object,
89 user_object=self.user_subject,
90 )
91 ).exists():
92 raise IntegrityError(
93 "Attempting to follow blocked user", self.user_subject, self.user_object
94 )
95 # don't broadcast this type of relationship -- accepts and requests
96 # are handled by the UserFollowRequest model
97 super().save(*args, broadcast=False, **kwargs)
98
99 @classmethod
100 def from_request(cls, follow_request):
101 """converts a follow request into a follow relationship"""
102 obj, _ = cls.objects.get_or_create(
103 user_subject=follow_request.user_subject,
104 user_object=follow_request.user_object,
105 remote_id=follow_request.remote_id,
106 )
107 return obj
108
109
110 class UserFollowRequest(ActivitypubMixin, UserRelationship):
111 """following a user requires manual or automatic confirmation"""
112
113 status = "follow_request"
114 activity_serializer = activitypub.Follow
115
116 def save(self, *args, broadcast=True, **kwargs): # pylint: disable=arguments-differ
117 """make sure the follow or block relationship doesn't already exist"""
118 # if there's a request for a follow that already exists, accept it
119 # without changing the local database state
120 if UserFollows.objects.filter(
121 user_subject=self.user_subject,
122 user_object=self.user_object,
123 ).exists():
124 self.accept(broadcast_only=True)
125 return
126
127 # blocking in either direction is a no-go
128 if UserBlocks.objects.filter(
129 Q(
130 user_subject=self.user_subject,
131 user_object=self.user_object,
132 )
133 | Q(
134 user_subject=self.user_object,
135 user_object=self.user_subject,
136 )
137 ).exists():
138 raise IntegrityError(
139 "Attempting to follow blocked user", self.user_subject, self.user_object
140 )
141 super().save(*args, **kwargs)
142
143 if broadcast and self.user_subject.local and not self.user_object.local:
144 self.broadcast(self.to_activity(), self.user_subject)
145
146 if self.user_object.local:
147 manually_approves = self.user_object.manually_approves_followers
148 if not manually_approves:
149 self.accept()
150
151 model = apps.get_model("bookwyrm.Notification", require_ready=True)
152 notification_type = "FOLLOW_REQUEST" if manually_approves else "FOLLOW"
153 model.objects.create(
154 user=self.user_object,
155 related_user=self.user_subject,
156 notification_type=notification_type,
157 )
158
159 def get_accept_reject_id(self, status):
160 """get id for sending an accept or reject of a local user"""
161
162 base_path = self.user_object.remote_id
163 status_id = self.id or 0
164 return f"{base_path}#{status}/{status_id}"
165
166 def accept(self, broadcast_only=False):
167 """turn this request into the real deal"""
168 user = self.user_object
169 if not self.user_subject.local:
170 activity = activitypub.Accept(
171 id=self.get_accept_reject_id(status="accepts"),
172 actor=self.user_object.remote_id,
173 object=self.to_activity(),
174 ).serialize()
175 self.broadcast(activity, user)
176 if broadcast_only:
177 return
178
179 with transaction.atomic():
180 UserFollows.from_request(self)
181 if self.id:
182 self.delete()
183
184 def reject(self):
185 """generate a Reject for this follow request"""
186 if self.user_object.local:
187 activity = activitypub.Reject(
188 id=self.get_accept_reject_id(status="rejects"),
189 actor=self.user_object.remote_id,
190 object=self.to_activity(),
191 ).serialize()
192 self.broadcast(activity, self.user_object)
193
194 self.delete()
195
196
197 class UserBlocks(ActivityMixin, UserRelationship):
198 """prevent another user from following you and seeing your posts"""
199
200 status = "blocks"
201 activity_serializer = activitypub.Block
202
203 def save(self, *args, **kwargs):
204 """remove follow or follow request rels after a block is created"""
205 super().save(*args, **kwargs)
206
207 UserFollows.objects.filter(
208 Q(user_subject=self.user_subject, user_object=self.user_object)
209 | Q(user_subject=self.user_object, user_object=self.user_subject)
210 ).delete()
211 UserFollowRequest.objects.filter(
212 Q(user_subject=self.user_subject, user_object=self.user_object)
213 | Q(user_subject=self.user_object, user_object=self.user_subject)
214 ).delete()
215
216
217 def clear_cache(user_subject, user_object):
218 """clear relationship cache"""
219 cache.delete_many(
220 [
221 f"relationship-{user_subject.id}-{user_object.id}",
222 f"relationship-{user_object.id}-{user_subject.id}",
223 ]
224 )
225
```
Path: `bookwyrm/templatetags/interaction.py`
Content:
```
1 """ template filters for status interaction buttons """
2 from django import template
3
4 from bookwyrm import models
5 from bookwyrm.utils.cache import get_or_set
6
7
8 register = template.Library()
9
10
11 @register.filter(name="liked")
12 def get_user_liked(user, status):
13 """did the given user fav a status?"""
14 return get_or_set(
15 f"fav-{user.id}-{status.id}",
16 lambda u, s: models.Favorite.objects.filter(user=u, status=s).exists(),
17 user,
18 status,
19 timeout=259200,
20 )
21
22
23 @register.filter(name="boosted")
24 def get_user_boosted(user, status):
25 """did the given user fav a status?"""
26 return get_or_set(
27 f"boost-{user.id}-{status.id}",
28 lambda u: status.boosters.filter(user=u).exists(),
29 user,
30 timeout=259200,
31 )
32
33
34 @register.filter(name="saved")
35 def get_user_saved_lists(user, book_list):
36 """did the user save a list"""
37 return user.saved_lists.filter(id=book_list.id).exists()
38
39
40 @register.simple_tag(takes_context=True)
41 def get_relationship(context, user_object):
42 """caches the relationship between the logged in user and another user"""
43 user = context["request"].user
44 return get_or_set(
45 f"relationship-{user.id}-{user_object.id}",
46 get_relationship_name,
47 user,
48 user_object,
49 timeout=259200,
50 )
51
52
53 def get_relationship_name(user, user_object):
54 """returns the relationship type"""
55 types = {
56 "is_following": False,
57 "is_follow_pending": False,
58 "is_blocked": False,
59 }
60 if user_object in user.blocks.all():
61 types["is_blocked"] = True
62 elif user_object in user.following.all():
63 types["is_following"] = True
64 elif user_object in user.follower_requests.all():
65 types["is_follow_pending"] = True
66 return types
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/models/relationship.py b/bookwyrm/models/relationship.py
--- a/bookwyrm/models/relationship.py
+++ b/bookwyrm/models/relationship.py
@@ -218,7 +218,7 @@
"""clear relationship cache"""
cache.delete_many(
[
- f"relationship-{user_subject.id}-{user_object.id}",
- f"relationship-{user_object.id}-{user_subject.id}",
+ f"cached-relationship-{user_subject.id}-{user_object.id}",
+ f"cached-relationship-{user_object.id}-{user_subject.id}",
]
)
diff --git a/bookwyrm/templatetags/interaction.py b/bookwyrm/templatetags/interaction.py
--- a/bookwyrm/templatetags/interaction.py
+++ b/bookwyrm/templatetags/interaction.py
@@ -42,7 +42,7 @@
"""caches the relationship between the logged in user and another user"""
user = context["request"].user
return get_or_set(
- f"relationship-{user.id}-{user_object.id}",
+ f"cached-relationship-{user.id}-{user_object.id}",
get_relationship_name,
user,
user_object,
@@ -61,6 +61,6 @@
types["is_blocked"] = True
elif user_object in user.following.all():
types["is_following"] = True
- elif user_object in user.follower_requests.all():
+ elif user in user_object.follower_requests.all():
types["is_follow_pending"] = True
return types
| {"golden_diff": "diff --git a/bookwyrm/models/relationship.py b/bookwyrm/models/relationship.py\n--- a/bookwyrm/models/relationship.py\n+++ b/bookwyrm/models/relationship.py\n@@ -218,7 +218,7 @@\n \"\"\"clear relationship cache\"\"\"\n cache.delete_many(\n [\n- f\"relationship-{user_subject.id}-{user_object.id}\",\n- f\"relationship-{user_object.id}-{user_subject.id}\",\n+ f\"cached-relationship-{user_subject.id}-{user_object.id}\",\n+ f\"cached-relationship-{user_object.id}-{user_subject.id}\",\n ]\n )\ndiff --git a/bookwyrm/templatetags/interaction.py b/bookwyrm/templatetags/interaction.py\n--- a/bookwyrm/templatetags/interaction.py\n+++ b/bookwyrm/templatetags/interaction.py\n@@ -42,7 +42,7 @@\n \"\"\"caches the relationship between the logged in user and another user\"\"\"\n user = context[\"request\"].user\n return get_or_set(\n- f\"relationship-{user.id}-{user_object.id}\",\n+ f\"cached-relationship-{user.id}-{user_object.id}\",\n get_relationship_name,\n user,\n user_object,\n@@ -61,6 +61,6 @@\n types[\"is_blocked\"] = True\n elif user_object in user.following.all():\n types[\"is_following\"] = True\n- elif user_object in user.follower_requests.all():\n+ elif user in user_object.follower_requests.all():\n types[\"is_follow_pending\"] = True\n return types\n", "issue": "Follow Request bug: looks as if you already follow a person if you go to profile of requester\nWhen an account with moderated follows has a follow request incoming the following happens:\r\n\r\nYou see a notification about someone wanting to follow you. You go to their profile to see what they are like and... it looks like you already follow this person? Because the \"Follow\" button erroneously reads \"Unfollow\". \r\n\r\nAdditionally, there is no other indication of a pending follow request while on their profile.\r\n\r\n**To Reproduce**\r\n\r\n1. Set your account to prompt follow requests\r\n2. Get a follow request\r\n3. Go to the profile of the requester\r\n4. The follow button now reads \"Unfollow\", implying you follow them already (you don't)\r\n\r\nAs seen on bookwyrm.social\r\n\r\n(anyway thanks for this excellent network of book nerds mouse!!)\r\n\n", "before_files": [{"content": "\"\"\" defines relationships between users \"\"\"\nfrom django.apps import apps\nfrom django.core.cache import cache\nfrom django.db import models, transaction, IntegrityError\nfrom django.db.models import Q\n\nfrom bookwyrm import activitypub\nfrom .activitypub_mixin import ActivitypubMixin, ActivityMixin\nfrom .activitypub_mixin import generate_activity\nfrom .base_model import BookWyrmModel\nfrom . 
import fields\n\n\nclass UserRelationship(BookWyrmModel):\n \"\"\"many-to-many through table for followers\"\"\"\n\n user_subject = fields.ForeignKey(\n \"User\",\n on_delete=models.PROTECT,\n related_name=\"%(class)s_user_subject\",\n activitypub_field=\"actor\",\n )\n user_object = fields.ForeignKey(\n \"User\",\n on_delete=models.PROTECT,\n related_name=\"%(class)s_user_object\",\n activitypub_field=\"object\",\n )\n\n @property\n def privacy(self):\n \"\"\"all relationships are handled directly with the participants\"\"\"\n return \"direct\"\n\n @property\n def recipients(self):\n \"\"\"the remote user needs to recieve direct broadcasts\"\"\"\n return [u for u in [self.user_subject, self.user_object] if not u.local]\n\n def save(self, *args, **kwargs):\n \"\"\"clear the template cache\"\"\"\n clear_cache(self.user_subject, self.user_object)\n super().save(*args, **kwargs)\n\n def delete(self, *args, **kwargs):\n \"\"\"clear the template cache\"\"\"\n clear_cache(self.user_subject, self.user_object)\n super().delete(*args, **kwargs)\n\n class Meta:\n \"\"\"relationships should be unique\"\"\"\n\n abstract = True\n constraints = [\n models.UniqueConstraint(\n fields=[\"user_subject\", \"user_object\"], name=\"%(class)s_unique\"\n ),\n models.CheckConstraint(\n check=~models.Q(user_subject=models.F(\"user_object\")),\n name=\"%(class)s_no_self\",\n ),\n ]\n\n def get_remote_id(self):\n \"\"\"use shelf identifier in remote_id\"\"\"\n base_path = self.user_subject.remote_id\n return f\"{base_path}#follows/{self.id}\"\n\n\nclass UserFollows(ActivityMixin, UserRelationship):\n \"\"\"Following a user\"\"\"\n\n status = \"follows\"\n\n def to_activity(self): # pylint: disable=arguments-differ\n \"\"\"overrides default to manually set serializer\"\"\"\n return activitypub.Follow(**generate_activity(self))\n\n def save(self, *args, **kwargs):\n \"\"\"really really don't let a user follow someone who blocked them\"\"\"\n # blocking in either direction is a no-go\n if UserBlocks.objects.filter(\n Q(\n user_subject=self.user_subject,\n user_object=self.user_object,\n )\n | Q(\n user_subject=self.user_object,\n user_object=self.user_subject,\n )\n ).exists():\n raise IntegrityError(\n \"Attempting to follow blocked user\", self.user_subject, self.user_object\n )\n # don't broadcast this type of relationship -- accepts and requests\n # are handled by the UserFollowRequest model\n super().save(*args, broadcast=False, **kwargs)\n\n @classmethod\n def from_request(cls, follow_request):\n \"\"\"converts a follow request into a follow relationship\"\"\"\n obj, _ = cls.objects.get_or_create(\n user_subject=follow_request.user_subject,\n user_object=follow_request.user_object,\n remote_id=follow_request.remote_id,\n )\n return obj\n\n\nclass UserFollowRequest(ActivitypubMixin, UserRelationship):\n \"\"\"following a user requires manual or automatic confirmation\"\"\"\n\n status = \"follow_request\"\n activity_serializer = activitypub.Follow\n\n def save(self, *args, broadcast=True, **kwargs): # pylint: disable=arguments-differ\n \"\"\"make sure the follow or block relationship doesn't already exist\"\"\"\n # if there's a request for a follow that already exists, accept it\n # without changing the local database state\n if UserFollows.objects.filter(\n user_subject=self.user_subject,\n user_object=self.user_object,\n ).exists():\n self.accept(broadcast_only=True)\n return\n\n # blocking in either direction is a no-go\n if UserBlocks.objects.filter(\n Q(\n user_subject=self.user_subject,\n 
user_object=self.user_object,\n )\n | Q(\n user_subject=self.user_object,\n user_object=self.user_subject,\n )\n ).exists():\n raise IntegrityError(\n \"Attempting to follow blocked user\", self.user_subject, self.user_object\n )\n super().save(*args, **kwargs)\n\n if broadcast and self.user_subject.local and not self.user_object.local:\n self.broadcast(self.to_activity(), self.user_subject)\n\n if self.user_object.local:\n manually_approves = self.user_object.manually_approves_followers\n if not manually_approves:\n self.accept()\n\n model = apps.get_model(\"bookwyrm.Notification\", require_ready=True)\n notification_type = \"FOLLOW_REQUEST\" if manually_approves else \"FOLLOW\"\n model.objects.create(\n user=self.user_object,\n related_user=self.user_subject,\n notification_type=notification_type,\n )\n\n def get_accept_reject_id(self, status):\n \"\"\"get id for sending an accept or reject of a local user\"\"\"\n\n base_path = self.user_object.remote_id\n status_id = self.id or 0\n return f\"{base_path}#{status}/{status_id}\"\n\n def accept(self, broadcast_only=False):\n \"\"\"turn this request into the real deal\"\"\"\n user = self.user_object\n if not self.user_subject.local:\n activity = activitypub.Accept(\n id=self.get_accept_reject_id(status=\"accepts\"),\n actor=self.user_object.remote_id,\n object=self.to_activity(),\n ).serialize()\n self.broadcast(activity, user)\n if broadcast_only:\n return\n\n with transaction.atomic():\n UserFollows.from_request(self)\n if self.id:\n self.delete()\n\n def reject(self):\n \"\"\"generate a Reject for this follow request\"\"\"\n if self.user_object.local:\n activity = activitypub.Reject(\n id=self.get_accept_reject_id(status=\"rejects\"),\n actor=self.user_object.remote_id,\n object=self.to_activity(),\n ).serialize()\n self.broadcast(activity, self.user_object)\n\n self.delete()\n\n\nclass UserBlocks(ActivityMixin, UserRelationship):\n \"\"\"prevent another user from following you and seeing your posts\"\"\"\n\n status = \"blocks\"\n activity_serializer = activitypub.Block\n\n def save(self, *args, **kwargs):\n \"\"\"remove follow or follow request rels after a block is created\"\"\"\n super().save(*args, **kwargs)\n\n UserFollows.objects.filter(\n Q(user_subject=self.user_subject, user_object=self.user_object)\n | Q(user_subject=self.user_object, user_object=self.user_subject)\n ).delete()\n UserFollowRequest.objects.filter(\n Q(user_subject=self.user_subject, user_object=self.user_object)\n | Q(user_subject=self.user_object, user_object=self.user_subject)\n ).delete()\n\n\ndef clear_cache(user_subject, user_object):\n \"\"\"clear relationship cache\"\"\"\n cache.delete_many(\n [\n f\"relationship-{user_subject.id}-{user_object.id}\",\n f\"relationship-{user_object.id}-{user_subject.id}\",\n ]\n )\n", "path": "bookwyrm/models/relationship.py"}, {"content": "\"\"\" template filters for status interaction buttons \"\"\"\nfrom django import template\n\nfrom bookwyrm import models\nfrom bookwyrm.utils.cache import get_or_set\n\n\nregister = template.Library()\n\n\[email protected](name=\"liked\")\ndef get_user_liked(user, status):\n \"\"\"did the given user fav a status?\"\"\"\n return get_or_set(\n f\"fav-{user.id}-{status.id}\",\n lambda u, s: models.Favorite.objects.filter(user=u, status=s).exists(),\n user,\n status,\n timeout=259200,\n )\n\n\[email protected](name=\"boosted\")\ndef get_user_boosted(user, status):\n \"\"\"did the given user fav a status?\"\"\"\n return get_or_set(\n f\"boost-{user.id}-{status.id}\",\n lambda u: 
status.boosters.filter(user=u).exists(),\n user,\n timeout=259200,\n )\n\n\[email protected](name=\"saved\")\ndef get_user_saved_lists(user, book_list):\n \"\"\"did the user save a list\"\"\"\n return user.saved_lists.filter(id=book_list.id).exists()\n\n\[email protected]_tag(takes_context=True)\ndef get_relationship(context, user_object):\n \"\"\"caches the relationship between the logged in user and another user\"\"\"\n user = context[\"request\"].user\n return get_or_set(\n f\"relationship-{user.id}-{user_object.id}\",\n get_relationship_name,\n user,\n user_object,\n timeout=259200,\n )\n\n\ndef get_relationship_name(user, user_object):\n \"\"\"returns the relationship type\"\"\"\n types = {\n \"is_following\": False,\n \"is_follow_pending\": False,\n \"is_blocked\": False,\n }\n if user_object in user.blocks.all():\n types[\"is_blocked\"] = True\n elif user_object in user.following.all():\n types[\"is_following\"] = True\n elif user_object in user.follower_requests.all():\n types[\"is_follow_pending\"] = True\n return types\n", "path": "bookwyrm/templatetags/interaction.py"}], "after_files": [{"content": "\"\"\" defines relationships between users \"\"\"\nfrom django.apps import apps\nfrom django.core.cache import cache\nfrom django.db import models, transaction, IntegrityError\nfrom django.db.models import Q\n\nfrom bookwyrm import activitypub\nfrom .activitypub_mixin import ActivitypubMixin, ActivityMixin\nfrom .activitypub_mixin import generate_activity\nfrom .base_model import BookWyrmModel\nfrom . import fields\n\n\nclass UserRelationship(BookWyrmModel):\n \"\"\"many-to-many through table for followers\"\"\"\n\n user_subject = fields.ForeignKey(\n \"User\",\n on_delete=models.PROTECT,\n related_name=\"%(class)s_user_subject\",\n activitypub_field=\"actor\",\n )\n user_object = fields.ForeignKey(\n \"User\",\n on_delete=models.PROTECT,\n related_name=\"%(class)s_user_object\",\n activitypub_field=\"object\",\n )\n\n @property\n def privacy(self):\n \"\"\"all relationships are handled directly with the participants\"\"\"\n return \"direct\"\n\n @property\n def recipients(self):\n \"\"\"the remote user needs to recieve direct broadcasts\"\"\"\n return [u for u in [self.user_subject, self.user_object] if not u.local]\n\n def save(self, *args, **kwargs):\n \"\"\"clear the template cache\"\"\"\n clear_cache(self.user_subject, self.user_object)\n super().save(*args, **kwargs)\n\n def delete(self, *args, **kwargs):\n \"\"\"clear the template cache\"\"\"\n clear_cache(self.user_subject, self.user_object)\n super().delete(*args, **kwargs)\n\n class Meta:\n \"\"\"relationships should be unique\"\"\"\n\n abstract = True\n constraints = [\n models.UniqueConstraint(\n fields=[\"user_subject\", \"user_object\"], name=\"%(class)s_unique\"\n ),\n models.CheckConstraint(\n check=~models.Q(user_subject=models.F(\"user_object\")),\n name=\"%(class)s_no_self\",\n ),\n ]\n\n def get_remote_id(self):\n \"\"\"use shelf identifier in remote_id\"\"\"\n base_path = self.user_subject.remote_id\n return f\"{base_path}#follows/{self.id}\"\n\n\nclass UserFollows(ActivityMixin, UserRelationship):\n \"\"\"Following a user\"\"\"\n\n status = \"follows\"\n\n def to_activity(self): # pylint: disable=arguments-differ\n \"\"\"overrides default to manually set serializer\"\"\"\n return activitypub.Follow(**generate_activity(self))\n\n def save(self, *args, **kwargs):\n \"\"\"really really don't let a user follow someone who blocked them\"\"\"\n # blocking in either direction is a no-go\n if 
UserBlocks.objects.filter(\n Q(\n user_subject=self.user_subject,\n user_object=self.user_object,\n )\n | Q(\n user_subject=self.user_object,\n user_object=self.user_subject,\n )\n ).exists():\n raise IntegrityError(\n \"Attempting to follow blocked user\", self.user_subject, self.user_object\n )\n # don't broadcast this type of relationship -- accepts and requests\n # are handled by the UserFollowRequest model\n super().save(*args, broadcast=False, **kwargs)\n\n @classmethod\n def from_request(cls, follow_request):\n \"\"\"converts a follow request into a follow relationship\"\"\"\n obj, _ = cls.objects.get_or_create(\n user_subject=follow_request.user_subject,\n user_object=follow_request.user_object,\n remote_id=follow_request.remote_id,\n )\n return obj\n\n\nclass UserFollowRequest(ActivitypubMixin, UserRelationship):\n \"\"\"following a user requires manual or automatic confirmation\"\"\"\n\n status = \"follow_request\"\n activity_serializer = activitypub.Follow\n\n def save(self, *args, broadcast=True, **kwargs): # pylint: disable=arguments-differ\n \"\"\"make sure the follow or block relationship doesn't already exist\"\"\"\n # if there's a request for a follow that already exists, accept it\n # without changing the local database state\n if UserFollows.objects.filter(\n user_subject=self.user_subject,\n user_object=self.user_object,\n ).exists():\n self.accept(broadcast_only=True)\n return\n\n # blocking in either direction is a no-go\n if UserBlocks.objects.filter(\n Q(\n user_subject=self.user_subject,\n user_object=self.user_object,\n )\n | Q(\n user_subject=self.user_object,\n user_object=self.user_subject,\n )\n ).exists():\n raise IntegrityError(\n \"Attempting to follow blocked user\", self.user_subject, self.user_object\n )\n super().save(*args, **kwargs)\n\n if broadcast and self.user_subject.local and not self.user_object.local:\n self.broadcast(self.to_activity(), self.user_subject)\n\n if self.user_object.local:\n manually_approves = self.user_object.manually_approves_followers\n if not manually_approves:\n self.accept()\n\n model = apps.get_model(\"bookwyrm.Notification\", require_ready=True)\n notification_type = \"FOLLOW_REQUEST\" if manually_approves else \"FOLLOW\"\n model.objects.create(\n user=self.user_object,\n related_user=self.user_subject,\n notification_type=notification_type,\n )\n\n def get_accept_reject_id(self, status):\n \"\"\"get id for sending an accept or reject of a local user\"\"\"\n\n base_path = self.user_object.remote_id\n status_id = self.id or 0\n return f\"{base_path}#{status}/{status_id}\"\n\n def accept(self, broadcast_only=False):\n \"\"\"turn this request into the real deal\"\"\"\n user = self.user_object\n if not self.user_subject.local:\n activity = activitypub.Accept(\n id=self.get_accept_reject_id(status=\"accepts\"),\n actor=self.user_object.remote_id,\n object=self.to_activity(),\n ).serialize()\n self.broadcast(activity, user)\n if broadcast_only:\n return\n\n with transaction.atomic():\n UserFollows.from_request(self)\n if self.id:\n self.delete()\n\n def reject(self):\n \"\"\"generate a Reject for this follow request\"\"\"\n if self.user_object.local:\n activity = activitypub.Reject(\n id=self.get_accept_reject_id(status=\"rejects\"),\n actor=self.user_object.remote_id,\n object=self.to_activity(),\n ).serialize()\n self.broadcast(activity, self.user_object)\n\n self.delete()\n\n\nclass UserBlocks(ActivityMixin, UserRelationship):\n \"\"\"prevent another user from following you and seeing your posts\"\"\"\n\n status = 
\"blocks\"\n activity_serializer = activitypub.Block\n\n def save(self, *args, **kwargs):\n \"\"\"remove follow or follow request rels after a block is created\"\"\"\n super().save(*args, **kwargs)\n\n UserFollows.objects.filter(\n Q(user_subject=self.user_subject, user_object=self.user_object)\n | Q(user_subject=self.user_object, user_object=self.user_subject)\n ).delete()\n UserFollowRequest.objects.filter(\n Q(user_subject=self.user_subject, user_object=self.user_object)\n | Q(user_subject=self.user_object, user_object=self.user_subject)\n ).delete()\n\n\ndef clear_cache(user_subject, user_object):\n \"\"\"clear relationship cache\"\"\"\n cache.delete_many(\n [\n f\"cached-relationship-{user_subject.id}-{user_object.id}\",\n f\"cached-relationship-{user_object.id}-{user_subject.id}\",\n ]\n )\n", "path": "bookwyrm/models/relationship.py"}, {"content": "\"\"\" template filters for status interaction buttons \"\"\"\nfrom django import template\n\nfrom bookwyrm import models\nfrom bookwyrm.utils.cache import get_or_set\n\n\nregister = template.Library()\n\n\[email protected](name=\"liked\")\ndef get_user_liked(user, status):\n \"\"\"did the given user fav a status?\"\"\"\n return get_or_set(\n f\"fav-{user.id}-{status.id}\",\n lambda u, s: models.Favorite.objects.filter(user=u, status=s).exists(),\n user,\n status,\n timeout=259200,\n )\n\n\[email protected](name=\"boosted\")\ndef get_user_boosted(user, status):\n \"\"\"did the given user fav a status?\"\"\"\n return get_or_set(\n f\"boost-{user.id}-{status.id}\",\n lambda u: status.boosters.filter(user=u).exists(),\n user,\n timeout=259200,\n )\n\n\[email protected](name=\"saved\")\ndef get_user_saved_lists(user, book_list):\n \"\"\"did the user save a list\"\"\"\n return user.saved_lists.filter(id=book_list.id).exists()\n\n\[email protected]_tag(takes_context=True)\ndef get_relationship(context, user_object):\n \"\"\"caches the relationship between the logged in user and another user\"\"\"\n user = context[\"request\"].user\n return get_or_set(\n f\"cached-relationship-{user.id}-{user_object.id}\",\n get_relationship_name,\n user,\n user_object,\n timeout=259200,\n )\n\n\ndef get_relationship_name(user, user_object):\n \"\"\"returns the relationship type\"\"\"\n types = {\n \"is_following\": False,\n \"is_follow_pending\": False,\n \"is_blocked\": False,\n }\n if user_object in user.blocks.all():\n types[\"is_blocked\"] = True\n elif user_object in user.following.all():\n types[\"is_following\"] = True\n elif user in user_object.follower_requests.all():\n types[\"is_follow_pending\"] = True\n return types\n", "path": "bookwyrm/templatetags/interaction.py"}]} | 3,135 | 348 |
gh_patches_debug_8456 | rasdani/github-patches | git_diff | microsoft__ptvsd-797 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add ability to launch the debugger in non-debug mode
Currently we can only launch the debugger in non-debug mode when using `-m`.
I'd like to have the same capability by importing ptvsd and invoking a function, similar to how debugging is started via the `debug` function in `debugger.py`.
Basically this is necessary to launch the debugger in non-debug mode when using a launcher script.
--- END ISSUE ---
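For context, the only programmatic entry point today is the debugging launch; a minimal sketch of how a launcher script would use it (the file name, port, and option values are placeholder assumptions; the signature comes from `debugger.py` below):
```python
# Hypothetical launcher script -- "app.py", the port, and the option values are
# assumptions chosen for illustration only.
from ptvsd.debugger import debug

# Existing API: always attaches the debugger before running the target.
debug("app.py", 5678, debug_id="1", debug_options=[], run_as="script")

# The issue asks for a comparable call that runs "app.py" *without* attaching
# the debugger; no such function exists in debugger.py yet.
```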
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ptvsd/debugger.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License. See LICENSE in the project root
3 # for license information.
4
5 import sys
6
7 from ptvsd._local import run_module, run_file
8
9
10 # TODO: not needed?
11 DONT_DEBUG = []
12
13 LOCALHOST = 'localhost'
14
15 RUNNERS = {
16 'module': run_module, # python -m spam
17 'script': run_file, # python spam.py
18 'code': run_file, # python -c 'print("spam")'
19 None: run_file, # catchall
20 }
21
22
23 def debug(filename, port_num, debug_id, debug_options, run_as,
24 _runners=RUNNERS, _extra=None, *args, **kwargs):
25 # TODO: docstring
26 if _extra is None:
27 _extra = sys.argv[1:]
28 address = (LOCALHOST, port_num)
29 try:
30 run = _runners[run_as]
31 except KeyError:
32 # TODO: fail?
33 run = _runners[None]
34 if _extra:
35 args = _extra + list(args)
36 kwargs.setdefault('singlesession', True)
37 run(address, filename, *args, **kwargs)
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py
--- a/ptvsd/debugger.py
+++ b/ptvsd/debugger.py
@@ -4,7 +4,7 @@
import sys
-from ptvsd._local import run_module, run_file
+from ptvsd._local import run_module, run_file, run_main
# TODO: not needed?
@@ -35,3 +35,9 @@
args = _extra + list(args)
kwargs.setdefault('singlesession', True)
run(address, filename, *args, **kwargs)
+
+
+def run(filename, port_num, run_as,
+ *args, **kwargs):
+ address = (LOCALHOST, port_num)
+ run_main(address, filename, run_as, *args, **kwargs)
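With the patch above applied, a launcher script gains a non-debug counterpart to `debug`; a short usage sketch (the target file name and port are assumptions):
```python
# Non-debug launch through the run() helper added by the patch; "app.py" and
# port 5678 are placeholder assumptions.
from ptvsd.debugger import run

# Internally this forwards to run_main(("localhost", 5678), "app.py", "script").
run("app.py", 5678, "script")
```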
| {"golden_diff": "diff --git a/ptvsd/debugger.py b/ptvsd/debugger.py\n--- a/ptvsd/debugger.py\n+++ b/ptvsd/debugger.py\n@@ -4,7 +4,7 @@\n \n import sys\n \n-from ptvsd._local import run_module, run_file\n+from ptvsd._local import run_module, run_file, run_main\n \n \n # TODO: not needed?\n@@ -35,3 +35,9 @@\n args = _extra + list(args)\n kwargs.setdefault('singlesession', True)\n run(address, filename, *args, **kwargs)\n+\n+\n+def run(filename, port_num, run_as,\n+ *args, **kwargs):\n+ address = (LOCALHOST, port_num)\n+ run_main(address, filename, run_as, *args, **kwargs)\n", "issue": "Add ability to launch the debugger in non-debug mode\nCurrently we can only launch the debugger in non-debug mode when using `-m`.\r\nI'd like to have the same feature by importing PTVSD and invoking a function, similar to debugging using the `debug` function in `debugger.py`\r\n\r\nBasically this is necessary to launch the debugger in non-debug mode when using a launcher script.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\nfrom ptvsd._local import run_module, run_file\n\n\n# TODO: not needed?\nDONT_DEBUG = []\n\nLOCALHOST = 'localhost'\n\nRUNNERS = {\n 'module': run_module, # python -m spam\n 'script': run_file, # python spam.py\n 'code': run_file, # python -c 'print(\"spam\")'\n None: run_file, # catchall\n}\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as,\n _runners=RUNNERS, _extra=None, *args, **kwargs):\n # TODO: docstring\n if _extra is None:\n _extra = sys.argv[1:]\n address = (LOCALHOST, port_num)\n try:\n run = _runners[run_as]\n except KeyError:\n # TODO: fail?\n run = _runners[None]\n if _extra:\n args = _extra + list(args)\n kwargs.setdefault('singlesession', True)\n run(address, filename, *args, **kwargs)\n", "path": "ptvsd/debugger.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See LICENSE in the project root\n# for license information.\n\nimport sys\n\nfrom ptvsd._local import run_module, run_file, run_main\n\n\n# TODO: not needed?\nDONT_DEBUG = []\n\nLOCALHOST = 'localhost'\n\nRUNNERS = {\n 'module': run_module, # python -m spam\n 'script': run_file, # python spam.py\n 'code': run_file, # python -c 'print(\"spam\")'\n None: run_file, # catchall\n}\n\n\ndef debug(filename, port_num, debug_id, debug_options, run_as,\n _runners=RUNNERS, _extra=None, *args, **kwargs):\n # TODO: docstring\n if _extra is None:\n _extra = sys.argv[1:]\n address = (LOCALHOST, port_num)\n try:\n run = _runners[run_as]\n except KeyError:\n # TODO: fail?\n run = _runners[None]\n if _extra:\n args = _extra + list(args)\n kwargs.setdefault('singlesession', True)\n run(address, filename, *args, **kwargs)\n\n\ndef run(filename, port_num, run_as,\n *args, **kwargs):\n address = (LOCALHOST, port_num)\n run_main(address, filename, run_as, *args, **kwargs)\n", "path": "ptvsd/debugger.py"}]} | 677 | 183 |
gh_patches_debug_6322 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2968 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Upgrade to latest uvicorn
I can't install the latest uvicorn (0.23) together with the latest strawberry (0.195.2) and the debug server. I can only install either uvicorn and strawberry without the debug server, or uvicorn 0.21.3 and strawberry with the debug server.
## System Information
 - Operating system: Arch Linux
 - Strawberry version: 0.195.2 (latest at the time of the report)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1 import nox
2 from nox_poetry import Session, session
3
4 nox.options.reuse_existing_virtualenvs = True
5 nox.options.error_on_external_run = True
6
7 PYTHON_VERSIONS = ["3.11", "3.10", "3.9", "3.8", "3.7"]
8
9
10 COMMON_PYTEST_OPTIONS = [
11 "--cov=.",
12 "--cov-append",
13 "--cov-report=xml",
14 "-n",
15 "auto",
16 "--showlocals",
17 "-vv",
18 "--ignore=tests/mypy",
19 "--ignore=tests/pyright",
20 "--ignore=tests/cli",
21 # TODO: reintroduce this in its own test session
22 "--ignore=tests/experimental/pydantic",
23 ]
24
25 INTEGRATIONS = [
26 "asgi",
27 "aiohttp",
28 "chalice",
29 "channels",
30 "django",
31 "fastapi",
32 "flask",
33 "sanic",
34 "starlite",
35 "pydantic",
36 ]
37
38
39 @session(python=PYTHON_VERSIONS, name="Tests", tags=["tests"])
40 def tests(session: Session) -> None:
41 session.run_always("poetry", "install", external=True)
42
43 markers = (
44 ["-m", f"not {integration}", f"--ignore=tests/{integration}"]
45 for integration in INTEGRATIONS
46 )
47 markers = [item for sublist in markers for item in sublist]
48
49 session.run(
50 "pytest",
51 *COMMON_PYTEST_OPTIONS,
52 *markers,
53 )
54
55
56 @session(python=["3.11"], name="Django tests", tags=["tests"])
57 @nox.parametrize("django", ["4.2.0", "4.1.0", "4.0.0", "3.2.0"])
58 def tests_django(session: Session, django: str) -> None:
59 session.run_always("poetry", "install", external=True)
60
61 session._session.install(f"django~={django}") # type: ignore
62 session._session.install("pytest-django") # type: ignore
63
64 session.run("pytest", *COMMON_PYTEST_OPTIONS, "-m", "django")
65
66
67 @session(python=["3.11"], name="Starlette tests", tags=["tests"])
68 @nox.parametrize("starlette", ["0.28.0", "0.27.0", "0.26.1"])
69 def tests_starlette(session: Session, starlette: str) -> None:
70 session.run_always("poetry", "install", external=True)
71
72 session._session.install(f"starlette=={starlette}") # type: ignore
73
74 session.run("pytest", *COMMON_PYTEST_OPTIONS, "-m", "asgi")
75
76
77 @session(python=["3.11"], name="Test integrations", tags=["tests"])
78 @nox.parametrize(
79 "integration",
80 [
81 "aiohttp",
82 "chalice",
83 "channels",
84 "fastapi",
85 "flask",
86 "sanic",
87 "starlite",
88 ],
89 )
90 def tests_integrations(session: Session, integration: str) -> None:
91 session.run_always("poetry", "install", external=True)
92
93 session._session.install(integration) # type: ignore
94
95 if integration == "aiohttp":
96 session._session.install("pytest-aiohttp") # type: ignore
97 elif integration == "flask":
98 session._session.install("pytest-flask") # type: ignore
99 elif integration == "channels":
100 session._session.install("pytest-django") # type: ignore
101 session._session.install("daphne") # type: ignore
102 elif integration == "starlite":
103 session._session.install("pydantic<2.0") # type: ignore
104
105 session.run("pytest", *COMMON_PYTEST_OPTIONS, "-m", integration)
106
107
108 @session(python=["3.11"], name="Pydantic tests", tags=["tests"])
109 # TODO: add pydantic 2.0 here :)
110 @nox.parametrize("pydantic", ["1.10"])
111 def test_pydantic(session: Session, pydantic: str) -> None:
112 session.run_always("poetry", "install", external=True)
113
114 session._session.install(f"pydantic~={pydantic}") # type: ignore
115
116 session.run(
117 "pytest",
118 "--cov=.",
119 "--cov-append",
120 "--cov-report=xml",
121 "-m",
122 "pydantic",
123 )
124
125
126 @session(python=PYTHON_VERSIONS, name="Mypy tests")
127 def tests_mypy(session: Session) -> None:
128 session.run_always("poetry", "install", "--with", "integrations", external=True)
129
130 session.run(
131 "pytest",
132 "--cov=.",
133 "--cov-append",
134 "--cov-report=xml",
135 "tests/mypy",
136 "-vv",
137 )
138
139
140 @session(python=PYTHON_VERSIONS, name="Pyright tests", tags=["tests"])
141 def tests_pyright(session: Session) -> None:
142 session.run_always("poetry", "install", external=True)
143 session.install("pyright")
144
145 session.run(
146 "pytest",
147 "--cov=.",
148 "--cov-append",
149 "--cov-report=xml",
150 "tests/pyright",
151 "-vv",
152 )
153
154
155 @session(name="Mypy", tags=["lint"])
156 def mypy(session: Session) -> None:
157 session.run_always("poetry", "install", "--with", "integrations", external=True)
158
159 session.run("mypy", "--config-file", "mypy.ini")
160
161
162 @session(python=PYTHON_VERSIONS, name="CLI tests", tags=["tests"])
163 def tests_cli(session: Session) -> None:
164 session.run_always("poetry", "install", external=True)
165
166 session._session.install("uvicorn") # type: ignore
167 session._session.install("starlette") # type: ignore
168
169 session.run(
170 "pytest",
171 "--cov=.",
172 "--cov-append",
173 "--cov-report=xml",
174 "tests/cli",
175 "-vv",
176 )
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -18,7 +18,6 @@
"--ignore=tests/mypy",
"--ignore=tests/pyright",
"--ignore=tests/cli",
- # TODO: reintroduce this in its own test session
"--ignore=tests/experimental/pydantic",
]
@@ -120,6 +119,7 @@
"--cov-report=xml",
"-m",
"pydantic",
+ "--ignore=tests/cli",
)
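The one-line fix keeps the CLI tests, which install their own uvicorn and starlette, out of the pydantic session; a condensed sketch of that session after the patch (it mirrors the noxfile shown above, assumes nox and nox-poetry are installed, and is not a drop-in replacement for the full file):
```python
# Condensed sketch of the patched "Pydantic tests" session from the noxfile above.
import nox
from nox_poetry import Session, session


@session(python=["3.11"], name="Pydantic tests", tags=["tests"])
@nox.parametrize("pydantic", ["1.10"])
def test_pydantic(session: Session, pydantic: str) -> None:
    session.run_always("poetry", "install", external=True)
    session._session.install(f"pydantic~={pydantic}")  # type: ignore
    # tests/cli pulls in uvicorn + starlette with their own version constraints,
    # so this session skips that directory and leaves it to the CLI session.
    session.run("pytest", "-m", "pydantic", "--ignore=tests/cli")
```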
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -18,7 +18,6 @@\n \"--ignore=tests/mypy\",\n \"--ignore=tests/pyright\",\n \"--ignore=tests/cli\",\n- # TODO: reintroduce this in its own test session\n \"--ignore=tests/experimental/pydantic\",\n ]\n \n@@ -120,6 +119,7 @@\n \"--cov-report=xml\",\n \"-m\",\n \"pydantic\",\n+ \"--ignore=tests/cli\",\n )\n", "issue": "Upgrade to latest uvicorn\nA cant install latest uvicorn 0.23 with latest strawberry 0.195.2 and debug-server, just uvicorn and strawberry, without debug server, or uvicorn 0.21.3 and strawberry with debug server\r\n\r\n## Describe the Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## System Information\r\n\r\n - Arch Linux \r\n - Strawberry version (if applicable): latest\r\n\r\n## Additional Context\n\n<!-- POLAR PLEDGE BADGE START -->\n## Upvote & Fund\n\n- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.\n- We receive the funding once the issue is completed & confirmed by you.\n- Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/strawberry-graphql/strawberry/issues/2956\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2956/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2956/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "import nox\nfrom nox_poetry import Session, session\n\nnox.options.reuse_existing_virtualenvs = True\nnox.options.error_on_external_run = True\n\nPYTHON_VERSIONS = [\"3.11\", \"3.10\", \"3.9\", \"3.8\", \"3.7\"]\n\n\nCOMMON_PYTEST_OPTIONS = [\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"-n\",\n \"auto\",\n \"--showlocals\",\n \"-vv\",\n \"--ignore=tests/mypy\",\n \"--ignore=tests/pyright\",\n \"--ignore=tests/cli\",\n # TODO: reintroduce this in its own test session\n \"--ignore=tests/experimental/pydantic\",\n]\n\nINTEGRATIONS = [\n \"asgi\",\n \"aiohttp\",\n \"chalice\",\n \"channels\",\n \"django\",\n \"fastapi\",\n \"flask\",\n \"sanic\",\n \"starlite\",\n \"pydantic\",\n]\n\n\n@session(python=PYTHON_VERSIONS, name=\"Tests\", tags=[\"tests\"])\ndef tests(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n markers = (\n [\"-m\", f\"not {integration}\", f\"--ignore=tests/{integration}\"]\n for integration in INTEGRATIONS\n )\n markers = [item for sublist in markers for item in sublist]\n\n session.run(\n \"pytest\",\n *COMMON_PYTEST_OPTIONS,\n *markers,\n )\n\n\n@session(python=[\"3.11\"], name=\"Django tests\", tags=[\"tests\"])\[email protected](\"django\", [\"4.2.0\", \"4.1.0\", \"4.0.0\", \"3.2.0\"])\ndef tests_django(session: Session, django: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(f\"django~={django}\") # type: ignore\n session._session.install(\"pytest-django\") # type: ignore\n\n session.run(\"pytest\", *COMMON_PYTEST_OPTIONS, \"-m\", \"django\")\n\n\n@session(python=[\"3.11\"], name=\"Starlette tests\", tags=[\"tests\"])\[email protected](\"starlette\", [\"0.28.0\", \"0.27.0\", \"0.26.1\"])\ndef tests_starlette(session: Session, starlette: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(f\"starlette=={starlette}\") # type: ignore\n\n 
session.run(\"pytest\", *COMMON_PYTEST_OPTIONS, \"-m\", \"asgi\")\n\n\n@session(python=[\"3.11\"], name=\"Test integrations\", tags=[\"tests\"])\[email protected](\n \"integration\",\n [\n \"aiohttp\",\n \"chalice\",\n \"channels\",\n \"fastapi\",\n \"flask\",\n \"sanic\",\n \"starlite\",\n ],\n)\ndef tests_integrations(session: Session, integration: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(integration) # type: ignore\n\n if integration == \"aiohttp\":\n session._session.install(\"pytest-aiohttp\") # type: ignore\n elif integration == \"flask\":\n session._session.install(\"pytest-flask\") # type: ignore\n elif integration == \"channels\":\n session._session.install(\"pytest-django\") # type: ignore\n session._session.install(\"daphne\") # type: ignore\n elif integration == \"starlite\":\n session._session.install(\"pydantic<2.0\") # type: ignore\n\n session.run(\"pytest\", *COMMON_PYTEST_OPTIONS, \"-m\", integration)\n\n\n@session(python=[\"3.11\"], name=\"Pydantic tests\", tags=[\"tests\"])\n# TODO: add pydantic 2.0 here :)\[email protected](\"pydantic\", [\"1.10\"])\ndef test_pydantic(session: Session, pydantic: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(f\"pydantic~={pydantic}\") # type: ignore\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"-m\",\n \"pydantic\",\n )\n\n\n@session(python=PYTHON_VERSIONS, name=\"Mypy tests\")\ndef tests_mypy(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", \"--with\", \"integrations\", external=True)\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"tests/mypy\",\n \"-vv\",\n )\n\n\n@session(python=PYTHON_VERSIONS, name=\"Pyright tests\", tags=[\"tests\"])\ndef tests_pyright(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n session.install(\"pyright\")\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"tests/pyright\",\n \"-vv\",\n )\n\n\n@session(name=\"Mypy\", tags=[\"lint\"])\ndef mypy(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", \"--with\", \"integrations\", external=True)\n\n session.run(\"mypy\", \"--config-file\", \"mypy.ini\")\n\n\n@session(python=PYTHON_VERSIONS, name=\"CLI tests\", tags=[\"tests\"])\ndef tests_cli(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(\"uvicorn\") # type: ignore\n session._session.install(\"starlette\") # type: ignore\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"tests/cli\",\n \"-vv\",\n )\n", "path": "noxfile.py"}], "after_files": [{"content": "import nox\nfrom nox_poetry import Session, session\n\nnox.options.reuse_existing_virtualenvs = True\nnox.options.error_on_external_run = True\n\nPYTHON_VERSIONS = [\"3.11\", \"3.10\", \"3.9\", \"3.8\", \"3.7\"]\n\n\nCOMMON_PYTEST_OPTIONS = [\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"-n\",\n \"auto\",\n \"--showlocals\",\n \"-vv\",\n \"--ignore=tests/mypy\",\n \"--ignore=tests/pyright\",\n \"--ignore=tests/cli\",\n \"--ignore=tests/experimental/pydantic\",\n]\n\nINTEGRATIONS = [\n \"asgi\",\n \"aiohttp\",\n \"chalice\",\n \"channels\",\n \"django\",\n \"fastapi\",\n \"flask\",\n \"sanic\",\n \"starlite\",\n \"pydantic\",\n]\n\n\n@session(python=PYTHON_VERSIONS, name=\"Tests\", 
tags=[\"tests\"])\ndef tests(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n markers = (\n [\"-m\", f\"not {integration}\", f\"--ignore=tests/{integration}\"]\n for integration in INTEGRATIONS\n )\n markers = [item for sublist in markers for item in sublist]\n\n session.run(\n \"pytest\",\n *COMMON_PYTEST_OPTIONS,\n *markers,\n )\n\n\n@session(python=[\"3.11\"], name=\"Django tests\", tags=[\"tests\"])\[email protected](\"django\", [\"4.2.0\", \"4.1.0\", \"4.0.0\", \"3.2.0\"])\ndef tests_django(session: Session, django: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(f\"django~={django}\") # type: ignore\n session._session.install(\"pytest-django\") # type: ignore\n\n session.run(\"pytest\", *COMMON_PYTEST_OPTIONS, \"-m\", \"django\")\n\n\n@session(python=[\"3.11\"], name=\"Starlette tests\", tags=[\"tests\"])\[email protected](\"starlette\", [\"0.28.0\", \"0.27.0\", \"0.26.1\"])\ndef tests_starlette(session: Session, starlette: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(f\"starlette=={starlette}\") # type: ignore\n\n session.run(\"pytest\", *COMMON_PYTEST_OPTIONS, \"-m\", \"asgi\")\n\n\n@session(python=[\"3.11\"], name=\"Test integrations\", tags=[\"tests\"])\[email protected](\n \"integration\",\n [\n \"aiohttp\",\n \"chalice\",\n \"channels\",\n \"fastapi\",\n \"flask\",\n \"sanic\",\n \"starlite\",\n ],\n)\ndef tests_integrations(session: Session, integration: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(integration) # type: ignore\n\n if integration == \"aiohttp\":\n session._session.install(\"pytest-aiohttp\") # type: ignore\n elif integration == \"flask\":\n session._session.install(\"pytest-flask\") # type: ignore\n elif integration == \"channels\":\n session._session.install(\"pytest-django\") # type: ignore\n session._session.install(\"daphne\") # type: ignore\n elif integration == \"starlite\":\n session._session.install(\"pydantic<2.0\") # type: ignore\n\n session.run(\"pytest\", *COMMON_PYTEST_OPTIONS, \"-m\", integration)\n\n\n@session(python=[\"3.11\"], name=\"Pydantic tests\", tags=[\"tests\"])\n# TODO: add pydantic 2.0 here :)\[email protected](\"pydantic\", [\"1.10\"])\ndef test_pydantic(session: Session, pydantic: str) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(f\"pydantic~={pydantic}\") # type: ignore\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"-m\",\n \"pydantic\",\n \"--ignore=tests/cli\",\n )\n\n\n@session(python=PYTHON_VERSIONS, name=\"Mypy tests\")\ndef tests_mypy(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", \"--with\", \"integrations\", external=True)\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"tests/mypy\",\n \"-vv\",\n )\n\n\n@session(python=PYTHON_VERSIONS, name=\"Pyright tests\", tags=[\"tests\"])\ndef tests_pyright(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n session.install(\"pyright\")\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"tests/pyright\",\n \"-vv\",\n )\n\n\n@session(name=\"Mypy\", tags=[\"lint\"])\ndef mypy(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", \"--with\", \"integrations\", external=True)\n\n session.run(\"mypy\", 
\"--config-file\", \"mypy.ini\")\n\n\n@session(python=PYTHON_VERSIONS, name=\"CLI tests\", tags=[\"tests\"])\ndef tests_cli(session: Session) -> None:\n session.run_always(\"poetry\", \"install\", external=True)\n\n session._session.install(\"uvicorn\") # type: ignore\n session._session.install(\"starlette\") # type: ignore\n\n session.run(\n \"pytest\",\n \"--cov=.\",\n \"--cov-append\",\n \"--cov-report=xml\",\n \"tests/cli\",\n \"-vv\",\n )\n", "path": "noxfile.py"}]} | 2,331 | 131 |
gh_patches_debug_1811 | rasdani/github-patches | git_diff | iterative__dvc-2364 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
status: change nothing to reproduce message
If I use DVC only to version data/models and don't care about pipelines, this message:
`Pipelines are up to date. Nothing to reproduce.`
looks really strange.
Let's change it to something more generic:
`Data and pipelines are up to date.`
or something similar
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/command/status.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import logging
4
5 from dvc.command.data_sync import CmdDataBase
6 from dvc.utils.compat import str
7
8
9 logger = logging.getLogger(__name__)
10
11
12 class CmdDataStatus(CmdDataBase):
13 STATUS_LEN = 20
14 STATUS_INDENT = "\t"
15 UP_TO_DATE_MSG = "Pipelines are up to date. Nothing to reproduce."
16
17 def _normalize(self, s):
18 s += ":"
19 assert len(s) < self.STATUS_LEN
20 return s + (self.STATUS_LEN - len(s)) * " "
21
22 def _show(self, status, indent=0):
23 ind = indent * self.STATUS_INDENT
24
25 if isinstance(status, str):
26 logger.info("{}{}".format(ind, status))
27 return
28
29 if isinstance(status, list):
30 for entry in status:
31 self._show(entry, indent)
32 return
33
34 assert isinstance(status, dict)
35
36 for key, value in status.items():
37 if isinstance(value, str):
38 logger.info("{}{}{}".format(ind, self._normalize(value), key))
39 elif value:
40 logger.info("{}{}:".format(ind, key))
41 self._show(value, indent + 1)
42
43 def run(self):
44 indent = 1 if self.args.cloud else 0
45 try:
46 st = self.repo.status(
47 targets=self.args.targets,
48 jobs=self.args.jobs,
49 cloud=self.args.cloud,
50 remote=self.args.remote,
51 all_branches=self.args.all_branches,
52 all_tags=self.args.all_tags,
53 with_deps=self.args.with_deps,
54 )
55 if st:
56 if self.args.quiet:
57 return 1
58 else:
59 self._show(st, indent)
60 else:
61 logger.info(self.UP_TO_DATE_MSG)
62
63 except Exception:
64 logger.exception("failed to obtain data status")
65 return 1
66 return 0
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/command/status.py b/dvc/command/status.py
--- a/dvc/command/status.py
+++ b/dvc/command/status.py
@@ -12,7 +12,7 @@
class CmdDataStatus(CmdDataBase):
STATUS_LEN = 20
STATUS_INDENT = "\t"
- UP_TO_DATE_MSG = "Pipelines are up to date. Nothing to reproduce."
+ UP_TO_DATE_MSG = "Data and pipelines are up to date."
def _normalize(self, s):
s += ":"
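Because the patch only swaps the message constant, the effect is easiest to see in isolation; a small self-contained sketch (the logger setup is an illustrative assumption, this is not dvc code):
```python
# Stand-alone sketch of the reporting branch -- not dvc itself, just the idea.
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("status-demo")

UP_TO_DATE_MSG = "Data and pipelines are up to date."  # wording from the patch


def report(status: dict) -> int:
    if status:                       # something is stale -> list it
        for target, state in status.items():
            logger.info("%s: %s", target, state)
        return 1
    logger.info(UP_TO_DATE_MSG)      # nothing stale, for data or pipelines
    return 0


report({})  # prints: Data and pipelines are up to date.
```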
| {"golden_diff": "diff --git a/dvc/command/status.py b/dvc/command/status.py\n--- a/dvc/command/status.py\n+++ b/dvc/command/status.py\n@@ -12,7 +12,7 @@\n class CmdDataStatus(CmdDataBase):\n STATUS_LEN = 20\n STATUS_INDENT = \"\\t\"\n- UP_TO_DATE_MSG = \"Pipelines are up to date. Nothing to reproduce.\"\n+ UP_TO_DATE_MSG = \"Data and pipelines are up to date.\"\n \n def _normalize(self, s):\n s += \":\"\n", "issue": "status: change nothing to reproduce message\nIf I use DVC only to version data/models and don't care about pipelines, this message:\r\n\r\n`Pipelines are up to date. Nothing to reproduce.` \r\n\r\nlooks really strange.\r\n\r\nLet's change it to something more generic:\r\n\r\n`Data and pipelines are up to date.` \r\n\r\nor something similar\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\n\nfrom dvc.command.data_sync import CmdDataBase\nfrom dvc.utils.compat import str\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdDataStatus(CmdDataBase):\n STATUS_LEN = 20\n STATUS_INDENT = \"\\t\"\n UP_TO_DATE_MSG = \"Pipelines are up to date. Nothing to reproduce.\"\n\n def _normalize(self, s):\n s += \":\"\n assert len(s) < self.STATUS_LEN\n return s + (self.STATUS_LEN - len(s)) * \" \"\n\n def _show(self, status, indent=0):\n ind = indent * self.STATUS_INDENT\n\n if isinstance(status, str):\n logger.info(\"{}{}\".format(ind, status))\n return\n\n if isinstance(status, list):\n for entry in status:\n self._show(entry, indent)\n return\n\n assert isinstance(status, dict)\n\n for key, value in status.items():\n if isinstance(value, str):\n logger.info(\"{}{}{}\".format(ind, self._normalize(value), key))\n elif value:\n logger.info(\"{}{}:\".format(ind, key))\n self._show(value, indent + 1)\n\n def run(self):\n indent = 1 if self.args.cloud else 0\n try:\n st = self.repo.status(\n targets=self.args.targets,\n jobs=self.args.jobs,\n cloud=self.args.cloud,\n remote=self.args.remote,\n all_branches=self.args.all_branches,\n all_tags=self.args.all_tags,\n with_deps=self.args.with_deps,\n )\n if st:\n if self.args.quiet:\n return 1\n else:\n self._show(st, indent)\n else:\n logger.info(self.UP_TO_DATE_MSG)\n\n except Exception:\n logger.exception(\"failed to obtain data status\")\n return 1\n return 0\n", "path": "dvc/command/status.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\n\nfrom dvc.command.data_sync import CmdDataBase\nfrom dvc.utils.compat import str\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass CmdDataStatus(CmdDataBase):\n STATUS_LEN = 20\n STATUS_INDENT = \"\\t\"\n UP_TO_DATE_MSG = \"Data and pipelines are up to date.\"\n\n def _normalize(self, s):\n s += \":\"\n assert len(s) < self.STATUS_LEN\n return s + (self.STATUS_LEN - len(s)) * \" \"\n\n def _show(self, status, indent=0):\n ind = indent * self.STATUS_INDENT\n\n if isinstance(status, str):\n logger.info(\"{}{}\".format(ind, status))\n return\n\n if isinstance(status, list):\n for entry in status:\n self._show(entry, indent)\n return\n\n assert isinstance(status, dict)\n\n for key, value in status.items():\n if isinstance(value, str):\n logger.info(\"{}{}{}\".format(ind, self._normalize(value), key))\n elif value:\n logger.info(\"{}{}:\".format(ind, key))\n self._show(value, indent + 1)\n\n def run(self):\n indent = 1 if self.args.cloud else 0\n try:\n st = self.repo.status(\n targets=self.args.targets,\n jobs=self.args.jobs,\n cloud=self.args.cloud,\n remote=self.args.remote,\n all_branches=self.args.all_branches,\n 
all_tags=self.args.all_tags,\n with_deps=self.args.with_deps,\n )\n if st:\n if self.args.quiet:\n return 1\n else:\n self._show(st, indent)\n else:\n logger.info(self.UP_TO_DATE_MSG)\n\n except Exception:\n logger.exception(\"failed to obtain data status\")\n return 1\n return 0\n", "path": "dvc/command/status.py"}]} | 857 | 117 |
gh_patches_debug_43367 | rasdani/github-patches | git_diff | akvo__akvo-rsr-4519 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support saving nested comments at the `indicator_period_data_framework` endpoint
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `akvo/rest/serializers/indicator_period_data.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 from rest_framework import serializers
7 from django.db.models import Sum
8
9 from akvo.rest.serializers.disaggregation import DisaggregationSerializer, DisaggregationReadOnlySerializer
10 from akvo.rest.serializers.rsr_serializer import BaseRSRSerializer
11 from akvo.rest.serializers.user import UserDetailsSerializer
12 from akvo.rsr.models import (
13 IndicatorPeriod, IndicatorPeriodData, IndicatorPeriodDataComment, IndicatorPeriodDataFile, IndicatorPeriodDataPhoto,
14 IndicatorDimensionValue, Disaggregation
15 )
16 from akvo.utils import ensure_decimal
17
18
19 class IndicatorPeriodDataCommentSerializer(BaseRSRSerializer):
20
21 user_details = UserDetailsSerializer(read_only=True, source='user')
22
23 class Meta:
24 model = IndicatorPeriodDataComment
25 fields = '__all__'
26 read_only_fields = ['user']
27
28
29 class IndicatorPeriodDataFileSerializer(BaseRSRSerializer):
30 class Meta:
31 model = IndicatorPeriodDataFile
32 fields = '__all__'
33
34
35 class IndicatorPeriodDataPhotoSerializer(BaseRSRSerializer):
36 class Meta:
37 model = IndicatorPeriodDataPhoto
38 fields = '__all__'
39
40
41 class IndicatorPeriodDataSerializer(BaseRSRSerializer):
42
43 user_details = UserDetailsSerializer(read_only=True, source='user')
44 approver_details = UserDetailsSerializer(read_only=True, source='approved_by')
45 status_display = serializers.ReadOnlyField()
46 photo_url = serializers.ReadOnlyField()
47 file_url = serializers.ReadOnlyField()
48
49 class Meta:
50 model = IndicatorPeriodData
51 fields = '__all__'
52 read_only_fields = ['user']
53
54
55 class IndicatorPeriodDataLiteSerializer(BaseRSRSerializer):
56
57 user_details = UserDetailsSerializer(required=False, source='user')
58 status_display = serializers.ReadOnlyField()
59 photo_url = serializers.ReadOnlyField()
60 file_url = serializers.ReadOnlyField()
61 disaggregations = DisaggregationReadOnlySerializer(many=True, required=False)
62 value = serializers.SerializerMethodField()
63 file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')
64 photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')
65 comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)
66
67 def get_value(self, obj):
68 return ensure_decimal(obj.value)
69
70 class Meta:
71 model = IndicatorPeriodData
72 fields = (
73 'id', 'user_details', 'status', 'status_display', 'update_method', 'value', 'numerator', 'denominator', 'text',
74 'disaggregations', 'narrative', 'photo_url', 'file_url', 'period_actual_value', 'created_at', 'last_modified_at',
75 'file_set', 'photo_set', 'review_note', 'comments',
76 )
77
78
79 class IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):
80
81 period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())
82 comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)
83 disaggregations = DisaggregationSerializer(many=True, required=False)
84 user_details = UserDetailsSerializer(read_only=True, source='user')
85 approver_details = UserDetailsSerializer(read_only=True, source='approved_by')
86 status_display = serializers.ReadOnlyField()
87 photo_url = serializers.ReadOnlyField()
88 file_url = serializers.ReadOnlyField()
89 period_can_add_update = serializers.ReadOnlyField(source='period.can_save_update')
90 files = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)
91 photos = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)
92 file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')
93 photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')
94
95 class Meta:
96 model = IndicatorPeriodData
97 fields = '__all__'
98 read_only_fields = ['user']
99
100 def create(self, validated_data):
101 self._validate_disaggregations(
102 self._disaggregations_data,
103 value=ensure_decimal(validated_data.get('value', 0)),
104 numerator=ensure_decimal(validated_data.get('numerator', None)),
105 denominator=ensure_decimal(validated_data.get('denominator', None))
106 )
107 """Over-ridden to handle nested writes."""
108 files = validated_data.pop('files', [])
109 photos = validated_data.pop('photos', [])
110 update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)
111 for disaggregation in self._disaggregations_data:
112 disaggregation['update'] = update.id
113 if 'type_id' in disaggregation and 'dimension_value' not in disaggregation:
114 disaggregation['dimension_value'] = disaggregation['type_id']
115 serializer = DisaggregationSerializer(data=disaggregation)
116 serializer.is_valid(raise_exception=True)
117 serializer.create(serializer.validated_data)
118 for file in files:
119 IndicatorPeriodDataFile.objects.create(update=update, file=file)
120 for photo in photos:
121 IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)
122
123 return update
124
125 def update(self, instance, validated_data):
126 self._validate_disaggregations(
127 self._disaggregations_data,
128 value=ensure_decimal(validated_data.get('value', instance.value)),
129 numerator=ensure_decimal(validated_data.get('numerator', instance.numerator)),
130 denominator=ensure_decimal(validated_data.get('denominator', instance.denominator)),
131 update=instance
132 )
133 """Over-ridden to handle nested updates."""
134 files = validated_data.pop('files', [])
135 photos = validated_data.pop('photos', [])
136 super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)
137 for disaggregation in self._disaggregations_data:
138 disaggregation['update'] = instance.id
139 serializer = DisaggregationSerializer(data=disaggregation)
140 serializer.is_valid(raise_exception=True)
141 disaggregation_instance, _ = instance.disaggregations.get_or_create(
142 update=instance,
143 dimension_value=serializer.validated_data['dimension_value'],
144 )
145 serializer.update(disaggregation_instance, serializer.validated_data)
146 for file in files:
147 IndicatorPeriodDataFile.objects.create(update=instance, file=file)
148 for photo in photos:
149 IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)
150
151 return instance._meta.model.objects.select_related(
152 'period',
153 'user',
154 'approved_by',
155 ).prefetch_related(
156 'comments',
157 'disaggregations',
158 ).get(id=instance.id)
159
160 def _validate_disaggregations(self, disaggregations, value, numerator=None, denominator=None, update=None):
161 adjustments = {}
162 for disaggregation in disaggregations:
163 type_id = disaggregation.get('type_id', disaggregation.get('dimension_value', None))
164 if type_id is None:
165 continue
166 if denominator is not None:
167 disaggregation_denominator = ensure_decimal(disaggregation.get('denominator', 0))
168 if disaggregation_denominator > denominator:
169 raise serializers.ValidationError("disaggregations denominator should not exceed update denominator")
170 category = IndicatorDimensionValue.objects.get(pk=type_id).name
171 if category.id not in adjustments:
172 adjustments[category.id] = {'values': 0, 'numerators': 0, 'type_ids': []}
173 adjustments[category.id]['values'] += ensure_decimal(disaggregation.get('value', 0))
174 adjustments[category.id]['numerators'] += ensure_decimal(disaggregation.get('numerator', 0))
175 adjustments[category.id]['type_ids'].append(type_id)
176 for key, adjustment in adjustments.items():
177 unmodifieds = Disaggregation.objects.filter(update=update, dimension_value__name=key)\
178 .exclude(dimension_value__in=adjustment['type_ids'])\
179 .aggregate(values=Sum('value'))
180 total = adjustment['values'] + ensure_decimal(unmodifieds['values'])
181 if numerator is not None and adjustment['numerators'] > numerator:
182 raise serializers.ValidationError("The disaggregation numerator should not exceed update numerator")
183 if total > value:
184 raise serializers.ValidationError("The accumulated disaggregations value should not exceed update value")
185
186 def is_valid(self, raise_exception=False):
187 # HACK to allow nested posting...
188 self._disaggregations_data = self.initial_data.pop('disaggregations', [])
189 super(IndicatorPeriodDataFrameworkSerializer, self).is_valid(raise_exception=raise_exception)
190
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/akvo/rest/serializers/indicator_period_data.py b/akvo/rest/serializers/indicator_period_data.py
--- a/akvo/rest/serializers/indicator_period_data.py
+++ b/akvo/rest/serializers/indicator_period_data.py
@@ -26,6 +26,15 @@
read_only_fields = ['user']
+class IndicatorPeriodDataCommentNestedSerializer(BaseRSRSerializer):
+ id = serializers.IntegerField(required=False)
+
+ class Meta:
+ model = IndicatorPeriodDataComment
+ fields = '__all__'
+ read_only_fields = ('id', 'data', 'user')
+
+
class IndicatorPeriodDataFileSerializer(BaseRSRSerializer):
class Meta:
model = IndicatorPeriodDataFile
@@ -79,7 +88,7 @@
class IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):
period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())
- comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)
+ comments = IndicatorPeriodDataCommentNestedSerializer(many=True, required=False)
disaggregations = DisaggregationSerializer(many=True, required=False)
user_details = UserDetailsSerializer(read_only=True, source='user')
approver_details = UserDetailsSerializer(read_only=True, source='approved_by')
@@ -107,6 +116,7 @@
"""Over-ridden to handle nested writes."""
files = validated_data.pop('files', [])
photos = validated_data.pop('photos', [])
+ comments = validated_data.pop('comments', [])
update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)
for disaggregation in self._disaggregations_data:
disaggregation['update'] = update.id
@@ -119,6 +129,8 @@
IndicatorPeriodDataFile.objects.create(update=update, file=file)
for photo in photos:
IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)
+ for comment in comments:
+ IndicatorPeriodDataComment.objects.create(data=update, user=update.user, comment=comment['comment'])
return update
@@ -133,6 +145,7 @@
"""Over-ridden to handle nested updates."""
files = validated_data.pop('files', [])
photos = validated_data.pop('photos', [])
+ comments = validated_data.pop('comments', [])
super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)
for disaggregation in self._disaggregations_data:
disaggregation['update'] = instance.id
@@ -147,6 +160,18 @@
IndicatorPeriodDataFile.objects.create(update=instance, file=file)
for photo in photos:
IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)
+ for comment in comments:
+ comment_id = int(comment.get('id', 0))
+ comment_txt = str(comment.get('comment', ''))
+ if not comment_id:
+ IndicatorPeriodDataComment.objects.create(data=instance, user=instance.user, comment=comment['comment'])
+ else:
+ comment_obj = IndicatorPeriodDataComment.objects.get(id=comment_id)
+ if not comment_txt:
+ comment_obj.delete()
+ else:
+ comment_obj.comment = comment_txt
+ comment_obj.save()
return instance._meta.model.objects.select_related(
'period',
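After this patch the framework endpoint accepts nested comment objects on writes; a hedged sketch of a client request (the URL path, ids, and token are assumptions inferred from the serializer, not documented API values):
```python
# Hypothetical client call -- endpoint path, ids, and token are assumptions.
import requests

payload = {
    "comments": [
        {"comment": "Numbers re-checked against the field report"},  # no id   -> created
        {"id": 567, "comment": "Corrected the earlier remark"},      # id+text -> replaced
        {"id": 568, "comment": ""},                                  # id+empty -> deleted
    ],
}

requests.patch(
    "https://rsr.akvo.org/rest/v1/indicator_period_data_framework/9876/?format=json",
    json=payload,
    headers={"Authorization": "Token <api-token>"},
    timeout=30,
)
```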
| {"golden_diff": "diff --git a/akvo/rest/serializers/indicator_period_data.py b/akvo/rest/serializers/indicator_period_data.py\n--- a/akvo/rest/serializers/indicator_period_data.py\n+++ b/akvo/rest/serializers/indicator_period_data.py\n@@ -26,6 +26,15 @@\n read_only_fields = ['user']\n \n \n+class IndicatorPeriodDataCommentNestedSerializer(BaseRSRSerializer):\n+ id = serializers.IntegerField(required=False)\n+\n+ class Meta:\n+ model = IndicatorPeriodDataComment\n+ fields = '__all__'\n+ read_only_fields = ('id', 'data', 'user')\n+\n+\n class IndicatorPeriodDataFileSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataFile\n@@ -79,7 +88,7 @@\n class IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):\n \n period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())\n- comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)\n+ comments = IndicatorPeriodDataCommentNestedSerializer(many=True, required=False)\n disaggregations = DisaggregationSerializer(many=True, required=False)\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n@@ -107,6 +116,7 @@\n \"\"\"Over-ridden to handle nested writes.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n+ comments = validated_data.pop('comments', [])\n update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = update.id\n@@ -119,6 +129,8 @@\n IndicatorPeriodDataFile.objects.create(update=update, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)\n+ for comment in comments:\n+ IndicatorPeriodDataComment.objects.create(data=update, user=update.user, comment=comment['comment'])\n \n return update\n \n@@ -133,6 +145,7 @@\n \"\"\"Over-ridden to handle nested updates.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n+ comments = validated_data.pop('comments', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = instance.id\n@@ -147,6 +160,18 @@\n IndicatorPeriodDataFile.objects.create(update=instance, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)\n+ for comment in comments:\n+ comment_id = int(comment.get('id', 0))\n+ comment_txt = str(comment.get('comment', ''))\n+ if not comment_id:\n+ IndicatorPeriodDataComment.objects.create(data=instance, user=instance.user, comment=comment['comment'])\n+ else:\n+ comment_obj = IndicatorPeriodDataComment.objects.get(id=comment_id)\n+ if not comment_txt:\n+ comment_obj.delete()\n+ else:\n+ comment_obj.comment = comment_txt\n+ comment_obj.save()\n \n return instance._meta.model.objects.select_related(\n 'period',\n", "issue": "support saving nested comments at indicator_period_data_framework endpoint\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\nfrom rest_framework import serializers\nfrom django.db.models import Sum\n\nfrom 
akvo.rest.serializers.disaggregation import DisaggregationSerializer, DisaggregationReadOnlySerializer\nfrom akvo.rest.serializers.rsr_serializer import BaseRSRSerializer\nfrom akvo.rest.serializers.user import UserDetailsSerializer\nfrom akvo.rsr.models import (\n IndicatorPeriod, IndicatorPeriodData, IndicatorPeriodDataComment, IndicatorPeriodDataFile, IndicatorPeriodDataPhoto,\n IndicatorDimensionValue, Disaggregation\n)\nfrom akvo.utils import ensure_decimal\n\n\nclass IndicatorPeriodDataCommentSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(read_only=True, source='user')\n\n class Meta:\n model = IndicatorPeriodDataComment\n fields = '__all__'\n read_only_fields = ['user']\n\n\nclass IndicatorPeriodDataFileSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataFile\n fields = '__all__'\n\n\nclass IndicatorPeriodDataPhotoSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataPhoto\n fields = '__all__'\n\n\nclass IndicatorPeriodDataSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n\n class Meta:\n model = IndicatorPeriodData\n fields = '__all__'\n read_only_fields = ['user']\n\n\nclass IndicatorPeriodDataLiteSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(required=False, source='user')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n disaggregations = DisaggregationReadOnlySerializer(many=True, required=False)\n value = serializers.SerializerMethodField()\n file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')\n photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')\n comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)\n\n def get_value(self, obj):\n return ensure_decimal(obj.value)\n\n class Meta:\n model = IndicatorPeriodData\n fields = (\n 'id', 'user_details', 'status', 'status_display', 'update_method', 'value', 'numerator', 'denominator', 'text',\n 'disaggregations', 'narrative', 'photo_url', 'file_url', 'period_actual_value', 'created_at', 'last_modified_at',\n 'file_set', 'photo_set', 'review_note', 'comments',\n )\n\n\nclass IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):\n\n period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())\n comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)\n disaggregations = DisaggregationSerializer(many=True, required=False)\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n period_can_add_update = serializers.ReadOnlyField(source='period.can_save_update')\n files = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)\n photos = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)\n file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')\n photo_set = 
IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')\n\n class Meta:\n model = IndicatorPeriodData\n fields = '__all__'\n read_only_fields = ['user']\n\n def create(self, validated_data):\n self._validate_disaggregations(\n self._disaggregations_data,\n value=ensure_decimal(validated_data.get('value', 0)),\n numerator=ensure_decimal(validated_data.get('numerator', None)),\n denominator=ensure_decimal(validated_data.get('denominator', None))\n )\n \"\"\"Over-ridden to handle nested writes.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = update.id\n if 'type_id' in disaggregation and 'dimension_value' not in disaggregation:\n disaggregation['dimension_value'] = disaggregation['type_id']\n serializer = DisaggregationSerializer(data=disaggregation)\n serializer.is_valid(raise_exception=True)\n serializer.create(serializer.validated_data)\n for file in files:\n IndicatorPeriodDataFile.objects.create(update=update, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)\n\n return update\n\n def update(self, instance, validated_data):\n self._validate_disaggregations(\n self._disaggregations_data,\n value=ensure_decimal(validated_data.get('value', instance.value)),\n numerator=ensure_decimal(validated_data.get('numerator', instance.numerator)),\n denominator=ensure_decimal(validated_data.get('denominator', instance.denominator)),\n update=instance\n )\n \"\"\"Over-ridden to handle nested updates.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = instance.id\n serializer = DisaggregationSerializer(data=disaggregation)\n serializer.is_valid(raise_exception=True)\n disaggregation_instance, _ = instance.disaggregations.get_or_create(\n update=instance,\n dimension_value=serializer.validated_data['dimension_value'],\n )\n serializer.update(disaggregation_instance, serializer.validated_data)\n for file in files:\n IndicatorPeriodDataFile.objects.create(update=instance, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)\n\n return instance._meta.model.objects.select_related(\n 'period',\n 'user',\n 'approved_by',\n ).prefetch_related(\n 'comments',\n 'disaggregations',\n ).get(id=instance.id)\n\n def _validate_disaggregations(self, disaggregations, value, numerator=None, denominator=None, update=None):\n adjustments = {}\n for disaggregation in disaggregations:\n type_id = disaggregation.get('type_id', disaggregation.get('dimension_value', None))\n if type_id is None:\n continue\n if denominator is not None:\n disaggregation_denominator = ensure_decimal(disaggregation.get('denominator', 0))\n if disaggregation_denominator > denominator:\n raise serializers.ValidationError(\"disaggregations denominator should not exceed update denominator\")\n category = IndicatorDimensionValue.objects.get(pk=type_id).name\n if category.id not in adjustments:\n adjustments[category.id] = {'values': 0, 'numerators': 0, 'type_ids': []}\n adjustments[category.id]['values'] += ensure_decimal(disaggregation.get('value', 0))\n adjustments[category.id]['numerators'] 
+= ensure_decimal(disaggregation.get('numerator', 0))\n adjustments[category.id]['type_ids'].append(type_id)\n for key, adjustment in adjustments.items():\n unmodifieds = Disaggregation.objects.filter(update=update, dimension_value__name=key)\\\n .exclude(dimension_value__in=adjustment['type_ids'])\\\n .aggregate(values=Sum('value'))\n total = adjustment['values'] + ensure_decimal(unmodifieds['values'])\n if numerator is not None and adjustment['numerators'] > numerator:\n raise serializers.ValidationError(\"The disaggregation numerator should not exceed update numerator\")\n if total > value:\n raise serializers.ValidationError(\"The accumulated disaggregations value should not exceed update value\")\n\n def is_valid(self, raise_exception=False):\n # HACK to allow nested posting...\n self._disaggregations_data = self.initial_data.pop('disaggregations', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).is_valid(raise_exception=raise_exception)\n", "path": "akvo/rest/serializers/indicator_period_data.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\nfrom rest_framework import serializers\nfrom django.db.models import Sum\n\nfrom akvo.rest.serializers.disaggregation import DisaggregationSerializer, DisaggregationReadOnlySerializer\nfrom akvo.rest.serializers.rsr_serializer import BaseRSRSerializer\nfrom akvo.rest.serializers.user import UserDetailsSerializer\nfrom akvo.rsr.models import (\n IndicatorPeriod, IndicatorPeriodData, IndicatorPeriodDataComment, IndicatorPeriodDataFile, IndicatorPeriodDataPhoto,\n IndicatorDimensionValue, Disaggregation\n)\nfrom akvo.utils import ensure_decimal\n\n\nclass IndicatorPeriodDataCommentSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(read_only=True, source='user')\n\n class Meta:\n model = IndicatorPeriodDataComment\n fields = '__all__'\n read_only_fields = ['user']\n\n\nclass IndicatorPeriodDataCommentNestedSerializer(BaseRSRSerializer):\n id = serializers.IntegerField(required=False)\n\n class Meta:\n model = IndicatorPeriodDataComment\n fields = '__all__'\n read_only_fields = ('id', 'data', 'user')\n\n\nclass IndicatorPeriodDataFileSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataFile\n fields = '__all__'\n\n\nclass IndicatorPeriodDataPhotoSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataPhoto\n fields = '__all__'\n\n\nclass IndicatorPeriodDataSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n\n class Meta:\n model = IndicatorPeriodData\n fields = '__all__'\n read_only_fields = ['user']\n\n\nclass IndicatorPeriodDataLiteSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(required=False, source='user')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n disaggregations = DisaggregationReadOnlySerializer(many=True, required=False)\n value = serializers.SerializerMethodField()\n file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, 
source='indicatorperioddatafile_set')\n photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')\n comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)\n\n def get_value(self, obj):\n return ensure_decimal(obj.value)\n\n class Meta:\n model = IndicatorPeriodData\n fields = (\n 'id', 'user_details', 'status', 'status_display', 'update_method', 'value', 'numerator', 'denominator', 'text',\n 'disaggregations', 'narrative', 'photo_url', 'file_url', 'period_actual_value', 'created_at', 'last_modified_at',\n 'file_set', 'photo_set', 'review_note', 'comments',\n )\n\n\nclass IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):\n\n period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())\n comments = IndicatorPeriodDataCommentNestedSerializer(many=True, required=False)\n disaggregations = DisaggregationSerializer(many=True, required=False)\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n period_can_add_update = serializers.ReadOnlyField(source='period.can_save_update')\n files = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)\n photos = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)\n file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')\n photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')\n\n class Meta:\n model = IndicatorPeriodData\n fields = '__all__'\n read_only_fields = ['user']\n\n def create(self, validated_data):\n self._validate_disaggregations(\n self._disaggregations_data,\n value=ensure_decimal(validated_data.get('value', 0)),\n numerator=ensure_decimal(validated_data.get('numerator', None)),\n denominator=ensure_decimal(validated_data.get('denominator', None))\n )\n \"\"\"Over-ridden to handle nested writes.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n comments = validated_data.pop('comments', [])\n update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = update.id\n if 'type_id' in disaggregation and 'dimension_value' not in disaggregation:\n disaggregation['dimension_value'] = disaggregation['type_id']\n serializer = DisaggregationSerializer(data=disaggregation)\n serializer.is_valid(raise_exception=True)\n serializer.create(serializer.validated_data)\n for file in files:\n IndicatorPeriodDataFile.objects.create(update=update, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)\n for comment in comments:\n IndicatorPeriodDataComment.objects.create(data=update, user=update.user, comment=comment['comment'])\n\n return update\n\n def update(self, instance, validated_data):\n self._validate_disaggregations(\n self._disaggregations_data,\n value=ensure_decimal(validated_data.get('value', instance.value)),\n numerator=ensure_decimal(validated_data.get('numerator', instance.numerator)),\n denominator=ensure_decimal(validated_data.get('denominator', instance.denominator)),\n update=instance\n )\n \"\"\"Over-ridden to handle nested 
updates.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n comments = validated_data.pop('comments', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = instance.id\n serializer = DisaggregationSerializer(data=disaggregation)\n serializer.is_valid(raise_exception=True)\n disaggregation_instance, _ = instance.disaggregations.get_or_create(\n update=instance,\n dimension_value=serializer.validated_data['dimension_value'],\n )\n serializer.update(disaggregation_instance, serializer.validated_data)\n for file in files:\n IndicatorPeriodDataFile.objects.create(update=instance, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)\n for comment in comments:\n comment_id = int(comment.get('id', 0))\n comment_txt = str(comment.get('comment', ''))\n if not comment_id:\n IndicatorPeriodDataComment.objects.create(data=instance, user=instance.user, comment=comment['comment'])\n else:\n comment_obj = IndicatorPeriodDataComment.objects.get(id=comment_id)\n if not comment_txt:\n comment_obj.delete()\n else:\n comment_obj.comment = comment_txt\n comment_obj.save()\n\n return instance._meta.model.objects.select_related(\n 'period',\n 'user',\n 'approved_by',\n ).prefetch_related(\n 'comments',\n 'disaggregations',\n ).get(id=instance.id)\n\n def _validate_disaggregations(self, disaggregations, value, numerator=None, denominator=None, update=None):\n adjustments = {}\n for disaggregation in disaggregations:\n type_id = disaggregation.get('type_id', disaggregation.get('dimension_value', None))\n if type_id is None:\n continue\n if denominator is not None:\n disaggregation_denominator = ensure_decimal(disaggregation.get('denominator', 0))\n if disaggregation_denominator > denominator:\n raise serializers.ValidationError(\"disaggregations denominator should not exceed update denominator\")\n category = IndicatorDimensionValue.objects.get(pk=type_id).name\n if category.id not in adjustments:\n adjustments[category.id] = {'values': 0, 'numerators': 0, 'type_ids': []}\n adjustments[category.id]['values'] += ensure_decimal(disaggregation.get('value', 0))\n adjustments[category.id]['numerators'] += ensure_decimal(disaggregation.get('numerator', 0))\n adjustments[category.id]['type_ids'].append(type_id)\n for key, adjustment in adjustments.items():\n unmodifieds = Disaggregation.objects.filter(update=update, dimension_value__name=key)\\\n .exclude(dimension_value__in=adjustment['type_ids'])\\\n .aggregate(values=Sum('value'))\n total = adjustment['values'] + ensure_decimal(unmodifieds['values'])\n if numerator is not None and adjustment['numerators'] > numerator:\n raise serializers.ValidationError(\"The disaggregation numerator should not exceed update numerator\")\n if total > value:\n raise serializers.ValidationError(\"The accumulated disaggregations value should not exceed update value\")\n\n def is_valid(self, raise_exception=False):\n # HACK to allow nested posting...\n self._disaggregations_data = self.initial_data.pop('disaggregations', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).is_valid(raise_exception=raise_exception)\n", "path": "akvo/rest/serializers/indicator_period_data.py"}]} | 2,620 | 744 |
gh_patches_debug_28785 | rasdani/github-patches | git_diff | matrix-org__synapse-8676 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Events for newly created rooms are not always sent to appservices
Reproduction flow:
- Have an appservice that autojoins rooms when you invite it.
- Invite the appservice bot to a DM.
- Send a message into the DM
- Notice that the appservice doesn't receive a transaction for the event.
Expected behaviour:
- Appservices should get any and all events for rooms of which they are members.
An example of this not working can be found by looking for the event `$gyPH0FEqlReAQwWGdpPLz-UdpQ5T2YjwRy4Cpa8ZaHQ` in `!BEPhTttjYKNQfBgWvS:matrix.org`.
Additionally, sending new messages into the room after some time will allow those to be bridged, so this smells strongly of a stale cache.
--- END ISSUE ---
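(Editorial note, not part of the quoted issue.) The "stale cache" suspicion matches the `@cached` membership check in `synapse/appservice/__init__.py` shown below: `matches_user_in_member_list` caches its answer per room, and nothing invalidates that entry when the room's membership changes, so a result computed before the appservice joined keeps being served. A minimal, self-contained sketch of that failure mode, with all names hypothetical:

```python
import asyncio

# Hypothetical, simplified model of the failure mode; names and structure
# are illustrative assumptions, not Synapse's actual implementation.
_cache = {}
_room_members = {"!room:example.org": {"@alice:example.org"}}

def is_interested(user_id: str) -> bool:
    # Stand-in for the appservice's user-namespace regex check.
    return user_id.startswith("@asbot")

async def matches_user_in_member_list(room_id: str) -> bool:
    if room_id in _cache:             # cached answer is never invalidated
        return _cache[room_id]
    result = any(is_interested(u) for u in _room_members[room_id])
    _cache[room_id] = result
    return result

async def main():
    # Checked before the appservice bot joins: False gets cached.
    print(await matches_user_in_member_list("!room:example.org"))  # False
    _room_members["!room:example.org"].add("@asbot:example.org")   # bot joins
    # Later events reuse the stale cached False, so no transaction is sent.
    print(await matches_user_in_member_list("!room:example.org"))  # still False

asyncio.run(main())
```

The fix in the golden diff further down ties the cached check to the membership cache via `cache_context` and `on_invalidate`, so the stale entry is dropped when the bot joins.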
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `synapse/appservice/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015, 2016 OpenMarket Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 import logging
16 import re
17 from typing import TYPE_CHECKING, Iterable, List, Match, Optional
18
19 from synapse.api.constants import EventTypes
20 from synapse.events import EventBase
21 from synapse.types import GroupID, JsonDict, UserID, get_domain_from_id
22 from synapse.util.caches.descriptors import cached
23
24 if TYPE_CHECKING:
25 from synapse.appservice.api import ApplicationServiceApi
26 from synapse.storage.databases.main import DataStore
27
28 logger = logging.getLogger(__name__)
29
30
31 class ApplicationServiceState:
32 DOWN = "down"
33 UP = "up"
34
35
36 class ApplicationService:
37 """Defines an application service. This definition is mostly what is
38 provided to the /register AS API.
39
40 Provides methods to check if this service is "interested" in events.
41 """
42
43 NS_USERS = "users"
44 NS_ALIASES = "aliases"
45 NS_ROOMS = "rooms"
46 # The ordering here is important as it is used to map database values (which
47 # are stored as ints representing the position in this list) to namespace
48 # values.
49 NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
50
51 def __init__(
52 self,
53 token,
54 hostname,
55 url=None,
56 namespaces=None,
57 hs_token=None,
58 sender=None,
59 id=None,
60 protocols=None,
61 rate_limited=True,
62 ip_range_whitelist=None,
63 supports_ephemeral=False,
64 ):
65 self.token = token
66 self.url = (
67 url.rstrip("/") if isinstance(url, str) else None
68 ) # url must not end with a slash
69 self.hs_token = hs_token
70 self.sender = sender
71 self.server_name = hostname
72 self.namespaces = self._check_namespaces(namespaces)
73 self.id = id
74 self.ip_range_whitelist = ip_range_whitelist
75 self.supports_ephemeral = supports_ephemeral
76
77 if "|" in self.id:
78 raise Exception("application service ID cannot contain '|' character")
79
80 # .protocols is a publicly visible field
81 if protocols:
82 self.protocols = set(protocols)
83 else:
84 self.protocols = set()
85
86 self.rate_limited = rate_limited
87
88 def _check_namespaces(self, namespaces):
89 # Sanity check that it is of the form:
90 # {
91 # users: [ {regex: "[A-z]+.*", exclusive: true}, ...],
92 # aliases: [ {regex: "[A-z]+.*", exclusive: true}, ...],
93 # rooms: [ {regex: "[A-z]+.*", exclusive: true}, ...],
94 # }
95 if not namespaces:
96 namespaces = {}
97
98 for ns in ApplicationService.NS_LIST:
99 if ns not in namespaces:
100 namespaces[ns] = []
101 continue
102
103 if type(namespaces[ns]) != list:
104 raise ValueError("Bad namespace value for '%s'" % ns)
105 for regex_obj in namespaces[ns]:
106 if not isinstance(regex_obj, dict):
107 raise ValueError("Expected dict regex for ns '%s'" % ns)
108 if not isinstance(regex_obj.get("exclusive"), bool):
109 raise ValueError("Expected bool for 'exclusive' in ns '%s'" % ns)
110 group_id = regex_obj.get("group_id")
111 if group_id:
112 if not isinstance(group_id, str):
113 raise ValueError(
114 "Expected string for 'group_id' in ns '%s'" % ns
115 )
116 try:
117 GroupID.from_string(group_id)
118 except Exception:
119 raise ValueError(
120 "Expected valid group ID for 'group_id' in ns '%s'" % ns
121 )
122
123 if get_domain_from_id(group_id) != self.server_name:
124 raise ValueError(
125 "Expected 'group_id' to be this host in ns '%s'" % ns
126 )
127
128 regex = regex_obj.get("regex")
129 if isinstance(regex, str):
130 regex_obj["regex"] = re.compile(regex) # Pre-compile regex
131 else:
132 raise ValueError("Expected string for 'regex' in ns '%s'" % ns)
133 return namespaces
134
135 def _matches_regex(self, test_string: str, namespace_key: str) -> Optional[Match]:
136 for regex_obj in self.namespaces[namespace_key]:
137 if regex_obj["regex"].match(test_string):
138 return regex_obj
139 return None
140
141 def _is_exclusive(self, ns_key: str, test_string: str) -> bool:
142 regex_obj = self._matches_regex(test_string, ns_key)
143 if regex_obj:
144 return regex_obj["exclusive"]
145 return False
146
147 async def _matches_user(
148 self, event: Optional[EventBase], store: Optional["DataStore"] = None
149 ) -> bool:
150 if not event:
151 return False
152
153 if self.is_interested_in_user(event.sender):
154 return True
155 # also check m.room.member state key
156 if event.type == EventTypes.Member and self.is_interested_in_user(
157 event.state_key
158 ):
159 return True
160
161 if not store:
162 return False
163
164 does_match = await self.matches_user_in_member_list(event.room_id, store)
165 return does_match
166
167 @cached(num_args=1)
168 async def matches_user_in_member_list(
169 self, room_id: str, store: "DataStore"
170 ) -> bool:
171 """Check if this service is interested a room based upon it's membership
172
173 Args:
174 room_id: The room to check.
175 store: The datastore to query.
176
177 Returns:
178 True if this service would like to know about this room.
179 """
180 member_list = await store.get_users_in_room(room_id)
181
182 # check joined member events
183 for user_id in member_list:
184 if self.is_interested_in_user(user_id):
185 return True
186 return False
187
188 def _matches_room_id(self, event: EventBase) -> bool:
189 if hasattr(event, "room_id"):
190 return self.is_interested_in_room(event.room_id)
191 return False
192
193 async def _matches_aliases(
194 self, event: EventBase, store: Optional["DataStore"] = None
195 ) -> bool:
196 if not store or not event:
197 return False
198
199 alias_list = await store.get_aliases_for_room(event.room_id)
200 for alias in alias_list:
201 if self.is_interested_in_alias(alias):
202 return True
203 return False
204
205 async def is_interested(
206 self, event: EventBase, store: Optional["DataStore"] = None
207 ) -> bool:
208 """Check if this service is interested in this event.
209
210 Args:
211 event: The event to check.
212 store: The datastore to query.
213
214 Returns:
215 True if this service would like to know about this event.
216 """
217 # Do cheap checks first
218 if self._matches_room_id(event):
219 return True
220
221 # This will check the namespaces first before
222 # checking the store, so should be run before _matches_aliases
223 if await self._matches_user(event, store):
224 return True
225
226 # This will check the store, so should be run last
227 if await self._matches_aliases(event, store):
228 return True
229
230 return False
231
232 @cached(num_args=1)
233 async def is_interested_in_presence(
234 self, user_id: UserID, store: "DataStore"
235 ) -> bool:
236 """Check if this service is interested a user's presence
237
238 Args:
239 user_id: The user to check.
240 store: The datastore to query.
241
242 Returns:
243 True if this service would like to know about presence for this user.
244 """
245 # Find all the rooms the sender is in
246 if self.is_interested_in_user(user_id.to_string()):
247 return True
248 room_ids = await store.get_rooms_for_user(user_id.to_string())
249
250 # Then find out if the appservice is interested in any of those rooms
251 for room_id in room_ids:
252 if await self.matches_user_in_member_list(room_id, store):
253 return True
254 return False
255
256 def is_interested_in_user(self, user_id: str) -> bool:
257 return (
258 bool(self._matches_regex(user_id, ApplicationService.NS_USERS))
259 or user_id == self.sender
260 )
261
262 def is_interested_in_alias(self, alias: str) -> bool:
263 return bool(self._matches_regex(alias, ApplicationService.NS_ALIASES))
264
265 def is_interested_in_room(self, room_id: str) -> bool:
266 return bool(self._matches_regex(room_id, ApplicationService.NS_ROOMS))
267
268 def is_exclusive_user(self, user_id: str) -> bool:
269 return (
270 self._is_exclusive(ApplicationService.NS_USERS, user_id)
271 or user_id == self.sender
272 )
273
274 def is_interested_in_protocol(self, protocol: str) -> bool:
275 return protocol in self.protocols
276
277 def is_exclusive_alias(self, alias: str) -> bool:
278 return self._is_exclusive(ApplicationService.NS_ALIASES, alias)
279
280 def is_exclusive_room(self, room_id: str) -> bool:
281 return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)
282
283 def get_exclusive_user_regexes(self):
284 """Get the list of regexes used to determine if a user is exclusively
285 registered by the AS
286 """
287 return [
288 regex_obj["regex"]
289 for regex_obj in self.namespaces[ApplicationService.NS_USERS]
290 if regex_obj["exclusive"]
291 ]
292
293 def get_groups_for_user(self, user_id: str) -> Iterable[str]:
294 """Get the groups that this user is associated with by this AS
295
296 Args:
297 user_id: The ID of the user.
298
299 Returns:
300 An iterable that yields group_id strings.
301 """
302 return (
303 regex_obj["group_id"]
304 for regex_obj in self.namespaces[ApplicationService.NS_USERS]
305 if "group_id" in regex_obj and regex_obj["regex"].match(user_id)
306 )
307
308 def is_rate_limited(self) -> bool:
309 return self.rate_limited
310
311 def __str__(self):
312 # copy dictionary and redact token fields so they don't get logged
313 dict_copy = self.__dict__.copy()
314 dict_copy["token"] = "<redacted>"
315 dict_copy["hs_token"] = "<redacted>"
316 return "ApplicationService: %s" % (dict_copy,)
317
318
319 class AppServiceTransaction:
320 """Represents an application service transaction."""
321
322 def __init__(
323 self,
324 service: ApplicationService,
325 id: int,
326 events: List[EventBase],
327 ephemeral: List[JsonDict],
328 ):
329 self.service = service
330 self.id = id
331 self.events = events
332 self.ephemeral = ephemeral
333
334 async def send(self, as_api: "ApplicationServiceApi") -> bool:
335 """Sends this transaction using the provided AS API interface.
336
337 Args:
338 as_api: The API to use to send.
339 Returns:
340 True if the transaction was sent.
341 """
342 return await as_api.push_bulk(
343 service=self.service,
344 events=self.events,
345 ephemeral=self.ephemeral,
346 txn_id=self.id,
347 )
348
349 async def complete(self, store: "DataStore") -> None:
350 """Completes this transaction as successful.
351
352 Marks this transaction ID on the application service and removes the
353 transaction contents from the database.
354
355 Args:
356 store: The database store to operate on.
357 """
358 await store.complete_appservice_txn(service=self.service, txn_id=self.id)
359
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/synapse/appservice/__init__.py b/synapse/appservice/__init__.py
--- a/synapse/appservice/__init__.py
+++ b/synapse/appservice/__init__.py
@@ -19,7 +19,7 @@
from synapse.api.constants import EventTypes
from synapse.events import EventBase
from synapse.types import GroupID, JsonDict, UserID, get_domain_from_id
-from synapse.util.caches.descriptors import cached
+from synapse.util.caches.descriptors import _CacheContext, cached
if TYPE_CHECKING:
from synapse.appservice.api import ApplicationServiceApi
@@ -164,9 +164,9 @@
does_match = await self.matches_user_in_member_list(event.room_id, store)
return does_match
- @cached(num_args=1)
+ @cached(num_args=1, cache_context=True)
async def matches_user_in_member_list(
- self, room_id: str, store: "DataStore"
+ self, room_id: str, store: "DataStore", cache_context: _CacheContext,
) -> bool:
"""Check if this service is interested a room based upon it's membership
@@ -177,7 +177,9 @@
Returns:
True if this service would like to know about this room.
"""
- member_list = await store.get_users_in_room(room_id)
+ member_list = await store.get_users_in_room(
+ room_id, on_invalidate=cache_context.invalidate
+ )
# check joined member events
for user_id in member_list:
| {"golden_diff": "diff --git a/synapse/appservice/__init__.py b/synapse/appservice/__init__.py\n--- a/synapse/appservice/__init__.py\n+++ b/synapse/appservice/__init__.py\n@@ -19,7 +19,7 @@\n from synapse.api.constants import EventTypes\n from synapse.events import EventBase\n from synapse.types import GroupID, JsonDict, UserID, get_domain_from_id\n-from synapse.util.caches.descriptors import cached\n+from synapse.util.caches.descriptors import _CacheContext, cached\n \n if TYPE_CHECKING:\n from synapse.appservice.api import ApplicationServiceApi\n@@ -164,9 +164,9 @@\n does_match = await self.matches_user_in_member_list(event.room_id, store)\n return does_match\n \n- @cached(num_args=1)\n+ @cached(num_args=1, cache_context=True)\n async def matches_user_in_member_list(\n- self, room_id: str, store: \"DataStore\"\n+ self, room_id: str, store: \"DataStore\", cache_context: _CacheContext,\n ) -> bool:\n \"\"\"Check if this service is interested a room based upon it's membership\n \n@@ -177,7 +177,9 @@\n Returns:\n True if this service would like to know about this room.\n \"\"\"\n- member_list = await store.get_users_in_room(room_id)\n+ member_list = await store.get_users_in_room(\n+ room_id, on_invalidate=cache_context.invalidate\n+ )\n \n # check joined member events\n for user_id in member_list:\n", "issue": "Events for newly created rooms are not always sent to appservices\nReproduction flow:\r\n- Have an appservice that autojoins rooms when you invite it.\r\n- Invite the appservice bot to a DM.\r\n- Send a message into the DM\r\n- Notice that the appservice doesn't receive a transaction for the event.\r\n\r\nExpected behaviour:\r\n- Appservices should get any and all events for rooms of which they are members of.\r\n\r\nAn example of this not working can be found by looking for the event `$gyPH0FEqlReAQwWGdpPLz-UdpQ5T2YjwRy4Cpa8ZaHQ` in `!BEPhTttjYKNQfBgWvS:matrix.org`.\r\n\r\nAdditionally, sending new messages into the room after some time will allow those to be bridged so this smells strongly of stale cache.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015, 2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\nimport re\nfrom typing import TYPE_CHECKING, Iterable, List, Match, Optional\n\nfrom synapse.api.constants import EventTypes\nfrom synapse.events import EventBase\nfrom synapse.types import GroupID, JsonDict, UserID, get_domain_from_id\nfrom synapse.util.caches.descriptors import cached\n\nif TYPE_CHECKING:\n from synapse.appservice.api import ApplicationServiceApi\n from synapse.storage.databases.main import DataStore\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApplicationServiceState:\n DOWN = \"down\"\n UP = \"up\"\n\n\nclass ApplicationService:\n \"\"\"Defines an application service. 
This definition is mostly what is\n provided to the /register AS API.\n\n Provides methods to check if this service is \"interested\" in events.\n \"\"\"\n\n NS_USERS = \"users\"\n NS_ALIASES = \"aliases\"\n NS_ROOMS = \"rooms\"\n # The ordering here is important as it is used to map database values (which\n # are stored as ints representing the position in this list) to namespace\n # values.\n NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]\n\n def __init__(\n self,\n token,\n hostname,\n url=None,\n namespaces=None,\n hs_token=None,\n sender=None,\n id=None,\n protocols=None,\n rate_limited=True,\n ip_range_whitelist=None,\n supports_ephemeral=False,\n ):\n self.token = token\n self.url = (\n url.rstrip(\"/\") if isinstance(url, str) else None\n ) # url must not end with a slash\n self.hs_token = hs_token\n self.sender = sender\n self.server_name = hostname\n self.namespaces = self._check_namespaces(namespaces)\n self.id = id\n self.ip_range_whitelist = ip_range_whitelist\n self.supports_ephemeral = supports_ephemeral\n\n if \"|\" in self.id:\n raise Exception(\"application service ID cannot contain '|' character\")\n\n # .protocols is a publicly visible field\n if protocols:\n self.protocols = set(protocols)\n else:\n self.protocols = set()\n\n self.rate_limited = rate_limited\n\n def _check_namespaces(self, namespaces):\n # Sanity check that it is of the form:\n # {\n # users: [ {regex: \"[A-z]+.*\", exclusive: true}, ...],\n # aliases: [ {regex: \"[A-z]+.*\", exclusive: true}, ...],\n # rooms: [ {regex: \"[A-z]+.*\", exclusive: true}, ...],\n # }\n if not namespaces:\n namespaces = {}\n\n for ns in ApplicationService.NS_LIST:\n if ns not in namespaces:\n namespaces[ns] = []\n continue\n\n if type(namespaces[ns]) != list:\n raise ValueError(\"Bad namespace value for '%s'\" % ns)\n for regex_obj in namespaces[ns]:\n if not isinstance(regex_obj, dict):\n raise ValueError(\"Expected dict regex for ns '%s'\" % ns)\n if not isinstance(regex_obj.get(\"exclusive\"), bool):\n raise ValueError(\"Expected bool for 'exclusive' in ns '%s'\" % ns)\n group_id = regex_obj.get(\"group_id\")\n if group_id:\n if not isinstance(group_id, str):\n raise ValueError(\n \"Expected string for 'group_id' in ns '%s'\" % ns\n )\n try:\n GroupID.from_string(group_id)\n except Exception:\n raise ValueError(\n \"Expected valid group ID for 'group_id' in ns '%s'\" % ns\n )\n\n if get_domain_from_id(group_id) != self.server_name:\n raise ValueError(\n \"Expected 'group_id' to be this host in ns '%s'\" % ns\n )\n\n regex = regex_obj.get(\"regex\")\n if isinstance(regex, str):\n regex_obj[\"regex\"] = re.compile(regex) # Pre-compile regex\n else:\n raise ValueError(\"Expected string for 'regex' in ns '%s'\" % ns)\n return namespaces\n\n def _matches_regex(self, test_string: str, namespace_key: str) -> Optional[Match]:\n for regex_obj in self.namespaces[namespace_key]:\n if regex_obj[\"regex\"].match(test_string):\n return regex_obj\n return None\n\n def _is_exclusive(self, ns_key: str, test_string: str) -> bool:\n regex_obj = self._matches_regex(test_string, ns_key)\n if regex_obj:\n return regex_obj[\"exclusive\"]\n return False\n\n async def _matches_user(\n self, event: Optional[EventBase], store: Optional[\"DataStore\"] = None\n ) -> bool:\n if not event:\n return False\n\n if self.is_interested_in_user(event.sender):\n return True\n # also check m.room.member state key\n if event.type == EventTypes.Member and self.is_interested_in_user(\n event.state_key\n ):\n return True\n\n if not store:\n return False\n\n does_match 
= await self.matches_user_in_member_list(event.room_id, store)\n return does_match\n\n @cached(num_args=1)\n async def matches_user_in_member_list(\n self, room_id: str, store: \"DataStore\"\n ) -> bool:\n \"\"\"Check if this service is interested a room based upon it's membership\n\n Args:\n room_id: The room to check.\n store: The datastore to query.\n\n Returns:\n True if this service would like to know about this room.\n \"\"\"\n member_list = await store.get_users_in_room(room_id)\n\n # check joined member events\n for user_id in member_list:\n if self.is_interested_in_user(user_id):\n return True\n return False\n\n def _matches_room_id(self, event: EventBase) -> bool:\n if hasattr(event, \"room_id\"):\n return self.is_interested_in_room(event.room_id)\n return False\n\n async def _matches_aliases(\n self, event: EventBase, store: Optional[\"DataStore\"] = None\n ) -> bool:\n if not store or not event:\n return False\n\n alias_list = await store.get_aliases_for_room(event.room_id)\n for alias in alias_list:\n if self.is_interested_in_alias(alias):\n return True\n return False\n\n async def is_interested(\n self, event: EventBase, store: Optional[\"DataStore\"] = None\n ) -> bool:\n \"\"\"Check if this service is interested in this event.\n\n Args:\n event: The event to check.\n store: The datastore to query.\n\n Returns:\n True if this service would like to know about this event.\n \"\"\"\n # Do cheap checks first\n if self._matches_room_id(event):\n return True\n\n # This will check the namespaces first before\n # checking the store, so should be run before _matches_aliases\n if await self._matches_user(event, store):\n return True\n\n # This will check the store, so should be run last\n if await self._matches_aliases(event, store):\n return True\n\n return False\n\n @cached(num_args=1)\n async def is_interested_in_presence(\n self, user_id: UserID, store: \"DataStore\"\n ) -> bool:\n \"\"\"Check if this service is interested a user's presence\n\n Args:\n user_id: The user to check.\n store: The datastore to query.\n\n Returns:\n True if this service would like to know about presence for this user.\n \"\"\"\n # Find all the rooms the sender is in\n if self.is_interested_in_user(user_id.to_string()):\n return True\n room_ids = await store.get_rooms_for_user(user_id.to_string())\n\n # Then find out if the appservice is interested in any of those rooms\n for room_id in room_ids:\n if await self.matches_user_in_member_list(room_id, store):\n return True\n return False\n\n def is_interested_in_user(self, user_id: str) -> bool:\n return (\n bool(self._matches_regex(user_id, ApplicationService.NS_USERS))\n or user_id == self.sender\n )\n\n def is_interested_in_alias(self, alias: str) -> bool:\n return bool(self._matches_regex(alias, ApplicationService.NS_ALIASES))\n\n def is_interested_in_room(self, room_id: str) -> bool:\n return bool(self._matches_regex(room_id, ApplicationService.NS_ROOMS))\n\n def is_exclusive_user(self, user_id: str) -> bool:\n return (\n self._is_exclusive(ApplicationService.NS_USERS, user_id)\n or user_id == self.sender\n )\n\n def is_interested_in_protocol(self, protocol: str) -> bool:\n return protocol in self.protocols\n\n def is_exclusive_alias(self, alias: str) -> bool:\n return self._is_exclusive(ApplicationService.NS_ALIASES, alias)\n\n def is_exclusive_room(self, room_id: str) -> bool:\n return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)\n\n def get_exclusive_user_regexes(self):\n \"\"\"Get the list of regexes used to determine if a user is 
exclusively\n registered by the AS\n \"\"\"\n return [\n regex_obj[\"regex\"]\n for regex_obj in self.namespaces[ApplicationService.NS_USERS]\n if regex_obj[\"exclusive\"]\n ]\n\n def get_groups_for_user(self, user_id: str) -> Iterable[str]:\n \"\"\"Get the groups that this user is associated with by this AS\n\n Args:\n user_id: The ID of the user.\n\n Returns:\n An iterable that yields group_id strings.\n \"\"\"\n return (\n regex_obj[\"group_id\"]\n for regex_obj in self.namespaces[ApplicationService.NS_USERS]\n if \"group_id\" in regex_obj and regex_obj[\"regex\"].match(user_id)\n )\n\n def is_rate_limited(self) -> bool:\n return self.rate_limited\n\n def __str__(self):\n # copy dictionary and redact token fields so they don't get logged\n dict_copy = self.__dict__.copy()\n dict_copy[\"token\"] = \"<redacted>\"\n dict_copy[\"hs_token\"] = \"<redacted>\"\n return \"ApplicationService: %s\" % (dict_copy,)\n\n\nclass AppServiceTransaction:\n \"\"\"Represents an application service transaction.\"\"\"\n\n def __init__(\n self,\n service: ApplicationService,\n id: int,\n events: List[EventBase],\n ephemeral: List[JsonDict],\n ):\n self.service = service\n self.id = id\n self.events = events\n self.ephemeral = ephemeral\n\n async def send(self, as_api: \"ApplicationServiceApi\") -> bool:\n \"\"\"Sends this transaction using the provided AS API interface.\n\n Args:\n as_api: The API to use to send.\n Returns:\n True if the transaction was sent.\n \"\"\"\n return await as_api.push_bulk(\n service=self.service,\n events=self.events,\n ephemeral=self.ephemeral,\n txn_id=self.id,\n )\n\n async def complete(self, store: \"DataStore\") -> None:\n \"\"\"Completes this transaction as successful.\n\n Marks this transaction ID on the application service and removes the\n transaction contents from the database.\n\n Args:\n store: The database store to operate on.\n \"\"\"\n await store.complete_appservice_txn(service=self.service, txn_id=self.id)\n", "path": "synapse/appservice/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015, 2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\nimport re\nfrom typing import TYPE_CHECKING, Iterable, List, Match, Optional\n\nfrom synapse.api.constants import EventTypes\nfrom synapse.events import EventBase\nfrom synapse.types import GroupID, JsonDict, UserID, get_domain_from_id\nfrom synapse.util.caches.descriptors import _CacheContext, cached\n\nif TYPE_CHECKING:\n from synapse.appservice.api import ApplicationServiceApi\n from synapse.storage.databases.main import DataStore\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApplicationServiceState:\n DOWN = \"down\"\n UP = \"up\"\n\n\nclass ApplicationService:\n \"\"\"Defines an application service. 
This definition is mostly what is\n provided to the /register AS API.\n\n Provides methods to check if this service is \"interested\" in events.\n \"\"\"\n\n NS_USERS = \"users\"\n NS_ALIASES = \"aliases\"\n NS_ROOMS = \"rooms\"\n # The ordering here is important as it is used to map database values (which\n # are stored as ints representing the position in this list) to namespace\n # values.\n NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]\n\n def __init__(\n self,\n token,\n hostname,\n url=None,\n namespaces=None,\n hs_token=None,\n sender=None,\n id=None,\n protocols=None,\n rate_limited=True,\n ip_range_whitelist=None,\n supports_ephemeral=False,\n ):\n self.token = token\n self.url = (\n url.rstrip(\"/\") if isinstance(url, str) else None\n ) # url must not end with a slash\n self.hs_token = hs_token\n self.sender = sender\n self.server_name = hostname\n self.namespaces = self._check_namespaces(namespaces)\n self.id = id\n self.ip_range_whitelist = ip_range_whitelist\n self.supports_ephemeral = supports_ephemeral\n\n if \"|\" in self.id:\n raise Exception(\"application service ID cannot contain '|' character\")\n\n # .protocols is a publicly visible field\n if protocols:\n self.protocols = set(protocols)\n else:\n self.protocols = set()\n\n self.rate_limited = rate_limited\n\n def _check_namespaces(self, namespaces):\n # Sanity check that it is of the form:\n # {\n # users: [ {regex: \"[A-z]+.*\", exclusive: true}, ...],\n # aliases: [ {regex: \"[A-z]+.*\", exclusive: true}, ...],\n # rooms: [ {regex: \"[A-z]+.*\", exclusive: true}, ...],\n # }\n if not namespaces:\n namespaces = {}\n\n for ns in ApplicationService.NS_LIST:\n if ns not in namespaces:\n namespaces[ns] = []\n continue\n\n if type(namespaces[ns]) != list:\n raise ValueError(\"Bad namespace value for '%s'\" % ns)\n for regex_obj in namespaces[ns]:\n if not isinstance(regex_obj, dict):\n raise ValueError(\"Expected dict regex for ns '%s'\" % ns)\n if not isinstance(regex_obj.get(\"exclusive\"), bool):\n raise ValueError(\"Expected bool for 'exclusive' in ns '%s'\" % ns)\n group_id = regex_obj.get(\"group_id\")\n if group_id:\n if not isinstance(group_id, str):\n raise ValueError(\n \"Expected string for 'group_id' in ns '%s'\" % ns\n )\n try:\n GroupID.from_string(group_id)\n except Exception:\n raise ValueError(\n \"Expected valid group ID for 'group_id' in ns '%s'\" % ns\n )\n\n if get_domain_from_id(group_id) != self.server_name:\n raise ValueError(\n \"Expected 'group_id' to be this host in ns '%s'\" % ns\n )\n\n regex = regex_obj.get(\"regex\")\n if isinstance(regex, str):\n regex_obj[\"regex\"] = re.compile(regex) # Pre-compile regex\n else:\n raise ValueError(\"Expected string for 'regex' in ns '%s'\" % ns)\n return namespaces\n\n def _matches_regex(self, test_string: str, namespace_key: str) -> Optional[Match]:\n for regex_obj in self.namespaces[namespace_key]:\n if regex_obj[\"regex\"].match(test_string):\n return regex_obj\n return None\n\n def _is_exclusive(self, ns_key: str, test_string: str) -> bool:\n regex_obj = self._matches_regex(test_string, ns_key)\n if regex_obj:\n return regex_obj[\"exclusive\"]\n return False\n\n async def _matches_user(\n self, event: Optional[EventBase], store: Optional[\"DataStore\"] = None\n ) -> bool:\n if not event:\n return False\n\n if self.is_interested_in_user(event.sender):\n return True\n # also check m.room.member state key\n if event.type == EventTypes.Member and self.is_interested_in_user(\n event.state_key\n ):\n return True\n\n if not store:\n return False\n\n does_match 
= await self.matches_user_in_member_list(event.room_id, store)\n return does_match\n\n @cached(num_args=1, cache_context=True)\n async def matches_user_in_member_list(\n self, room_id: str, store: \"DataStore\", cache_context: _CacheContext,\n ) -> bool:\n \"\"\"Check if this service is interested a room based upon it's membership\n\n Args:\n room_id: The room to check.\n store: The datastore to query.\n\n Returns:\n True if this service would like to know about this room.\n \"\"\"\n member_list = await store.get_users_in_room(\n room_id, on_invalidate=cache_context.invalidate\n )\n\n # check joined member events\n for user_id in member_list:\n if self.is_interested_in_user(user_id):\n return True\n return False\n\n def _matches_room_id(self, event: EventBase) -> bool:\n if hasattr(event, \"room_id\"):\n return self.is_interested_in_room(event.room_id)\n return False\n\n async def _matches_aliases(\n self, event: EventBase, store: Optional[\"DataStore\"] = None\n ) -> bool:\n if not store or not event:\n return False\n\n alias_list = await store.get_aliases_for_room(event.room_id)\n for alias in alias_list:\n if self.is_interested_in_alias(alias):\n return True\n return False\n\n async def is_interested(\n self, event: EventBase, store: Optional[\"DataStore\"] = None\n ) -> bool:\n \"\"\"Check if this service is interested in this event.\n\n Args:\n event: The event to check.\n store: The datastore to query.\n\n Returns:\n True if this service would like to know about this event.\n \"\"\"\n # Do cheap checks first\n if self._matches_room_id(event):\n return True\n\n # This will check the namespaces first before\n # checking the store, so should be run before _matches_aliases\n if await self._matches_user(event, store):\n return True\n\n # This will check the store, so should be run last\n if await self._matches_aliases(event, store):\n return True\n\n return False\n\n @cached(num_args=1)\n async def is_interested_in_presence(\n self, user_id: UserID, store: \"DataStore\"\n ) -> bool:\n \"\"\"Check if this service is interested a user's presence\n\n Args:\n user_id: The user to check.\n store: The datastore to query.\n\n Returns:\n True if this service would like to know about presence for this user.\n \"\"\"\n # Find all the rooms the sender is in\n if self.is_interested_in_user(user_id.to_string()):\n return True\n room_ids = await store.get_rooms_for_user(user_id.to_string())\n\n # Then find out if the appservice is interested in any of those rooms\n for room_id in room_ids:\n if await self.matches_user_in_member_list(room_id, store):\n return True\n return False\n\n def is_interested_in_user(self, user_id: str) -> bool:\n return (\n bool(self._matches_regex(user_id, ApplicationService.NS_USERS))\n or user_id == self.sender\n )\n\n def is_interested_in_alias(self, alias: str) -> bool:\n return bool(self._matches_regex(alias, ApplicationService.NS_ALIASES))\n\n def is_interested_in_room(self, room_id: str) -> bool:\n return bool(self._matches_regex(room_id, ApplicationService.NS_ROOMS))\n\n def is_exclusive_user(self, user_id: str) -> bool:\n return (\n self._is_exclusive(ApplicationService.NS_USERS, user_id)\n or user_id == self.sender\n )\n\n def is_interested_in_protocol(self, protocol: str) -> bool:\n return protocol in self.protocols\n\n def is_exclusive_alias(self, alias: str) -> bool:\n return self._is_exclusive(ApplicationService.NS_ALIASES, alias)\n\n def is_exclusive_room(self, room_id: str) -> bool:\n return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)\n\n def 
get_exclusive_user_regexes(self):\n \"\"\"Get the list of regexes used to determine if a user is exclusively\n registered by the AS\n \"\"\"\n return [\n regex_obj[\"regex\"]\n for regex_obj in self.namespaces[ApplicationService.NS_USERS]\n if regex_obj[\"exclusive\"]\n ]\n\n def get_groups_for_user(self, user_id: str) -> Iterable[str]:\n \"\"\"Get the groups that this user is associated with by this AS\n\n Args:\n user_id: The ID of the user.\n\n Returns:\n An iterable that yields group_id strings.\n \"\"\"\n return (\n regex_obj[\"group_id\"]\n for regex_obj in self.namespaces[ApplicationService.NS_USERS]\n if \"group_id\" in regex_obj and regex_obj[\"regex\"].match(user_id)\n )\n\n def is_rate_limited(self) -> bool:\n return self.rate_limited\n\n def __str__(self):\n # copy dictionary and redact token fields so they don't get logged\n dict_copy = self.__dict__.copy()\n dict_copy[\"token\"] = \"<redacted>\"\n dict_copy[\"hs_token\"] = \"<redacted>\"\n return \"ApplicationService: %s\" % (dict_copy,)\n\n\nclass AppServiceTransaction:\n \"\"\"Represents an application service transaction.\"\"\"\n\n def __init__(\n self,\n service: ApplicationService,\n id: int,\n events: List[EventBase],\n ephemeral: List[JsonDict],\n ):\n self.service = service\n self.id = id\n self.events = events\n self.ephemeral = ephemeral\n\n async def send(self, as_api: \"ApplicationServiceApi\") -> bool:\n \"\"\"Sends this transaction using the provided AS API interface.\n\n Args:\n as_api: The API to use to send.\n Returns:\n True if the transaction was sent.\n \"\"\"\n return await as_api.push_bulk(\n service=self.service,\n events=self.events,\n ephemeral=self.ephemeral,\n txn_id=self.id,\n )\n\n async def complete(self, store: \"DataStore\") -> None:\n \"\"\"Completes this transaction as successful.\n\n Marks this transaction ID on the application service and removes the\n transaction contents from the database.\n\n Args:\n store: The database store to operate on.\n \"\"\"\n await store.complete_appservice_txn(service=self.service, txn_id=self.id)\n", "path": "synapse/appservice/__init__.py"}]} | 4,080 | 354 |
gh_patches_debug_16628 | rasdani/github-patches | git_diff | jazzband__pip-tools-595 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
README broken on PyPI (must be reStructuredText)
The [package description](https://pypi.python.org/pypi/pip-tools/) on PyPI is unreadable since PyPI expects the README in [reStructuredText](http://www.sphinx-doc.org/en/stable/rest.html) file format and we use Markdown.
Solution A: Convert to reST
---------------------
1. Rename the current `README.md` to `README.rst`
1. Replace the markdown of the badges and the code samples ([example](https://github.com/Organice/djangocms-maps/blob/master/README.rst))
1. Add a `long_description=read_file('README.rst')` line to `setup.py` ([example](https://github.com/Organice/djangocms-maps/blob/master/setup.py#L50))
Solution B: Process before Upload
-------------------
1. Integrate [pypandoc](https://pypi.python.org/pypi/pypandoc) in `setup.py` ([example](https://github.com/jrief/djangocms-cascade/blob/master/setup.py#L7-L14))
1. Add a `long_description=convert('README.md', 'rst')` line to `setup.py` ([example](https://github.com/jrief/djangocms-cascade/blob/master/setup.py#L49))
------------
Both solutions above will render a nicely formatted, HTML-styled package description on PyPI.
Quality Assurance
--------------
Optionally, you may check your README with [checkdocs](https://github.com/Organice/djangocms-maps/blob/master/tox.ini#L13-L19) before uploading the package to PyPI, because sometimes the reST-to-HTML conversion that PyPI uses fails -- and renders a still hard-to-read, broken, unformatted package description.
--- END ISSUE ---
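(Editorial note, not part of the quoted issue.) For Solution B, a minimal sketch of how the conversion is usually wired into `setup.py`; `pypandoc.convert_file` is the current spelling of the API linked above, the fallback branch is an assumption, and pandoc itself must be installed for the conversion to succeed:

```python
# Solution B sketch: convert README.md to reST when setup.py runs.
try:
    import pypandoc
    long_description = pypandoc.convert_file("README.md", "rst")
except (ImportError, OSError, RuntimeError):
    # Fall back to the raw Markdown if pypandoc or pandoc is unavailable.
    with open("README.md") as f:
        long_description = f.read()

# ...then pass long_description=long_description to setup().
```

The golden diff below takes Solution A instead (a committed `README.rst` read by a small `read_file()` helper); in either case, `twine check dist/*` is a handy pre-upload check that the long description renders.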
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 """
2 pip-tools keeps your pinned dependencies fresh.
3 """
4 from setuptools import find_packages, setup
5
6 setup(
7 name='pip-tools',
8 use_scm_version=True,
9 url='https://github.com/jazzband/pip-tools/',
10 license='BSD',
11 author='Vincent Driessen',
12 author_email='[email protected]',
13 description=__doc__,
14 packages=find_packages(exclude=['tests']),
15 setup_requires=['setuptools_scm'],
16 install_requires=[
17 'click>=6',
18 'first',
19 'six',
20 'setuptools'
21 ],
22 extras_require={
23 ':python_version < "3.0"': ['contextlib2']
24 },
25 zip_safe=False,
26 entry_points={
27 'console_scripts': [
28 'pip-compile = piptools.scripts.compile:cli',
29 'pip-sync = piptools.scripts.sync:cli',
30 ],
31 },
32 platforms='any',
33 classifiers=[
34 'Development Status :: 5 - Production/Stable',
35 'Intended Audience :: Developers',
36 'Intended Audience :: System Administrators',
37 'License :: OSI Approved :: BSD License',
38 'Operating System :: OS Independent',
39 'Programming Language :: Python',
40 'Programming Language :: Python :: 2',
41 'Programming Language :: Python :: 2.7',
42 'Programming Language :: Python :: 3',
43 'Programming Language :: Python :: 3.4',
44 'Programming Language :: Python :: 3.5',
45 'Programming Language :: Python :: 3.6',
46 'Topic :: System :: Systems Administration',
47 ]
48 )
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,8 +1,14 @@
"""
pip-tools keeps your pinned dependencies fresh.
"""
+from os.path import abspath, dirname, join
from setuptools import find_packages, setup
+def read_file(filename):
+ """Read the contents of a file located relative to setup.py"""
+ with open(join(abspath(dirname(__file__)), filename)) as thefile:
+ return thefile.read()
+
setup(
name='pip-tools',
use_scm_version=True,
@@ -11,6 +17,7 @@
author='Vincent Driessen',
author_email='[email protected]',
description=__doc__,
+ long_description=read_file('README.rst'),
packages=find_packages(exclude=['tests']),
setup_requires=['setuptools_scm'],
install_requires=[
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,8 +1,14 @@\n \"\"\"\n pip-tools keeps your pinned dependencies fresh.\n \"\"\"\n+from os.path import abspath, dirname, join\n from setuptools import find_packages, setup\n \n+def read_file(filename):\n+ \"\"\"Read the contents of a file located relative to setup.py\"\"\"\n+ with open(join(abspath(dirname(__file__)), filename)) as thefile:\n+ return thefile.read()\n+\n setup(\n name='pip-tools',\n use_scm_version=True,\n@@ -11,6 +17,7 @@\n author='Vincent Driessen',\n author_email='[email protected]',\n description=__doc__,\n+ long_description=read_file('README.rst'),\n packages=find_packages(exclude=['tests']),\n setup_requires=['setuptools_scm'],\n install_requires=[\n", "issue": "README broken on PyPI (must be reStructuredText)\nThe [package description](https://pypi.python.org/pypi/pip-tools/) on PyPI is unreadable since PyPI expects the README in [reStructuredText](http://www.sphinx-doc.org/en/stable/rest.html) file format and we use MarkDown.\r\n\r\nSolution A: Convert to reST\r\n---------------------\r\n\r\n1. Rename the current `README.md` to `README.rst`\r\n1. Replace the markdown of the badges and the code samples ([example](https://github.com/Organice/djangocms-maps/blob/master/README.rst))\r\n1. Add a `long_description=read_file('README.rst')` line to `setup.py` ([example](https://github.com/Organice/djangocms-maps/blob/master/setup.py#L50))\r\n\r\nSolution B: Process before Upload\r\n-------------------\r\n\r\n1. Integrate [pypandoc](https://pypi.python.org/pypi/pypandoc) in `setup.py` ([example](https://github.com/jrief/djangocms-cascade/blob/master/setup.py#L7-L14))\r\n1. Add a `long_description=convert('README.md', 'rst')` line to `setup.py` ([example](https://github.com/jrief/djangocms-cascade/blob/master/setup.py#L49))\r\n\r\n------------\r\n\r\nBoth solutions above will render a nicely formatted, HTML-styled package description on PyPI.\r\n\r\nQuality Assurance\r\n--------------\r\n\r\nOptionally, you may check your README with [checkdocs](https://github.com/Organice/djangocms-maps/blob/master/tox.ini#L13-L19) before uploading the package to PyPI, because sometimes the reST-to-HTML conversion that PyPI uses fails -- and renders a still hard-to-read, broken, unformatted package description.\n", "before_files": [{"content": "\"\"\"\npip-tools keeps your pinned dependencies fresh.\n\"\"\"\nfrom setuptools import find_packages, setup\n\nsetup(\n name='pip-tools',\n use_scm_version=True,\n url='https://github.com/jazzband/pip-tools/',\n license='BSD',\n author='Vincent Driessen',\n author_email='[email protected]',\n description=__doc__,\n packages=find_packages(exclude=['tests']),\n setup_requires=['setuptools_scm'],\n install_requires=[\n 'click>=6',\n 'first',\n 'six',\n 'setuptools'\n ],\n extras_require={\n ':python_version < \"3.0\"': ['contextlib2']\n },\n zip_safe=False,\n entry_points={\n 'console_scripts': [\n 'pip-compile = piptools.scripts.compile:cli',\n 'pip-sync = piptools.scripts.sync:cli',\n ],\n },\n platforms='any',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language 
:: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: System :: Systems Administration',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"\npip-tools keeps your pinned dependencies fresh.\n\"\"\"\nfrom os.path import abspath, dirname, join\nfrom setuptools import find_packages, setup\n\ndef read_file(filename):\n \"\"\"Read the contents of a file located relative to setup.py\"\"\"\n with open(join(abspath(dirname(__file__)), filename)) as thefile:\n return thefile.read()\n\nsetup(\n name='pip-tools',\n use_scm_version=True,\n url='https://github.com/jazzband/pip-tools/',\n license='BSD',\n author='Vincent Driessen',\n author_email='[email protected]',\n description=__doc__,\n long_description=read_file('README.rst'),\n packages=find_packages(exclude=['tests']),\n setup_requires=['setuptools_scm'],\n install_requires=[\n 'click>=6',\n 'first',\n 'six',\n 'setuptools'\n ],\n extras_require={\n ':python_version < \"3.0\"': ['contextlib2']\n },\n zip_safe=False,\n entry_points={\n 'console_scripts': [\n 'pip-compile = piptools.scripts.compile:cli',\n 'pip-sync = piptools.scripts.sync:cli',\n ],\n },\n platforms='any',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: System :: Systems Administration',\n ]\n)\n", "path": "setup.py"}]} | 1,066 | 192 |
gh_patches_debug_21303 | rasdani/github-patches | git_diff | nltk__nltk-2819 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WordNetLemmatizer in nltk.stem module
What's the parameter of WordNetLemmatizer.lemmatize() in nltk.stem module?
Turning to the documentation, what are the candidate values of the parameter **'pos'**?

The default value is 'Noun'. But when using the function pos_tag() to get the pos of the word, the value appears to come from several options.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nltk/stem/wordnet.py`
Content:
```
1 # Natural Language Toolkit: WordNet stemmer interface
2 #
3 # Copyright (C) 2001-2021 NLTK Project
4 # Author: Steven Bird <[email protected]>
5 # Edward Loper <[email protected]>
6 # URL: <http://nltk.org/>
7 # For license information, see LICENSE.TXT
8
9 from nltk.corpus import wordnet
10 from nltk.corpus.reader.wordnet import NOUN
11
12
13 class WordNetLemmatizer:
14 """
15 WordNet Lemmatizer
16
17 Lemmatize using WordNet's built-in morphy function.
18 Returns the input word unchanged if it cannot be found in WordNet.
19
20 >>> from nltk.stem import WordNetLemmatizer
21 >>> wnl = WordNetLemmatizer()
22 >>> print(wnl.lemmatize('dogs'))
23 dog
24 >>> print(wnl.lemmatize('churches'))
25 church
26 >>> print(wnl.lemmatize('aardwolves'))
27 aardwolf
28 >>> print(wnl.lemmatize('abaci'))
29 abacus
30 >>> print(wnl.lemmatize('hardrock'))
31 hardrock
32 """
33
34 def __init__(self):
35 pass
36
37 def lemmatize(self, word, pos=NOUN):
38 lemmas = wordnet._morphy(word, pos)
39 return min(lemmas, key=len) if lemmas else word
40
41 def __repr__(self):
42 return "<WordNetLemmatizer>"
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nltk/stem/wordnet.py b/nltk/stem/wordnet.py
--- a/nltk/stem/wordnet.py
+++ b/nltk/stem/wordnet.py
@@ -6,8 +6,7 @@
# URL: <http://nltk.org/>
# For license information, see LICENSE.TXT
-from nltk.corpus import wordnet
-from nltk.corpus.reader.wordnet import NOUN
+from nltk.corpus import wordnet as wn
class WordNetLemmatizer:
@@ -31,11 +30,19 @@
hardrock
"""
- def __init__(self):
- pass
-
- def lemmatize(self, word, pos=NOUN):
- lemmas = wordnet._morphy(word, pos)
+ def lemmatize(self, word: str, pos: str = wn.NOUN) -> str:
+ """Lemmatize `word` using WordNet's built-in morphy function.
+ Returns the input word unchanged if it cannot be found in WordNet.
+
+ :param word: The input word to lemmatize.
+ :type word: str
+ :param pos: The Part Of Speech tag. Valid options are `"n"` for nouns,
+ `"v"` for verbs, `"a"` for adjectives, `"r"` for adverbs and `"s"`
+ for satellite adjectives.
+ :param pos: str
+ :return: The lemma of `word`, for the given `pos`.
+ """
+ lemmas = wn._morphy(word, pos)
return min(lemmas, key=len) if lemmas else word
def __repr__(self):
| {"golden_diff": "diff --git a/nltk/stem/wordnet.py b/nltk/stem/wordnet.py\n--- a/nltk/stem/wordnet.py\n+++ b/nltk/stem/wordnet.py\n@@ -6,8 +6,7 @@\n # URL: <http://nltk.org/>\n # For license information, see LICENSE.TXT\n \n-from nltk.corpus import wordnet\n-from nltk.corpus.reader.wordnet import NOUN\n+from nltk.corpus import wordnet as wn\n \n \n class WordNetLemmatizer:\n@@ -31,11 +30,19 @@\n hardrock\n \"\"\"\n \n- def __init__(self):\n- pass\n-\n- def lemmatize(self, word, pos=NOUN):\n- lemmas = wordnet._morphy(word, pos)\n+ def lemmatize(self, word: str, pos: str = wn.NOUN) -> str:\n+ \"\"\"Lemmatize `word` using WordNet's built-in morphy function.\n+ Returns the input word unchanged if it cannot be found in WordNet.\n+\n+ :param word: The input word to lemmatize.\n+ :type word: str\n+ :param pos: The Part Of Speech tag. Valid options are `\"n\"` for nouns,\n+ `\"v\"` for verbs, `\"a\"` for adjectives, `\"r\"` for adverbs and `\"s\"`\n+ for satellite adjectives.\n+ :param pos: str\n+ :return: The lemma of `word`, for the given `pos`.\n+ \"\"\"\n+ lemmas = wn._morphy(word, pos)\n return min(lemmas, key=len) if lemmas else word\n \n def __repr__(self):\n", "issue": "WordNetLemmatizer in nltk.stem module\nWhat's the parameter of WordNetLemmatizer.lemmatize() in nltk.stem module?\r\nTurn to the document, what are the candidate value of the parameter **'pos'**?\r\n\r\nThe default value is 'Noun'. But use the function pos_tag() to get the pos of the word, the value appears to come from several options.\n", "before_files": [{"content": "# Natural Language Toolkit: WordNet stemmer interface\n#\n# Copyright (C) 2001-2021 NLTK Project\n# Author: Steven Bird <[email protected]>\n# Edward Loper <[email protected]>\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\nfrom nltk.corpus import wordnet\nfrom nltk.corpus.reader.wordnet import NOUN\n\n\nclass WordNetLemmatizer:\n \"\"\"\n WordNet Lemmatizer\n\n Lemmatize using WordNet's built-in morphy function.\n Returns the input word unchanged if it cannot be found in WordNet.\n\n >>> from nltk.stem import WordNetLemmatizer\n >>> wnl = WordNetLemmatizer()\n >>> print(wnl.lemmatize('dogs'))\n dog\n >>> print(wnl.lemmatize('churches'))\n church\n >>> print(wnl.lemmatize('aardwolves'))\n aardwolf\n >>> print(wnl.lemmatize('abaci'))\n abacus\n >>> print(wnl.lemmatize('hardrock'))\n hardrock\n \"\"\"\n\n def __init__(self):\n pass\n\n def lemmatize(self, word, pos=NOUN):\n lemmas = wordnet._morphy(word, pos)\n return min(lemmas, key=len) if lemmas else word\n\n def __repr__(self):\n return \"<WordNetLemmatizer>\"\n", "path": "nltk/stem/wordnet.py"}], "after_files": [{"content": "# Natural Language Toolkit: WordNet stemmer interface\n#\n# Copyright (C) 2001-2021 NLTK Project\n# Author: Steven Bird <[email protected]>\n# Edward Loper <[email protected]>\n# URL: <http://nltk.org/>\n# For license information, see LICENSE.TXT\n\nfrom nltk.corpus import wordnet as wn\n\n\nclass WordNetLemmatizer:\n \"\"\"\n WordNet Lemmatizer\n\n Lemmatize using WordNet's built-in morphy function.\n Returns the input word unchanged if it cannot be found in WordNet.\n\n >>> from nltk.stem import WordNetLemmatizer\n >>> wnl = WordNetLemmatizer()\n >>> print(wnl.lemmatize('dogs'))\n dog\n >>> print(wnl.lemmatize('churches'))\n church\n >>> print(wnl.lemmatize('aardwolves'))\n aardwolf\n >>> print(wnl.lemmatize('abaci'))\n abacus\n >>> print(wnl.lemmatize('hardrock'))\n hardrock\n \"\"\"\n\n def lemmatize(self, word: str, pos: str = wn.NOUN) -> str:\n 
\"\"\"Lemmatize `word` using WordNet's built-in morphy function.\n Returns the input word unchanged if it cannot be found in WordNet.\n\n :param word: The input word to lemmatize.\n :type word: str\n :param pos: The Part Of Speech tag. Valid options are `\"n\"` for nouns,\n `\"v\"` for verbs, `\"a\"` for adjectives, `\"r\"` for adverbs and `\"s\"`\n for satellite adjectives.\n :param pos: str\n :return: The lemma of `word`, for the given `pos`.\n \"\"\"\n lemmas = wn._morphy(word, pos)\n return min(lemmas, key=len) if lemmas else word\n\n def __repr__(self):\n return \"<WordNetLemmatizer>\"\n", "path": "nltk/stem/wordnet.py"}]} | 817 | 376 |
gh_patches_debug_21029 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-655 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create classes to represent ionization state distributions
My plan for this PR is to create classes to represent the ionization state distributions of one or more elements. I am going to add in a bunch of dunder methods like `__getitem__` and maybe `__call__` to help make access to the ionization states more straightforward and intuitive. Any suggestions on the naming convention will be helpful so that we can maximize readability.
Eventually we'll need a way to calculate ionization state distributions assuming collisional ionization equilibrium, but that will be for a different PR. The purpose of this PR is to set up how to store and access the ionization distributions. This will be discussed in #352.
This will address some of #352. It will probably be best to wait until after the `0.1.0` release to merge this, since this PR is only for a partial implementation anyway.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plasmapy/examples/plot_dispersion_function.py`
Content:
```
1 """
2 The plasma dispersion function
3 ==============================
4
5 Let's import some basics (and `PlasmaPy`!)
6 """
7
8
9 import numpy as np
10 import matplotlib.pyplot as plt
11 import plasmapy
12
13
14 #######################################################################
15 help(plasmapy.mathematics.plasma_dispersion_func)
16
17
18 #######################################################################
19 # We'll now make some sample data to visualize the dispersion function:
20
21 x = np.linspace(-1, 1, 1000)
22 X, Y = np.meshgrid(x, x)
23 Z = X + 1j * Y
24 print(Z.shape)
25
26 #######################################################################
27 # Before we start plotting, let's make a visualization function first:
28
29
30 def plot_complex(X, Y, Z, N=50):
31 fig, (real_axis, imag_axis) = plt.subplots(1, 2)
32 real_axis.contourf(X, Y, Z.real, N)
33 imag_axis.contourf(X, Y, Z.imag, N)
34 real_axis.set_title("Real values")
35 imag_axis.set_title("Imaginary values")
36 for ax in [real_axis, imag_axis]:
37 ax.set_xlabel("Real values")
38 ax.set_ylabel("Imaginary values")
39 fig.tight_layout()
40
41
42 plot_complex(X, Y, Z)
43
44 #######################################################################
45 # We can now apply our visualization function to our simple
46
47 F = plasmapy.mathematics.plasma_dispersion_func(Z)
48 plot_complex(X, Y, F)
49
50
51 #######################################################################
52 # So this is going to be a hack and I'm not 100% sure the dispersion function
53 # is quite what I think it is, but let's find the area where the dispersion
54 # function has a lesser than zero real part because I think it may be important
55 # (brb reading Fried and Conte):
56
57 plot_complex(X, Y, F.real < 0)
58
59
60 #######################################################################
61 # We can also visualize the derivative:
62
63 F = plasmapy.mathematics.plasma_dispersion_func_deriv(Z)
64 plot_complex(X, Y, F)
65
66 #######################################################################
67 # Plotting the same function on a larger area:
68
69 x = np.linspace(-2, 2, 2000)
70 X, Y = np.meshgrid(x, x)
71 Z = X + 1j * Y
72 print(Z.shape)
73
74 #######################################################################
75
76 F = plasmapy.mathematics.plasma_dispersion_func(Z)
77 plot_complex(X, Y, F, 100)
78
79 #######################################################################
80 # Now we examine the derivative of the dispersion function as a function
81 # of the phase velocity of an electromagnetic wave propagating through
82 # the plasma. This is recreating figure 5.1 in:
83 # J. Sheffield, D. Froula, S. H. Glenzer, and N. C. Luhmann Jr,
84 # Plasma scattering of electromagnetic radiation: theory and measurement
85 # techniques. Chapter 5 Pg 106 (Academic press, 2010).
86
87 xs = np.linspace(0, 4, 100)
88 ws = (-1 / 2) * plasmapy.mathematics.plasma_dispersion_func_deriv(xs)
89 wRe = np.real(ws)
90 wIm = np.imag(ws)
91
92 plt.plot(xs, wRe, label="Re")
93 plt.plot(xs, wIm, label="Im")
94 plt.axis([0, 4, -0.3, 1])
95 plt.legend(loc='upper right',
96 frameon=False,
97 labelspacing=0.001,
98 fontsize=14,
99 borderaxespad=0.1)
100 plt.show()
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plasmapy/examples/plot_dispersion_function.py b/plasmapy/examples/plot_dispersion_function.py
--- a/plasmapy/examples/plot_dispersion_function.py
+++ b/plasmapy/examples/plot_dispersion_function.py
@@ -10,7 +10,6 @@
import matplotlib.pyplot as plt
import plasmapy
-
#######################################################################
help(plasmapy.mathematics.plasma_dispersion_func)
@@ -41,9 +40,10 @@
plot_complex(X, Y, Z)
-#######################################################################
-# We can now apply our visualization function to our simple
+###############################################################################
+# We can now apply our visualization function to our simple dispersion relation
+# sphinx_gallery_thumbnail_number = 2
F = plasmapy.mathematics.plasma_dispersion_func(Z)
plot_complex(X, Y, F)
@@ -97,4 +97,4 @@
labelspacing=0.001,
fontsize=14,
borderaxespad=0.1)
-plt.show()
\ No newline at end of file
+plt.show()
| {"golden_diff": "diff --git a/plasmapy/examples/plot_dispersion_function.py b/plasmapy/examples/plot_dispersion_function.py\n--- a/plasmapy/examples/plot_dispersion_function.py\n+++ b/plasmapy/examples/plot_dispersion_function.py\n@@ -10,7 +10,6 @@\n import matplotlib.pyplot as plt\n import plasmapy\n \n-\n #######################################################################\n help(plasmapy.mathematics.plasma_dispersion_func)\n \n@@ -41,9 +40,10 @@\n \n plot_complex(X, Y, Z)\n \n-#######################################################################\n-# We can now apply our visualization function to our simple\n+###############################################################################\n+# We can now apply our visualization function to our simple dispersion relation\n \n+# sphinx_gallery_thumbnail_number = 2\n F = plasmapy.mathematics.plasma_dispersion_func(Z)\n plot_complex(X, Y, F)\n \n@@ -97,4 +97,4 @@\n labelspacing=0.001,\n fontsize=14,\n borderaxespad=0.1)\n-plt.show()\n\\ No newline at end of file\n+plt.show()\n", "issue": "Create classes to represent ionization state distributions\nMy plan for this PR is to create classes to represent the ionization state distributions of one or more elements. I am going to add in a bunch of dunder methods like `__getitem__` and maybe `__call__` to help making access to the ionization states more straightfoward and intuitive. Any suggestions on the naming convention will be helpful so that we can maximize readability. \r\n\r\nEventually we'll need a way to calculate ionization state distributions assuming collisional ionization equilibrium, but that will be for a different PR. The purpose of this PR is to set up how to store and access the ionization distributions. This will be discussed in #352.\r\n\r\nThis will address some of #352. 
It will probably be best to wait until after the `0.1.0` release to merge this, since this PR is only for a partial implementation anyway.\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nThe plasma dispersion function\n==============================\n\nLet's import some basics (and `PlasmaPy`!)\n\"\"\"\n\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport plasmapy\n\n\n#######################################################################\nhelp(plasmapy.mathematics.plasma_dispersion_func)\n\n\n#######################################################################\n# We'll now make some sample data to visualize the dispersion function:\n\nx = np.linspace(-1, 1, 1000)\nX, Y = np.meshgrid(x, x)\nZ = X + 1j * Y\nprint(Z.shape)\n\n#######################################################################\n# Before we start plotting, let's make a visualization function first:\n\n\ndef plot_complex(X, Y, Z, N=50):\n fig, (real_axis, imag_axis) = plt.subplots(1, 2)\n real_axis.contourf(X, Y, Z.real, N)\n imag_axis.contourf(X, Y, Z.imag, N)\n real_axis.set_title(\"Real values\")\n imag_axis.set_title(\"Imaginary values\")\n for ax in [real_axis, imag_axis]:\n ax.set_xlabel(\"Real values\")\n ax.set_ylabel(\"Imaginary values\")\n fig.tight_layout()\n\n\nplot_complex(X, Y, Z)\n\n#######################################################################\n# We can now apply our visualization function to our simple\n\nF = plasmapy.mathematics.plasma_dispersion_func(Z)\nplot_complex(X, Y, F)\n\n\n#######################################################################\n# So this is going to be a hack and I'm not 100% sure the dispersion function\n# is quite what I think it is, but let's find the area where the dispersion\n# function has a lesser than zero real part because I think it may be important\n# (brb reading Fried and Conte):\n\nplot_complex(X, Y, F.real < 0)\n\n\n#######################################################################\n# We can also visualize the derivative:\n\nF = plasmapy.mathematics.plasma_dispersion_func_deriv(Z)\nplot_complex(X, Y, F)\n\n#######################################################################\n# Plotting the same function on a larger area:\n\nx = np.linspace(-2, 2, 2000)\nX, Y = np.meshgrid(x, x)\nZ = X + 1j * Y\nprint(Z.shape)\n\n#######################################################################\n\nF = plasmapy.mathematics.plasma_dispersion_func(Z)\nplot_complex(X, Y, F, 100)\n\n#######################################################################\n# Now we examine the derivative of the dispersion function as a function\n# of the phase velocity of an electromagnetic wave propagating through\n# the plasma. This is recreating figure 5.1 in:\n# J. Sheffield, D. Froula, S. H. Glenzer, and N. C. Luhmann Jr,\n# Plasma scattering of electromagnetic radiation: theory and measurement\n# techniques. 
Chapter 5 Pg 106 (Academic press, 2010).\n\nxs = np.linspace(0, 4, 100)\nws = (-1 / 2) * plasmapy.mathematics.plasma_dispersion_func_deriv(xs)\nwRe = np.real(ws)\nwIm = np.imag(ws)\n\nplt.plot(xs, wRe, label=\"Re\")\nplt.plot(xs, wIm, label=\"Im\")\nplt.axis([0, 4, -0.3, 1])\nplt.legend(loc='upper right',\n frameon=False,\n labelspacing=0.001,\n fontsize=14,\n borderaxespad=0.1)\nplt.show()", "path": "plasmapy/examples/plot_dispersion_function.py"}], "after_files": [{"content": "\"\"\"\nThe plasma dispersion function\n==============================\n\nLet's import some basics (and `PlasmaPy`!)\n\"\"\"\n\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport plasmapy\n\n#######################################################################\nhelp(plasmapy.mathematics.plasma_dispersion_func)\n\n\n#######################################################################\n# We'll now make some sample data to visualize the dispersion function:\n\nx = np.linspace(-1, 1, 1000)\nX, Y = np.meshgrid(x, x)\nZ = X + 1j * Y\nprint(Z.shape)\n\n#######################################################################\n# Before we start plotting, let's make a visualization function first:\n\n\ndef plot_complex(X, Y, Z, N=50):\n fig, (real_axis, imag_axis) = plt.subplots(1, 2)\n real_axis.contourf(X, Y, Z.real, N)\n imag_axis.contourf(X, Y, Z.imag, N)\n real_axis.set_title(\"Real values\")\n imag_axis.set_title(\"Imaginary values\")\n for ax in [real_axis, imag_axis]:\n ax.set_xlabel(\"Real values\")\n ax.set_ylabel(\"Imaginary values\")\n fig.tight_layout()\n\n\nplot_complex(X, Y, Z)\n\n###############################################################################\n# We can now apply our visualization function to our simple dispersion relation\n\n# sphinx_gallery_thumbnail_number = 2\nF = plasmapy.mathematics.plasma_dispersion_func(Z)\nplot_complex(X, Y, F)\n\n\n#######################################################################\n# So this is going to be a hack and I'm not 100% sure the dispersion function\n# is quite what I think it is, but let's find the area where the dispersion\n# function has a lesser than zero real part because I think it may be important\n# (brb reading Fried and Conte):\n\nplot_complex(X, Y, F.real < 0)\n\n\n#######################################################################\n# We can also visualize the derivative:\n\nF = plasmapy.mathematics.plasma_dispersion_func_deriv(Z)\nplot_complex(X, Y, F)\n\n#######################################################################\n# Plotting the same function on a larger area:\n\nx = np.linspace(-2, 2, 2000)\nX, Y = np.meshgrid(x, x)\nZ = X + 1j * Y\nprint(Z.shape)\n\n#######################################################################\n\nF = plasmapy.mathematics.plasma_dispersion_func(Z)\nplot_complex(X, Y, F, 100)\n\n#######################################################################\n# Now we examine the derivative of the dispersion function as a function\n# of the phase velocity of an electromagnetic wave propagating through\n# the plasma. This is recreating figure 5.1 in:\n# J. Sheffield, D. Froula, S. H. Glenzer, and N. C. Luhmann Jr,\n# Plasma scattering of electromagnetic radiation: theory and measurement\n# techniques. 
Chapter 5 Pg 106 (Academic press, 2010).\n\nxs = np.linspace(0, 4, 100)\nws = (-1 / 2) * plasmapy.mathematics.plasma_dispersion_func_deriv(xs)\nwRe = np.real(ws)\nwIm = np.imag(ws)\n\nplt.plot(xs, wRe, label=\"Re\")\nplt.plot(xs, wIm, label=\"Im\")\nplt.axis([0, 4, -0.3, 1])\nplt.legend(loc='upper right',\n frameon=False,\n labelspacing=0.001,\n fontsize=14,\n borderaxespad=0.1)\nplt.show()\n", "path": "plasmapy/examples/plot_dispersion_function.py"}]} | 1,404 | 239 |
gh_patches_debug_10872 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1887 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception processing E3037 for AWS::S3::Bucket.Transition.TransitionDate
```
$ cfn-lint --version
cfn-lint 0.44.5
```
The `TransitionDate` property is defined with `PrimitiveType: "Timestamp"`:
```yaml
AWSTemplateFormatVersion: 2010-09-09
Resources:
Bucket:
Type: AWS::S3::Bucket
Properties:
LifecycleConfiguration:
Rules:
- Status: Enabled
Transitions:
- StorageClass: INTELLIGENT_TIERING
TransitionDate: 2021-01-01T00:00:00.000Z
```
This is a valid template and can be successfully deployed, but `cfn-lint` fails with:
```
$ cfn-lint scratch.yml
E0002 Unknown exception while processing rule E3037: Object of type datetime is not JSON serializable
scratch.yml:1:1
```
Running with `--debug` shows the exception is generated at https://github.com/aws-cloudformation/cfn-python-lint/blob/c7658511bd7066417682103f21f71983c67ea6d0/src/cfnlint/rules/resources/properties/ListDuplicates.py#L36
Quoting the TransitionDate value suppresses this error, e.g. `TransitionDate: "2021-01-01T00:00:00.000Z"`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/properties/ListDuplicates.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import hashlib
6 import json
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9
10 from cfnlint.helpers import RESOURCE_SPECS
11
12
13 class ListDuplicates(CloudFormationLintRule):
14 """Check if duplicates exist in a List"""
15 id = 'E3037'
16 shortdesc = 'Check if a list has duplicate values'
17 description = 'Certain lists don\'t support duplicate items. ' \
18 'Check when duplicates are provided but not supported.'
19 source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'
20 tags = ['resources', 'property', 'list']
21
22 def initialize(self, cfn):
23 """Initialize the rule"""
24 for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):
25 self.resource_property_types.append(resource_type_spec)
26 for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):
27 self.resource_sub_property_types.append(property_type_spec)
28
29 def _check_duplicates(self, values, path, scenario=None):
30 """ Check for Duplicates """
31 matches = []
32
33 list_items = []
34 if isinstance(values, list):
35 for index, value in enumerate(values):
36 value_hash = hashlib.sha1(json.dumps(
37 value, sort_keys=True).encode('utf-8')).hexdigest()
38 if value_hash in list_items:
39 if not scenario:
40 message = 'List has a duplicate value at {0}'
41 matches.append(
42 RuleMatch(path + [index], message.format('/'.join(map(str, path + [index])))))
43 else:
44 scenario_text = ' and '.join(
45 ['condition "%s" is %s' % (k, v) for (k, v) in scenario.items()])
46 message = 'List has a duplicate value at {0} when {1}'
47 matches.append(RuleMatch(path, message.format(
48 '/'.join(map(str, path)), scenario_text)))
49
50 list_items.append(value_hash)
51
52 return matches
53
54 def check_duplicates(self, values, path, cfn):
55 """ Check for duplicates """
56 matches = []
57
58 if isinstance(values, list):
59 matches.extend(self._check_duplicates(values, path))
60 elif isinstance(values, dict):
61 props = cfn.get_object_without_conditions(values)
62 for prop in props:
63 matches.extend(self._check_duplicates(
64 prop.get('Object'), path, prop.get('Scenario')))
65
66 return matches
67
68 def check(self, cfn, properties, value_specs, path):
69 """Check itself"""
70 matches = list()
71 for p_value, p_path in properties.items_safe(path[:]):
72 for prop in p_value:
73 if prop in value_specs:
74 property_type = value_specs.get(prop).get('Type')
75 duplicates_allowed = value_specs.get(prop).get('DuplicatesAllowed', True)
76 if property_type == 'List' and not duplicates_allowed:
77 matches.extend(
78 self.check_duplicates(
79 p_value[prop], p_path + [prop], cfn
80 )
81 )
82
83 return matches
84
85 def match_resource_sub_properties(self, properties, property_type, path, cfn):
86 """Match for sub properties"""
87 matches = list()
88
89 specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
90 'PropertyTypes').get(property_type, {}).get('Properties', {})
91 matches.extend(self.check(cfn, properties, specs, path))
92
93 return matches
94
95 def match_resource_properties(self, properties, resource_type, path, cfn):
96 """Check CloudFormation Properties"""
97 matches = list()
98
99 specs = RESOURCE_SPECS.get(cfn.regions[0]).get(
100 'ResourceTypes').get(resource_type, {}).get('Properties', {})
101 matches.extend(self.check(cfn, properties, specs, path))
102
103 return matches
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/properties/ListDuplicates.py b/src/cfnlint/rules/resources/properties/ListDuplicates.py
--- a/src/cfnlint/rules/resources/properties/ListDuplicates.py
+++ b/src/cfnlint/rules/resources/properties/ListDuplicates.py
@@ -34,7 +34,7 @@
if isinstance(values, list):
for index, value in enumerate(values):
value_hash = hashlib.sha1(json.dumps(
- value, sort_keys=True).encode('utf-8')).hexdigest()
+ value, sort_keys=True, default=str).encode('utf-8')).hexdigest()
if value_hash in list_items:
if not scenario:
message = 'List has a duplicate value at {0}'
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/properties/ListDuplicates.py b/src/cfnlint/rules/resources/properties/ListDuplicates.py\n--- a/src/cfnlint/rules/resources/properties/ListDuplicates.py\n+++ b/src/cfnlint/rules/resources/properties/ListDuplicates.py\n@@ -34,7 +34,7 @@\n if isinstance(values, list):\n for index, value in enumerate(values):\n value_hash = hashlib.sha1(json.dumps(\n- value, sort_keys=True).encode('utf-8')).hexdigest()\n+ value, sort_keys=True, default=str).encode('utf-8')).hexdigest()\n if value_hash in list_items:\n if not scenario:\n message = 'List has a duplicate value at {0}'\n", "issue": "Exception processing E3037 for AWS::S3::Bucket.Transition.TransitionDate\n```\r\n$ cfn-lint --version\r\ncfn-lint 0.44.5\r\n```\r\n\r\nThe `TransitionDate` property is defined with `PrimitiveType: \"Timestamp\"`:\r\n\r\n```yaml\r\nAWSTemplateFormatVersion: 2010-09-09\r\n\r\nResources:\r\n Bucket:\r\n Type: AWS::S3::Bucket\r\n Properties:\r\n LifecycleConfiguration:\r\n Rules:\r\n - Status: Enabled\r\n Transitions:\r\n - StorageClass: INTELLIGENT_TIERING\r\n TransitionDate: 2021-01-01T00:00:00.000Z\r\n```\r\n\r\nThis is a valid template and can be successfully deployed, but `cfn-lint` fails with:\r\n\r\n```\r\n$ cfn-lint scratch.yml\r\nE0002 Unknown exception while processing rule E3037: Object of type datetime is not JSON serializable\r\nscratch.yml:1:1\r\n```\r\n\r\nRunning with `--debug` shows the exception is generated at https://github.com/aws-cloudformation/cfn-python-lint/blob/c7658511bd7066417682103f21f71983c67ea6d0/src/cfnlint/rules/resources/properties/ListDuplicates.py#L36\r\n\r\nQuoting the TransitionDate value suppresses this error, e.g. `TransitionDate: \"2021-01-01T00:00:00.000Z\"`\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport hashlib\nimport json\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass ListDuplicates(CloudFormationLintRule):\n \"\"\"Check if duplicates exist in a List\"\"\"\n id = 'E3037'\n shortdesc = 'Check if a list has duplicate values'\n description = 'Certain lists don\\'t support duplicate items. 
' \\\n 'Check when duplicates are provided but not supported.'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'list']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def _check_duplicates(self, values, path, scenario=None):\n \"\"\" Check for Duplicates \"\"\"\n matches = []\n\n list_items = []\n if isinstance(values, list):\n for index, value in enumerate(values):\n value_hash = hashlib.sha1(json.dumps(\n value, sort_keys=True).encode('utf-8')).hexdigest()\n if value_hash in list_items:\n if not scenario:\n message = 'List has a duplicate value at {0}'\n matches.append(\n RuleMatch(path + [index], message.format('/'.join(map(str, path + [index])))))\n else:\n scenario_text = ' and '.join(\n ['condition \"%s\" is %s' % (k, v) for (k, v) in scenario.items()])\n message = 'List has a duplicate value at {0} when {1}'\n matches.append(RuleMatch(path, message.format(\n '/'.join(map(str, path)), scenario_text)))\n\n list_items.append(value_hash)\n\n return matches\n\n def check_duplicates(self, values, path, cfn):\n \"\"\" Check for duplicates \"\"\"\n matches = []\n\n if isinstance(values, list):\n matches.extend(self._check_duplicates(values, path))\n elif isinstance(values, dict):\n props = cfn.get_object_without_conditions(values)\n for prop in props:\n matches.extend(self._check_duplicates(\n prop.get('Object'), path, prop.get('Scenario')))\n\n return matches\n\n def check(self, cfn, properties, value_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n property_type = value_specs.get(prop).get('Type')\n duplicates_allowed = value_specs.get(prop).get('DuplicatesAllowed', True)\n if property_type == 'List' and not duplicates_allowed:\n matches.extend(\n self.check_duplicates(\n p_value[prop], p_path + [prop], cfn\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'PropertyTypes').get(property_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/ListDuplicates.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport hashlib\nimport json\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\nfrom cfnlint.helpers import RESOURCE_SPECS\n\n\nclass ListDuplicates(CloudFormationLintRule):\n \"\"\"Check if duplicates exist in a List\"\"\"\n id = 'E3037'\n shortdesc = 'Check if a list has duplicate values'\n description = 'Certain lists don\\'t support duplicate items. ' \\\n 'Check when duplicates are provided but not supported.'\n source_url = 'https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/cfn-resource-specification.md#allowedvalue'\n tags = ['resources', 'property', 'list']\n\n def initialize(self, cfn):\n \"\"\"Initialize the rule\"\"\"\n for resource_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('ResourceTypes'):\n self.resource_property_types.append(resource_type_spec)\n for property_type_spec in RESOURCE_SPECS.get(cfn.regions[0]).get('PropertyTypes'):\n self.resource_sub_property_types.append(property_type_spec)\n\n def _check_duplicates(self, values, path, scenario=None):\n \"\"\" Check for Duplicates \"\"\"\n matches = []\n\n list_items = []\n if isinstance(values, list):\n for index, value in enumerate(values):\n value_hash = hashlib.sha1(json.dumps(\n value, sort_keys=True, default=str).encode('utf-8')).hexdigest()\n if value_hash in list_items:\n if not scenario:\n message = 'List has a duplicate value at {0}'\n matches.append(\n RuleMatch(path + [index], message.format('/'.join(map(str, path + [index])))))\n else:\n scenario_text = ' and '.join(\n ['condition \"%s\" is %s' % (k, v) for (k, v) in scenario.items()])\n message = 'List has a duplicate value at {0} when {1}'\n matches.append(RuleMatch(path, message.format(\n '/'.join(map(str, path)), scenario_text)))\n\n list_items.append(value_hash)\n\n return matches\n\n def check_duplicates(self, values, path, cfn):\n \"\"\" Check for duplicates \"\"\"\n matches = []\n\n if isinstance(values, list):\n matches.extend(self._check_duplicates(values, path))\n elif isinstance(values, dict):\n props = cfn.get_object_without_conditions(values)\n for prop in props:\n matches.extend(self._check_duplicates(\n prop.get('Object'), path, prop.get('Scenario')))\n\n return matches\n\n def check(self, cfn, properties, value_specs, path):\n \"\"\"Check itself\"\"\"\n matches = list()\n for p_value, p_path in properties.items_safe(path[:]):\n for prop in p_value:\n if prop in value_specs:\n property_type = value_specs.get(prop).get('Type')\n duplicates_allowed = value_specs.get(prop).get('DuplicatesAllowed', True)\n if property_type == 'List' and not duplicates_allowed:\n matches.extend(\n self.check_duplicates(\n p_value[prop], p_path + [prop], cfn\n )\n )\n\n return matches\n\n def match_resource_sub_properties(self, properties, property_type, path, cfn):\n \"\"\"Match for sub properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'PropertyTypes').get(property_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n\n def match_resource_properties(self, properties, resource_type, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = list()\n\n specs = RESOURCE_SPECS.get(cfn.regions[0]).get(\n 'ResourceTypes').get(resource_type, {}).get('Properties', {})\n matches.extend(self.check(cfn, properties, specs, path))\n\n return matches\n", "path": "src/cfnlint/rules/resources/properties/ListDuplicates.py"}]} | 1,674 | 155 |
gh_patches_debug_33558 | rasdani/github-patches | git_diff | wagtail__wagtail-170 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken URL for jquery.ui.datepicker when 'en-US' used as lang
This isn't a big deal at all, but wanted to post just in case anyone wants to take a look.
When loading a page with `jquery.ui.datepicker.js`, I notice in console that a call to http://jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-en-US.js returns a 404.
I searched out the CDN for the directory from which the file is being requested:
http://jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/
As you can see, there is no `../jquery.ui.datepicker-en-US.js` present (not that there necessarily ought to be)
The call stems from:
https://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/templatetags/localize.py#L42
The interpolation inserts `en-US` into the URI
Again, no big deal... just FYI
Cheers, all!
Edit:
I should add, this issue does _not_ break usability - a fallback seems to be in place.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailadmin/templatetags/localize.py`
Content:
```
1 from django import template
2 from django.conf import settings
3 from django.utils import formats
4 from django.utils.translation import get_language
5
6 register = template.Library()
7
8 # For reasons unkown, the el (greek) locale in django/conf/locale/el/formats.py
9 # *did not* contain a DATE_INPUT_FORMATS -- so it fell back to using the US
10 # date format (mm/dd/yy) which is not the correct one for Greece (dd/mm/yy).
11 # This means that if we used a localized datepicker django *won't* be able to
12 # parse the dates! So a test here checks if DATE_INPUT_FORMATS is actually
13 # defined in a format module. If yes then it will just return an empty string
14 # so that the normal, localized date format from datepicker will be used.
15 # If DATE_INPUT_FORMATS is not defined then it will return
16 @register.assignment_tag
17 def get_date_format_override():
18 if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True:
19
20 for m in formats.get_format_modules():
21 if hasattr(m, 'DATE_INPUT_FORMATS'):
22 return ''
23 else: # fall back to the ISO to be sure date will be parsed
24 return 'yy-mm-dd'
25 else: # Fall back to ISO if I18N is *not* used
26 return 'yy-mm-dd'
27
28 # Get the correct i18n + l10n settings for datepicker depending on current
29 # thread language
30 @register.simple_tag
31 def get_localized_datepicker_js():
32 if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True and \
33 hasattr(settings, 'USE_L10N') and settings.USE_L10N==True:
34
35 lang = get_language()
36
37 if '-' in lang:
38 lang_parts = lang.split('-')
39 lang = lang_parts[0].lower() +'-'+ lang_parts[1].upper()
40 else:
41 lang=lang.lower()
42 return '<script src="//jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-{0}.js"></script>'.format(
43 lang
44 )
45
46 else: # Don't write anything if we don't use I18N and L10N
47 return ''
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/wagtailadmin/templatetags/localize.py b/wagtail/wagtailadmin/templatetags/localize.py
--- a/wagtail/wagtailadmin/templatetags/localize.py
+++ b/wagtail/wagtailadmin/templatetags/localize.py
@@ -1,5 +1,6 @@
from django import template
from django.conf import settings
+from django.templatetags.static import static
from django.utils import formats
from django.utils.translation import get_language
@@ -25,6 +26,15 @@
else: # Fall back to ISO if I18N is *not* used
return 'yy-mm-dd'
+# This is a list of all supported langs for jquery-ui datepicker which exist in
+# wagtailadmin/js/venor/i18n/. In case any new translations are added there the
+# language code should also be added in this list.
+SUPPORTED_DATEPICKER_LANGS = ['af', 'ar-DZ', 'ar', 'az', 'be', 'bg', 'bs', 'ca', 'cs', 'cy-GB', 'da', 'de',
+ 'el', 'en-AU', 'en-GB', 'en-NZ', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fo', 'fr-CA', 'fr-CH', 'fr', 'gl',
+ 'he', 'hi', 'hr', 'hu', 'hy', 'id', 'is', 'it', 'ja', 'ka', 'kk', 'km', 'ko', 'ky', 'lb', 'lt', 'lv',
+ 'mk', 'ml', 'ms', 'nb', 'nl-BE', 'nl', 'nn', 'no', 'pl', 'pt-BR', 'pt', 'rm', 'ro', 'ru', 'sk', 'sl', 'sq',
+ 'sr-SR', 'sr', 'sv', 'ta', 'th', 'tj', 'tr', 'uk', 'vi', 'zh-CN', 'zh-HK', 'zh-TW'
+]
# Get the correct i18n + l10n settings for datepicker depending on current
# thread language
@register.simple_tag
@@ -39,10 +49,14 @@
lang = lang_parts[0].lower() +'-'+ lang_parts[1].upper()
else:
lang=lang.lower()
- return '<script src="//jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-{0}.js"></script>'.format(
- lang
- )
+ if lang in SUPPORTED_DATEPICKER_LANGS:
+ translation_file = static("wagtailadmin/js/vendor/i18n/jquery.ui.datepicker-{0}.js".format(
+ lang
+ ))
+ return '<script src="{0}"></script>'.format(translation_file)
+ else: # Don't return anything if language is not supported
+ return ''
- else: # Don't write anything if we don't use I18N and L10N
+ else: # Don't return anything if we don't use I18N and L10N
return ''
\ No newline at end of file
| {"golden_diff": "diff --git a/wagtail/wagtailadmin/templatetags/localize.py b/wagtail/wagtailadmin/templatetags/localize.py\n--- a/wagtail/wagtailadmin/templatetags/localize.py\n+++ b/wagtail/wagtailadmin/templatetags/localize.py\n@@ -1,5 +1,6 @@\n from django import template\n from django.conf import settings\n+from django.templatetags.static import static\n from django.utils import formats\n from django.utils.translation import get_language\n \n@@ -25,6 +26,15 @@\n else: # Fall back to ISO if I18N is *not* used\n return 'yy-mm-dd'\n \n+# This is a list of all supported langs for jquery-ui datepicker which exist in\n+# wagtailadmin/js/venor/i18n/. In case any new translations are added there the\n+# language code should also be added in this list.\n+SUPPORTED_DATEPICKER_LANGS = ['af', 'ar-DZ', 'ar', 'az', 'be', 'bg', 'bs', 'ca', 'cs', 'cy-GB', 'da', 'de',\n+ 'el', 'en-AU', 'en-GB', 'en-NZ', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fo', 'fr-CA', 'fr-CH', 'fr', 'gl',\n+ 'he', 'hi', 'hr', 'hu', 'hy', 'id', 'is', 'it', 'ja', 'ka', 'kk', 'km', 'ko', 'ky', 'lb', 'lt', 'lv',\n+ 'mk', 'ml', 'ms', 'nb', 'nl-BE', 'nl', 'nn', 'no', 'pl', 'pt-BR', 'pt', 'rm', 'ro', 'ru', 'sk', 'sl', 'sq',\n+ 'sr-SR', 'sr', 'sv', 'ta', 'th', 'tj', 'tr', 'uk', 'vi', 'zh-CN', 'zh-HK', 'zh-TW'\n+]\n # Get the correct i18n + l10n settings for datepicker depending on current \n # thread language \n @register.simple_tag\n@@ -39,10 +49,14 @@\n lang = lang_parts[0].lower() +'-'+ lang_parts[1].upper()\n else:\n lang=lang.lower()\n- return '<script src=\"//jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-{0}.js\"></script>'.format(\n- lang\n- )\n+ if lang in SUPPORTED_DATEPICKER_LANGS:\n+ translation_file = static(\"wagtailadmin/js/vendor/i18n/jquery.ui.datepicker-{0}.js\".format(\n+ lang\n+ ))\n+ return '<script src=\"{0}\"></script>'.format(translation_file)\n+ else: # Don't return anything if language is not supported\n+ return ''\n \n- else: # Don't write anything if we don't use I18N and L10N\n+ else: # Don't return anything if we don't use I18N and L10N\n return '' \n \n\\ No newline at end of file\n", "issue": "Broken URL for jquery.ui.datepicker when 'en-US' used as lang \nThis isn't a big deal at all, but wanted to post just in case anyone wants to take a look.\n\nWhen loading a page with `jquery.ui.datepicker.js`, I notice in console that a call to http://jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-en-US.js returns a 404.\n\nI searched out the CDN for the directory in which the file is attempting to be called:\nhttp://jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/\n\nAs you can see, there is no `../jquery.ui.datepicker-en-US.js` present (not that there necessarily ought to be)\n\nThe call stems from:\nhttps://github.com/torchbox/wagtail/blob/master/wagtail/wagtailadmin/templatetags/localize.py#L42\n\nThe interpolation inserts `en-US` into the URI\n\nAgain, no big deal... just FYI\n\nCheers, all!\n\nEdit:\n\nI should add, this issue does _not_ break usability - a fallback seems to be in place.\n\n", "before_files": [{"content": "from django import template\nfrom django.conf import settings\nfrom django.utils import formats\nfrom django.utils.translation import get_language\n\nregister = template.Library()\n\n# For reasons unkown, the el (greek) locale in django/conf/locale/el/formats.py \n# *did not* contain a DATE_INPUT_FORMATS -- so it fell back to using the US \n# date format (mm/dd/yy) which is not the correct one for Greece (dd/mm/yy). 
\n# This means that if we used a localized datepicker django *won't* be able to\n# parse the dates! So a test here checks if DATE_INPUT_FORMATS is actually \n# defined in a format module. If yes then it will just return an empty string \n# so that the normal, localized date format from datepicker will be used.\n# If DATE_INPUT_FORMATS is not defined then it will return\[email protected]_tag\ndef get_date_format_override():\n if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True:\n \n for m in formats.get_format_modules():\n if hasattr(m, 'DATE_INPUT_FORMATS'):\n return ''\n else: # fall back to the ISO to be sure date will be parsed\n return 'yy-mm-dd'\n else: # Fall back to ISO if I18N is *not* used\n return 'yy-mm-dd'\n\n# Get the correct i18n + l10n settings for datepicker depending on current \n# thread language \[email protected]_tag\ndef get_localized_datepicker_js():\n if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True and \\\n hasattr(settings, 'USE_L10N') and settings.USE_L10N==True:\n \n lang = get_language()\n \n if '-' in lang:\n lang_parts = lang.split('-')\n lang = lang_parts[0].lower() +'-'+ lang_parts[1].upper()\n else:\n lang=lang.lower()\n return '<script src=\"//jquery-ui.googlecode.com/svn/tags/latest/ui/i18n/jquery.ui.datepicker-{0}.js\"></script>'.format(\n lang\n )\n \n else: # Don't write anything if we don't use I18N and L10N\n return '' \n ", "path": "wagtail/wagtailadmin/templatetags/localize.py"}], "after_files": [{"content": "from django import template\nfrom django.conf import settings\nfrom django.templatetags.static import static\nfrom django.utils import formats\nfrom django.utils.translation import get_language\n\nregister = template.Library()\n\n# For reasons unkown, the el (greek) locale in django/conf/locale/el/formats.py \n# *did not* contain a DATE_INPUT_FORMATS -- so it fell back to using the US \n# date format (mm/dd/yy) which is not the correct one for Greece (dd/mm/yy). \n# This means that if we used a localized datepicker django *won't* be able to\n# parse the dates! So a test here checks if DATE_INPUT_FORMATS is actually \n# defined in a format module. If yes then it will just return an empty string \n# so that the normal, localized date format from datepicker will be used.\n# If DATE_INPUT_FORMATS is not defined then it will return\[email protected]_tag\ndef get_date_format_override():\n if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True:\n \n for m in formats.get_format_modules():\n if hasattr(m, 'DATE_INPUT_FORMATS'):\n return ''\n else: # fall back to the ISO to be sure date will be parsed\n return 'yy-mm-dd'\n else: # Fall back to ISO if I18N is *not* used\n return 'yy-mm-dd'\n\n# This is a list of all supported langs for jquery-ui datepicker which exist in\n# wagtailadmin/js/venor/i18n/. 
In case any new translations are added there the\n# language code should also be added in this list.\nSUPPORTED_DATEPICKER_LANGS = ['af', 'ar-DZ', 'ar', 'az', 'be', 'bg', 'bs', 'ca', 'cs', 'cy-GB', 'da', 'de',\n 'el', 'en-AU', 'en-GB', 'en-NZ', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fo', 'fr-CA', 'fr-CH', 'fr', 'gl',\n 'he', 'hi', 'hr', 'hu', 'hy', 'id', 'is', 'it', 'ja', 'ka', 'kk', 'km', 'ko', 'ky', 'lb', 'lt', 'lv',\n 'mk', 'ml', 'ms', 'nb', 'nl-BE', 'nl', 'nn', 'no', 'pl', 'pt-BR', 'pt', 'rm', 'ro', 'ru', 'sk', 'sl', 'sq',\n 'sr-SR', 'sr', 'sv', 'ta', 'th', 'tj', 'tr', 'uk', 'vi', 'zh-CN', 'zh-HK', 'zh-TW'\n]\n# Get the correct i18n + l10n settings for datepicker depending on current \n# thread language \[email protected]_tag\ndef get_localized_datepicker_js():\n if hasattr(settings, 'USE_I18N') and settings.USE_I18N==True and \\\n hasattr(settings, 'USE_L10N') and settings.USE_L10N==True:\n \n lang = get_language()\n \n if '-' in lang:\n lang_parts = lang.split('-')\n lang = lang_parts[0].lower() +'-'+ lang_parts[1].upper()\n else:\n lang=lang.lower()\n if lang in SUPPORTED_DATEPICKER_LANGS:\n translation_file = static(\"wagtailadmin/js/vendor/i18n/jquery.ui.datepicker-{0}.js\".format(\n lang\n ))\n return '<script src=\"{0}\"></script>'.format(translation_file)\n else: # Don't return anything if language is not supported\n return ''\n \n else: # Don't return anything if we don't use I18N and L10N\n return '' \n ", "path": "wagtail/wagtailadmin/templatetags/localize.py"}]} | 1,086 | 735 |
gh_patches_debug_6828 | rasdani/github-patches | git_diff | kartoza__prj.app-162 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Landing page gives a 404
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/base/views/error_views.py`
Content:
```
1 # coding=utf-8
2 """Our custom error views"""
3 from django.shortcuts import render_to_response
4 from django.template import RequestContext
5 from base.models.project import Project
6
7
8 def custom_404(request, template_name='404.html'):
9 """Our custom 404 view
10
11 We want to include a list of all public and approved Projects in the 404
12 view
13 :param request: Request obj
14 :type request: HttpRequest
15
16 :param template_name: The template to render
17 :type template_name: str
18
19 :return: Response obj
20 :rtype: HttpResponse
21
22 """
23 public_projects = Project.objects.filter(approved=True, private=False)
24 return render_to_response(template_name, {
25 'request_path': request.path,
26 'projects': public_projects
27 }, context_instance=RequestContext(request))
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django_project/base/views/error_views.py b/django_project/base/views/error_views.py
--- a/django_project/base/views/error_views.py
+++ b/django_project/base/views/error_views.py
@@ -21,7 +21,11 @@
"""
public_projects = Project.objects.filter(approved=True, private=False)
- return render_to_response(template_name, {
- 'request_path': request.path,
- 'projects': public_projects
- }, context_instance=RequestContext(request))
+
+ response = render_to_response(
+ template_name, {
+ 'request_path': request.path,
+ 'projects': public_projects},
+ context_instance=RequestContext(request))
+ response.status_code = 404
+ return response
| {"golden_diff": "diff --git a/django_project/base/views/error_views.py b/django_project/base/views/error_views.py\n--- a/django_project/base/views/error_views.py\n+++ b/django_project/base/views/error_views.py\n@@ -21,7 +21,11 @@\n \n \"\"\"\n public_projects = Project.objects.filter(approved=True, private=False)\n- return render_to_response(template_name, {\n- 'request_path': request.path,\n- 'projects': public_projects\n- }, context_instance=RequestContext(request))\n+\n+ response = render_to_response(\n+ template_name, {\n+ 'request_path': request.path,\n+ 'projects': public_projects},\n+ context_instance=RequestContext(request))\n+ response.status_code = 404\n+ return response\n", "issue": "Landing page gives a 404\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"Our custom error views\"\"\"\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\nfrom base.models.project import Project\n\n\ndef custom_404(request, template_name='404.html'):\n \"\"\"Our custom 404 view\n\n We want to include a list of all public and approved Projects in the 404\n view\n :param request: Request obj\n :type request: HttpRequest\n\n :param template_name: The template to render\n :type template_name: str\n\n :return: Response obj\n :rtype: HttpResponse\n\n \"\"\"\n public_projects = Project.objects.filter(approved=True, private=False)\n return render_to_response(template_name, {\n 'request_path': request.path,\n 'projects': public_projects\n }, context_instance=RequestContext(request))\n", "path": "django_project/base/views/error_views.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"Our custom error views\"\"\"\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\nfrom base.models.project import Project\n\n\ndef custom_404(request, template_name='404.html'):\n \"\"\"Our custom 404 view\n\n We want to include a list of all public and approved Projects in the 404\n view\n :param request: Request obj\n :type request: HttpRequest\n\n :param template_name: The template to render\n :type template_name: str\n\n :return: Response obj\n :rtype: HttpResponse\n\n \"\"\"\n public_projects = Project.objects.filter(approved=True, private=False)\n\n response = render_to_response(\n template_name, {\n 'request_path': request.path,\n 'projects': public_projects},\n context_instance=RequestContext(request))\n response.status_code = 404\n return response\n", "path": "django_project/base/views/error_views.py"}]} | 503 | 169 |
gh_patches_debug_38348 | rasdani/github-patches | git_diff | PaddlePaddle__models-312 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Problem with the resnet model configuration
The current resnet configuration has some problems; see https://github.com/PaddlePaddle/models/issues/308#issuecomment-331384031
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `image_classification/resnet.py`
Content:
```
1 import paddle.v2 as paddle
2
3 __all__ = ['resnet_imagenet', 'resnet_cifar10']
4
5
6 def conv_bn_layer(input,
7 ch_out,
8 filter_size,
9 stride,
10 padding,
11 active_type=paddle.activation.Relu(),
12 ch_in=None):
13 tmp = paddle.layer.img_conv(
14 input=input,
15 filter_size=filter_size,
16 num_channels=ch_in,
17 num_filters=ch_out,
18 stride=stride,
19 padding=padding,
20 act=paddle.activation.Linear(),
21 bias_attr=False)
22 return paddle.layer.batch_norm(input=tmp, act=active_type)
23
24
25 def shortcut(input, ch_in, ch_out, stride):
26 if ch_in != ch_out:
27 return conv_bn_layer(input, ch_out, 1, stride, 0,
28 paddle.activation.Linear())
29 else:
30 return input
31
32
33 def basicblock(input, ch_in, ch_out, stride):
34 short = shortcut(input, ch_in, ch_out, stride)
35 conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)
36 conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())
37 return paddle.layer.addto(
38 input=[short, conv2], act=paddle.activation.Relu())
39
40
41 def bottleneck(input, ch_in, ch_out, stride):
42 short = shortcut(input, ch_in, ch_out * 4, stride)
43 conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)
44 conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)
45 conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,
46 paddle.activation.Linear())
47 return paddle.layer.addto(
48 input=[short, conv3], act=paddle.activation.Relu())
49
50
51 def layer_warp(block_func, input, ch_in, ch_out, count, stride):
52 conv = block_func(input, ch_in, ch_out, stride)
53 for i in range(1, count):
54 conv = block_func(conv, ch_out, ch_out, 1)
55 return conv
56
57
58 def resnet_imagenet(input, class_dim, depth=50):
59 cfg = {
60 18: ([2, 2, 2, 1], basicblock),
61 34: ([3, 4, 6, 3], basicblock),
62 50: ([3, 4, 6, 3], bottleneck),
63 101: ([3, 4, 23, 3], bottleneck),
64 152: ([3, 8, 36, 3], bottleneck)
65 }
66 stages, block_func = cfg[depth]
67 conv1 = conv_bn_layer(
68 input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)
69 pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)
70 res1 = layer_warp(block_func, pool1, 64, 64, stages[0], 1)
71 res2 = layer_warp(block_func, res1, 64, 128, stages[1], 2)
72 res3 = layer_warp(block_func, res2, 128, 256, stages[2], 2)
73 res4 = layer_warp(block_func, res3, 256, 512, stages[3], 2)
74 pool2 = paddle.layer.img_pool(
75 input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())
76 out = paddle.layer.fc(
77 input=pool2, size=class_dim, act=paddle.activation.Softmax())
78 return out
79
80
81 def resnet_cifar10(input, class_dim, depth=32):
82 # depth should be one of 20, 32, 44, 56, 110, 1202
83 assert (depth - 2) % 6 == 0
84 n = (depth - 2) / 6
85 nStages = {16, 64, 128}
86 conv1 = conv_bn_layer(
87 input, ch_in=3, ch_out=16, filter_size=3, stride=1, padding=1)
88 res1 = layer_warp(basicblock, conv1, 16, 16, n, 1)
89 res2 = layer_warp(basicblock, res1, 16, 32, n, 2)
90 res3 = layer_warp(basicblock, res2, 32, 64, n, 2)
91 pool = paddle.layer.img_pool(
92 input=res3, pool_size=8, stride=1, pool_type=paddle.pooling.Avg())
93 out = paddle.layer.fc(
94 input=pool, size=class_dim, act=paddle.activation.Softmax())
95 return out
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/image_classification/resnet.py b/image_classification/resnet.py
--- a/image_classification/resnet.py
+++ b/image_classification/resnet.py
@@ -22,24 +22,24 @@
return paddle.layer.batch_norm(input=tmp, act=active_type)
-def shortcut(input, ch_in, ch_out, stride):
- if ch_in != ch_out:
+def shortcut(input, ch_out, stride):
+ if input.num_filters != ch_out:
return conv_bn_layer(input, ch_out, 1, stride, 0,
paddle.activation.Linear())
else:
return input
-def basicblock(input, ch_in, ch_out, stride):
- short = shortcut(input, ch_in, ch_out, stride)
+def basicblock(input, ch_out, stride):
+ short = shortcut(input, ch_out, stride)
conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)
conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())
return paddle.layer.addto(
input=[short, conv2], act=paddle.activation.Relu())
-def bottleneck(input, ch_in, ch_out, stride):
- short = shortcut(input, ch_in, ch_out * 4, stride)
+def bottleneck(input, ch_out, stride):
+ short = shortcut(input, ch_out * 4, stride)
conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)
conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)
conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,
@@ -48,10 +48,10 @@
input=[short, conv3], act=paddle.activation.Relu())
-def layer_warp(block_func, input, ch_in, ch_out, count, stride):
- conv = block_func(input, ch_in, ch_out, stride)
+def layer_warp(block_func, input, ch_out, count, stride):
+ conv = block_func(input, ch_out, stride)
for i in range(1, count):
- conv = block_func(conv, ch_out, ch_out, 1)
+ conv = block_func(conv, ch_out, 1)
return conv
@@ -67,10 +67,10 @@
conv1 = conv_bn_layer(
input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)
pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)
- res1 = layer_warp(block_func, pool1, 64, 64, stages[0], 1)
- res2 = layer_warp(block_func, res1, 64, 128, stages[1], 2)
- res3 = layer_warp(block_func, res2, 128, 256, stages[2], 2)
- res4 = layer_warp(block_func, res3, 256, 512, stages[3], 2)
+ res1 = layer_warp(block_func, pool1, 64, stages[0], 1)
+ res2 = layer_warp(block_func, res1, 128, stages[1], 2)
+ res3 = layer_warp(block_func, res2, 256, stages[2], 2)
+ res4 = layer_warp(block_func, res3, 512, stages[3], 2)
pool2 = paddle.layer.img_pool(
input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())
out = paddle.layer.fc(
| {"golden_diff": "diff --git a/image_classification/resnet.py b/image_classification/resnet.py\n--- a/image_classification/resnet.py\n+++ b/image_classification/resnet.py\n@@ -22,24 +22,24 @@\n return paddle.layer.batch_norm(input=tmp, act=active_type)\n \n \n-def shortcut(input, ch_in, ch_out, stride):\n- if ch_in != ch_out:\n+def shortcut(input, ch_out, stride):\n+ if input.num_filters != ch_out:\n return conv_bn_layer(input, ch_out, 1, stride, 0,\n paddle.activation.Linear())\n else:\n return input\n \n \n-def basicblock(input, ch_in, ch_out, stride):\n- short = shortcut(input, ch_in, ch_out, stride)\n+def basicblock(input, ch_out, stride):\n+ short = shortcut(input, ch_out, stride)\n conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)\n conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())\n return paddle.layer.addto(\n input=[short, conv2], act=paddle.activation.Relu())\n \n \n-def bottleneck(input, ch_in, ch_out, stride):\n- short = shortcut(input, ch_in, ch_out * 4, stride)\n+def bottleneck(input, ch_out, stride):\n+ short = shortcut(input, ch_out * 4, stride)\n conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)\n conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)\n conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,\n@@ -48,10 +48,10 @@\n input=[short, conv3], act=paddle.activation.Relu())\n \n \n-def layer_warp(block_func, input, ch_in, ch_out, count, stride):\n- conv = block_func(input, ch_in, ch_out, stride)\n+def layer_warp(block_func, input, ch_out, count, stride):\n+ conv = block_func(input, ch_out, stride)\n for i in range(1, count):\n- conv = block_func(conv, ch_out, ch_out, 1)\n+ conv = block_func(conv, ch_out, 1)\n return conv\n \n \n@@ -67,10 +67,10 @@\n conv1 = conv_bn_layer(\n input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)\n pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)\n- res1 = layer_warp(block_func, pool1, 64, 64, stages[0], 1)\n- res2 = layer_warp(block_func, res1, 64, 128, stages[1], 2)\n- res3 = layer_warp(block_func, res2, 128, 256, stages[2], 2)\n- res4 = layer_warp(block_func, res3, 256, 512, stages[3], 2)\n+ res1 = layer_warp(block_func, pool1, 64, stages[0], 1)\n+ res2 = layer_warp(block_func, res1, 128, stages[1], 2)\n+ res3 = layer_warp(block_func, res2, 256, stages[2], 2)\n+ res4 = layer_warp(block_func, res3, 512, stages[3], 2)\n pool2 = paddle.layer.img_pool(\n input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())\n out = paddle.layer.fc(\n", "issue": "resnet\u6a21\u578b\u914d\u7f6e\u7684\u95ee\u9898\n\u76ee\u524dresnet\u7684\u914d\u7f6e\u6709\u4e00\u4e9b\u95ee\u9898\uff0c\u53ef\u89c1 https://github.com/PaddlePaddle/models/issues/308#issuecomment-331384031\n", "before_files": [{"content": "import paddle.v2 as paddle\n\n__all__ = ['resnet_imagenet', 'resnet_cifar10']\n\n\ndef conv_bn_layer(input,\n ch_out,\n filter_size,\n stride,\n padding,\n active_type=paddle.activation.Relu(),\n ch_in=None):\n tmp = paddle.layer.img_conv(\n input=input,\n filter_size=filter_size,\n num_channels=ch_in,\n num_filters=ch_out,\n stride=stride,\n padding=padding,\n act=paddle.activation.Linear(),\n bias_attr=False)\n return paddle.layer.batch_norm(input=tmp, act=active_type)\n\n\ndef shortcut(input, ch_in, ch_out, stride):\n if ch_in != ch_out:\n return conv_bn_layer(input, ch_out, 1, stride, 0,\n paddle.activation.Linear())\n else:\n return input\n\n\ndef basicblock(input, ch_in, ch_out, stride):\n short = shortcut(input, ch_in, ch_out, stride)\n conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)\n conv2 = 
conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())\n return paddle.layer.addto(\n input=[short, conv2], act=paddle.activation.Relu())\n\n\ndef bottleneck(input, ch_in, ch_out, stride):\n short = shortcut(input, ch_in, ch_out * 4, stride)\n conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)\n conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)\n conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,\n paddle.activation.Linear())\n return paddle.layer.addto(\n input=[short, conv3], act=paddle.activation.Relu())\n\n\ndef layer_warp(block_func, input, ch_in, ch_out, count, stride):\n conv = block_func(input, ch_in, ch_out, stride)\n for i in range(1, count):\n conv = block_func(conv, ch_out, ch_out, 1)\n return conv\n\n\ndef resnet_imagenet(input, class_dim, depth=50):\n cfg = {\n 18: ([2, 2, 2, 1], basicblock),\n 34: ([3, 4, 6, 3], basicblock),\n 50: ([3, 4, 6, 3], bottleneck),\n 101: ([3, 4, 23, 3], bottleneck),\n 152: ([3, 8, 36, 3], bottleneck)\n }\n stages, block_func = cfg[depth]\n conv1 = conv_bn_layer(\n input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)\n pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)\n res1 = layer_warp(block_func, pool1, 64, 64, stages[0], 1)\n res2 = layer_warp(block_func, res1, 64, 128, stages[1], 2)\n res3 = layer_warp(block_func, res2, 128, 256, stages[2], 2)\n res4 = layer_warp(block_func, res3, 256, 512, stages[3], 2)\n pool2 = paddle.layer.img_pool(\n input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())\n out = paddle.layer.fc(\n input=pool2, size=class_dim, act=paddle.activation.Softmax())\n return out\n\n\ndef resnet_cifar10(input, class_dim, depth=32):\n # depth should be one of 20, 32, 44, 56, 110, 1202\n assert (depth - 2) % 6 == 0\n n = (depth - 2) / 6\n nStages = {16, 64, 128}\n conv1 = conv_bn_layer(\n input, ch_in=3, ch_out=16, filter_size=3, stride=1, padding=1)\n res1 = layer_warp(basicblock, conv1, 16, 16, n, 1)\n res2 = layer_warp(basicblock, res1, 16, 32, n, 2)\n res3 = layer_warp(basicblock, res2, 32, 64, n, 2)\n pool = paddle.layer.img_pool(\n input=res3, pool_size=8, stride=1, pool_type=paddle.pooling.Avg())\n out = paddle.layer.fc(\n input=pool, size=class_dim, act=paddle.activation.Softmax())\n return out\n", "path": "image_classification/resnet.py"}], "after_files": [{"content": "import paddle.v2 as paddle\n\n__all__ = ['resnet_imagenet', 'resnet_cifar10']\n\n\ndef conv_bn_layer(input,\n ch_out,\n filter_size,\n stride,\n padding,\n active_type=paddle.activation.Relu(),\n ch_in=None):\n tmp = paddle.layer.img_conv(\n input=input,\n filter_size=filter_size,\n num_channels=ch_in,\n num_filters=ch_out,\n stride=stride,\n padding=padding,\n act=paddle.activation.Linear(),\n bias_attr=False)\n return paddle.layer.batch_norm(input=tmp, act=active_type)\n\n\ndef shortcut(input, ch_out, stride):\n if input.num_filters != ch_out:\n return conv_bn_layer(input, ch_out, 1, stride, 0,\n paddle.activation.Linear())\n else:\n return input\n\n\ndef basicblock(input, ch_out, stride):\n short = shortcut(input, ch_out, stride)\n conv1 = conv_bn_layer(input, ch_out, 3, stride, 1)\n conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1, paddle.activation.Linear())\n return paddle.layer.addto(\n input=[short, conv2], act=paddle.activation.Relu())\n\n\ndef bottleneck(input, ch_out, stride):\n short = shortcut(input, ch_out * 4, stride)\n conv1 = conv_bn_layer(input, ch_out, 1, stride, 0)\n conv2 = conv_bn_layer(conv1, ch_out, 3, 1, 1)\n conv3 = conv_bn_layer(conv2, ch_out * 4, 1, 1, 0,\n paddle.activation.Linear())\n 
return paddle.layer.addto(\n input=[short, conv3], act=paddle.activation.Relu())\n\n\ndef layer_warp(block_func, input, ch_out, count, stride):\n conv = block_func(input, ch_out, stride)\n for i in range(1, count):\n conv = block_func(conv, ch_out, 1)\n return conv\n\n\ndef resnet_imagenet(input, class_dim, depth=50):\n cfg = {\n 18: ([2, 2, 2, 1], basicblock),\n 34: ([3, 4, 6, 3], basicblock),\n 50: ([3, 4, 6, 3], bottleneck),\n 101: ([3, 4, 23, 3], bottleneck),\n 152: ([3, 8, 36, 3], bottleneck)\n }\n stages, block_func = cfg[depth]\n conv1 = conv_bn_layer(\n input, ch_in=3, ch_out=64, filter_size=7, stride=2, padding=3)\n pool1 = paddle.layer.img_pool(input=conv1, pool_size=3, stride=2)\n res1 = layer_warp(block_func, pool1, 64, stages[0], 1)\n res2 = layer_warp(block_func, res1, 128, stages[1], 2)\n res3 = layer_warp(block_func, res2, 256, stages[2], 2)\n res4 = layer_warp(block_func, res3, 512, stages[3], 2)\n pool2 = paddle.layer.img_pool(\n input=res4, pool_size=7, stride=1, pool_type=paddle.pooling.Avg())\n out = paddle.layer.fc(\n input=pool2, size=class_dim, act=paddle.activation.Softmax())\n return out\n\n\ndef resnet_cifar10(input, class_dim, depth=32):\n # depth should be one of 20, 32, 44, 56, 110, 1202\n assert (depth - 2) % 6 == 0\n n = (depth - 2) / 6\n nStages = {16, 64, 128}\n conv1 = conv_bn_layer(\n input, ch_in=3, ch_out=16, filter_size=3, stride=1, padding=1)\n res1 = layer_warp(basicblock, conv1, 16, 16, n, 1)\n res2 = layer_warp(basicblock, res1, 16, 32, n, 2)\n res3 = layer_warp(basicblock, res2, 32, 64, n, 2)\n pool = paddle.layer.img_pool(\n input=res3, pool_size=8, stride=1, pool_type=paddle.pooling.Avg())\n out = paddle.layer.fc(\n input=pool, size=class_dim, act=paddle.activation.Softmax())\n return out\n", "path": "image_classification/resnet.py"}]} | 1,603 | 850 |
gh_patches_debug_3551 | rasdani/github-patches | git_diff | kornia__kornia-677 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
_adapted_uniform is broken for tensors > 1 dimension and same_on_batch=True
## 🐛 Bug
`kornia.augmentation.utils.helpers._adapted_uniform` is broken for `len(shape) > 1` and `same_on_batch=True`
## To Reproduce
```python
from kornia.augmentation.utils.helpers import _adapted_uniform
shape = (1, 2)
_adapted_uniform(shape, 0.0, 1.0, same_on_batch=True)
```
```
RuntimeError: Number of dimensions of repeat dims can not be smaller than number of dimensions of tensor
```
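
The failure comes from how `torch.Tensor.repeat` works: it needs at least one repeat count per tensor dimension, while the helper passes only `shape[0]`. A minimal sketch of the mismatch, assuming scalar `low`/`high`; the sample has to be tiled along the batch dimension only:

```python
import torch
from torch.distributions import Uniform

shape = (4, 2)
dist = Uniform(torch.tensor(0.0), torch.tensor(1.0))

sample = dist.rsample((1, *shape[1:]))   # shape (1, 2): one draw to share across the batch
# sample.repeat(shape[0]) raises, because repeat() needs >= sample.dim() repeat counts
tiled = sample.repeat(shape[0], *[1] * (len(shape) - 1))  # shape (4, 2), every row identical
print(tiled.shape)  # torch.Size([4, 2])
```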
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/augmentation/utils/helpers.py`
Content:
```
1 from typing import Tuple, Union, List, cast, Optional
2
3 import torch
4 from torch.distributions import Uniform, Beta
5
6
7 def _infer_batch_shape(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:
8 r"""Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)
9 """
10 if isinstance(input, tuple):
11 tensor = _transform_input(input[0])
12 else:
13 tensor = _transform_input(input)
14 return tensor.shape
15
16
17 def _infer_batch_shape3d(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:
18 r"""Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)
19 """
20 if isinstance(input, tuple):
21 tensor = _transform_input3d(input[0])
22 else:
23 tensor = _transform_input3d(input)
24 return tensor.shape
25
26
27 def _transform_input(input: torch.Tensor) -> torch.Tensor:
28 r"""Reshape an input tensor to be (*, C, H, W). Accept either (H, W), (C, H, W) or (*, C, H, W).
29 Args:
30 input: torch.Tensor
31
32 Returns:
33 torch.Tensor
34 """
35 if not torch.is_tensor(input):
36 raise TypeError(f"Input type is not a torch.Tensor. Got {type(input)}")
37
38 if len(input.shape) not in [2, 3, 4]:
39 raise ValueError(
40 f"Input size must have a shape of either (H, W), (C, H, W) or (*, C, H, W). Got {input.shape}")
41
42 if len(input.shape) == 2:
43 input = input.unsqueeze(0)
44
45 if len(input.shape) == 3:
46 input = input.unsqueeze(0)
47
48 return input
49
50
51 def _transform_input3d(input: torch.Tensor) -> torch.Tensor:
52 r"""Reshape an input tensor to be (*, C, D, H, W). Accept either (D, H, W), (C, D, H, W) or (*, C, D, H, W).
53 Args:
54 input: torch.Tensor
55
56 Returns:
57 torch.Tensor
58 """
59 if not torch.is_tensor(input):
60 raise TypeError(f"Input type is not a torch.Tensor. Got {type(input)}")
61
62 if len(input.shape) not in [3, 4, 5]:
63 raise ValueError(
64 f"Input size must have a shape of either (D, H, W), (C, D, H, W) or (*, C, D, H, W). Got {input.shape}")
65
66 if len(input.shape) == 3:
67 input = input.unsqueeze(0)
68
69 if len(input.shape) == 4:
70 input = input.unsqueeze(0)
71
72 return input
73
74
75 def _validate_input_dtype(input: torch.Tensor, accepted_dtypes: List) -> None:
76 r"""Check if the dtype of the input tensor is in the range of accepted_dtypes
77 Args:
78 input: torch.Tensor
79 accepted_dtypes: List. e.g. [torch.float32, torch.float64]
80 """
81 if input.dtype not in accepted_dtypes:
82 raise TypeError(f"Expected input of {accepted_dtypes}. Got {input.dtype}")
83
84
85 def _validate_shape(shape: Union[Tuple, torch.Size], required_shapes: List[str] = ["BCHW"]) -> None:
86 r"""Check if the dtype of the input tensor is in the range of accepted_dtypes
87 Args:
88 input: torch.Tensor
89 required_shapes: List. e.g. ["BCHW", "BCDHW"]
90 """
91 passed = False
92 for required_shape in required_shapes:
93 if len(shape) == len(required_shape):
94 passed = True
95 break
96 if not passed:
97 raise TypeError(f"Expected input shape in {required_shape}. Got {shape}.")
98
99
100 def _validate_input_shape(input: torch.Tensor, channel_index: int, number: int) -> bool:
101 r"""Validate if an input has the right shape. e.g. to check if an input is channel first.
102 If channel first, the second channel of an RGB input shall be fixed to 3. To verify using:
103 _validate_input_shape(input, 1, 3)
104 Args:
105 input: torch.Tensor
106 channel_index: int
107 number: int
108 Returns:
109 bool
110 """
111 return input.shape[channel_index] == number
112
113
114 def _adapted_uniform(
115 shape: Union[Tuple, torch.Size],
116 low: Union[float, int, torch.Tensor],
117 high: Union[float, int, torch.Tensor],
118 same_on_batch=False
119 ) -> torch.Tensor:
120 r""" The uniform sampling function that accepts 'same_on_batch'.
121 If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).
122 By default, same_on_batch is set to False.
123 """
124 if not isinstance(low, torch.Tensor):
125 low = torch.tensor(low, dtype=torch.float32)
126 if not isinstance(high, torch.Tensor):
127 high = torch.tensor(high, dtype=torch.float32)
128 dist = Uniform(low, high)
129 if same_on_batch:
130 return dist.rsample((1, *shape[1:])).repeat(shape[0])
131 else:
132 return dist.rsample(shape)
133
134
135 def _adapted_beta(
136 shape: Union[Tuple, torch.Size],
137 a: Union[float, int, torch.Tensor],
138 b: Union[float, int, torch.Tensor],
139 same_on_batch=False
140 ) -> torch.Tensor:
141 r""" The beta sampling function that accepts 'same_on_batch'.
142 If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).
143 By default, same_on_batch is set to False.
144 """
145 if not isinstance(a, torch.Tensor):
146 a = torch.tensor(a, dtype=torch.float32)
147 if not isinstance(b, torch.Tensor):
148 b = torch.tensor(b, dtype=torch.float32)
149 dist = Beta(a, b)
150 if same_on_batch:
151 return dist.rsample((1, *shape[1:])).repeat(shape[0])
152 else:
153 return dist.rsample(shape)
154
155
156 def _check_and_bound(factor: Union[torch.Tensor, float, Tuple[float, float], List[float]], name: str,
157 center: float = 0., bounds: Tuple[float, float] = (0, float('inf'))) -> torch.Tensor:
158 r"""Check inputs and compute the corresponding factor bounds
159 """
160 factor_bound: torch.Tensor
161 if not isinstance(factor, torch.Tensor):
162 factor = torch.tensor(factor, dtype=torch.float32)
163
164 if factor.dim() == 0:
165 _center = torch.tensor(center, dtype=torch.float32)
166
167 if factor < 0:
168 raise ValueError(f"If {name} is a single number number, it must be non negative. Got {factor.item()}")
169
170 factor_bound = torch.tensor([_center - factor, _center + factor], dtype=torch.float32)
171 # Should be something other than clamp
172 # Currently, single value factor will not out of scope as long as the user provided it.
173 factor_bound = torch.clamp(factor_bound, bounds[0], bounds[1])
174
175 elif factor.shape[0] == 2 and factor.dim() == 1:
176
177 if not bounds[0] <= factor[0] or not bounds[1] >= factor[1]:
178 raise ValueError(f"{name} out of bounds. Expected inside {bounds}, got {factor}.")
179
180 if not bounds[0] <= factor[0] <= factor[1] <= bounds[1]:
181 raise ValueError(f"{name}[0] should be smaller than {name}[1] got {factor}")
182
183 factor_bound = factor
184
185 else:
186
187 raise TypeError(
188 f"The {name} should be a float number or a tuple with length 2 whose values move between {bounds}.")
189
190 return factor_bound
191
192
193 def _shape_validation(param: torch.Tensor, shape: Union[tuple, list], name: str) -> None:
194 assert param.shape == torch.Size(shape), f"Invalid shape for {name}. Expected {shape}. Got {param.shape}"
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/augmentation/utils/helpers.py b/kornia/augmentation/utils/helpers.py
--- a/kornia/augmentation/utils/helpers.py
+++ b/kornia/augmentation/utils/helpers.py
@@ -127,7 +127,7 @@
high = torch.tensor(high, dtype=torch.float32)
dist = Uniform(low, high)
if same_on_batch:
- return dist.rsample((1, *shape[1:])).repeat(shape[0])
+ return dist.rsample((1, *shape[1:])).repeat(shape[0], *[1] * (len(shape) - 1))
else:
return dist.rsample(shape)
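
A quick check of the patched behaviour — a sketch only, assuming the import path used in the bug report:

```python
import torch
from kornia.augmentation.utils.helpers import _adapted_uniform

out = _adapted_uniform((4, 3), 0.0, 1.0, same_on_batch=True)
assert out.shape == (4, 3)
# With same_on_batch=True every batch entry carries the same values.
assert torch.allclose(out, out[0].expand_as(out))
```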
| {"golden_diff": "diff --git a/kornia/augmentation/utils/helpers.py b/kornia/augmentation/utils/helpers.py\n--- a/kornia/augmentation/utils/helpers.py\n+++ b/kornia/augmentation/utils/helpers.py\n@@ -127,7 +127,7 @@\n high = torch.tensor(high, dtype=torch.float32)\n dist = Uniform(low, high)\n if same_on_batch:\n- return dist.rsample((1, *shape[1:])).repeat(shape[0])\n+ return dist.rsample((1, *shape[1:])).repeat(shape[0], *[1] * (len(shape) - 1))\n else:\n return dist.rsample(shape)\n", "issue": "_adapted_uniform is broken for tensors > 1 dimension and same_on_batch=True\n## \ud83d\udc1b Bug\r\n\r\n`kornia.augmentation.utils.helpers._adapted_uniform` is broken for `len(shape) > 1` and `same_on_batch=True`\r\n\r\n## To Reproduce\r\n\r\n```python\r\nfrom kornia.augmentation.utils.helpers import _adapted_uniform\r\n\r\nshape = (1, 2)\r\n_adapted_uniform(shape, 0.0, 1.0, same_on_batch=True)\r\n```\r\n\r\n```\r\nRuntimeError: Number of dimensions of repeat dims can not be smaller than number of dimensions of tensor\r\n```\r\n\n", "before_files": [{"content": "from typing import Tuple, Union, List, cast, Optional\n\nimport torch\nfrom torch.distributions import Uniform, Beta\n\n\ndef _infer_batch_shape(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:\n r\"\"\"Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)\n \"\"\"\n if isinstance(input, tuple):\n tensor = _transform_input(input[0])\n else:\n tensor = _transform_input(input)\n return tensor.shape\n\n\ndef _infer_batch_shape3d(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:\n r\"\"\"Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)\n \"\"\"\n if isinstance(input, tuple):\n tensor = _transform_input3d(input[0])\n else:\n tensor = _transform_input3d(input)\n return tensor.shape\n\n\ndef _transform_input(input: torch.Tensor) -> torch.Tensor:\n r\"\"\"Reshape an input tensor to be (*, C, H, W). Accept either (H, W), (C, H, W) or (*, C, H, W).\n Args:\n input: torch.Tensor\n\n Returns:\n torch.Tensor\n \"\"\"\n if not torch.is_tensor(input):\n raise TypeError(f\"Input type is not a torch.Tensor. Got {type(input)}\")\n\n if len(input.shape) not in [2, 3, 4]:\n raise ValueError(\n f\"Input size must have a shape of either (H, W), (C, H, W) or (*, C, H, W). Got {input.shape}\")\n\n if len(input.shape) == 2:\n input = input.unsqueeze(0)\n\n if len(input.shape) == 3:\n input = input.unsqueeze(0)\n\n return input\n\n\ndef _transform_input3d(input: torch.Tensor) -> torch.Tensor:\n r\"\"\"Reshape an input tensor to be (*, C, D, H, W). Accept either (D, H, W), (C, D, H, W) or (*, C, D, H, W).\n Args:\n input: torch.Tensor\n\n Returns:\n torch.Tensor\n \"\"\"\n if not torch.is_tensor(input):\n raise TypeError(f\"Input type is not a torch.Tensor. Got {type(input)}\")\n\n if len(input.shape) not in [3, 4, 5]:\n raise ValueError(\n f\"Input size must have a shape of either (D, H, W), (C, D, H, W) or (*, C, D, H, W). Got {input.shape}\")\n\n if len(input.shape) == 3:\n input = input.unsqueeze(0)\n\n if len(input.shape) == 4:\n input = input.unsqueeze(0)\n\n return input\n\n\ndef _validate_input_dtype(input: torch.Tensor, accepted_dtypes: List) -> None:\n r\"\"\"Check if the dtype of the input tensor is in the range of accepted_dtypes\n Args:\n input: torch.Tensor\n accepted_dtypes: List. e.g. [torch.float32, torch.float64]\n \"\"\"\n if input.dtype not in accepted_dtypes:\n raise TypeError(f\"Expected input of {accepted_dtypes}. 
Got {input.dtype}\")\n\n\ndef _validate_shape(shape: Union[Tuple, torch.Size], required_shapes: List[str] = [\"BCHW\"]) -> None:\n r\"\"\"Check if the dtype of the input tensor is in the range of accepted_dtypes\n Args:\n input: torch.Tensor\n required_shapes: List. e.g. [\"BCHW\", \"BCDHW\"]\n \"\"\"\n passed = False\n for required_shape in required_shapes:\n if len(shape) == len(required_shape):\n passed = True\n break\n if not passed:\n raise TypeError(f\"Expected input shape in {required_shape}. Got {shape}.\")\n\n\ndef _validate_input_shape(input: torch.Tensor, channel_index: int, number: int) -> bool:\n r\"\"\"Validate if an input has the right shape. e.g. to check if an input is channel first.\n If channel first, the second channel of an RGB input shall be fixed to 3. To verify using:\n _validate_input_shape(input, 1, 3)\n Args:\n input: torch.Tensor\n channel_index: int\n number: int\n Returns:\n bool\n \"\"\"\n return input.shape[channel_index] == number\n\n\ndef _adapted_uniform(\n shape: Union[Tuple, torch.Size],\n low: Union[float, int, torch.Tensor],\n high: Union[float, int, torch.Tensor],\n same_on_batch=False\n) -> torch.Tensor:\n r\"\"\" The uniform sampling function that accepts 'same_on_batch'.\n If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).\n By default, same_on_batch is set to False.\n \"\"\"\n if not isinstance(low, torch.Tensor):\n low = torch.tensor(low, dtype=torch.float32)\n if not isinstance(high, torch.Tensor):\n high = torch.tensor(high, dtype=torch.float32)\n dist = Uniform(low, high)\n if same_on_batch:\n return dist.rsample((1, *shape[1:])).repeat(shape[0])\n else:\n return dist.rsample(shape)\n\n\ndef _adapted_beta(\n shape: Union[Tuple, torch.Size],\n a: Union[float, int, torch.Tensor],\n b: Union[float, int, torch.Tensor],\n same_on_batch=False\n) -> torch.Tensor:\n r\"\"\" The beta sampling function that accepts 'same_on_batch'.\n If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).\n By default, same_on_batch is set to False.\n \"\"\"\n if not isinstance(a, torch.Tensor):\n a = torch.tensor(a, dtype=torch.float32)\n if not isinstance(b, torch.Tensor):\n b = torch.tensor(b, dtype=torch.float32)\n dist = Beta(a, b)\n if same_on_batch:\n return dist.rsample((1, *shape[1:])).repeat(shape[0])\n else:\n return dist.rsample(shape)\n\n\ndef _check_and_bound(factor: Union[torch.Tensor, float, Tuple[float, float], List[float]], name: str,\n center: float = 0., bounds: Tuple[float, float] = (0, float('inf'))) -> torch.Tensor:\n r\"\"\"Check inputs and compute the corresponding factor bounds\n \"\"\"\n factor_bound: torch.Tensor\n if not isinstance(factor, torch.Tensor):\n factor = torch.tensor(factor, dtype=torch.float32)\n\n if factor.dim() == 0:\n _center = torch.tensor(center, dtype=torch.float32)\n\n if factor < 0:\n raise ValueError(f\"If {name} is a single number number, it must be non negative. Got {factor.item()}\")\n\n factor_bound = torch.tensor([_center - factor, _center + factor], dtype=torch.float32)\n # Should be something other than clamp\n # Currently, single value factor will not out of scope as long as the user provided it.\n factor_bound = torch.clamp(factor_bound, bounds[0], bounds[1])\n\n elif factor.shape[0] == 2 and factor.dim() == 1:\n\n if not bounds[0] <= factor[0] or not bounds[1] >= factor[1]:\n raise ValueError(f\"{name} out of bounds. 
Expected inside {bounds}, got {factor}.\")\n\n if not bounds[0] <= factor[0] <= factor[1] <= bounds[1]:\n raise ValueError(f\"{name}[0] should be smaller than {name}[1] got {factor}\")\n\n factor_bound = factor\n\n else:\n\n raise TypeError(\n f\"The {name} should be a float number or a tuple with length 2 whose values move between {bounds}.\")\n\n return factor_bound\n\n\ndef _shape_validation(param: torch.Tensor, shape: Union[tuple, list], name: str) -> None:\n assert param.shape == torch.Size(shape), f\"Invalid shape for {name}. Expected {shape}. Got {param.shape}\"\n", "path": "kornia/augmentation/utils/helpers.py"}], "after_files": [{"content": "from typing import Tuple, Union, List, cast, Optional\n\nimport torch\nfrom torch.distributions import Uniform, Beta\n\n\ndef _infer_batch_shape(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:\n r\"\"\"Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)\n \"\"\"\n if isinstance(input, tuple):\n tensor = _transform_input(input[0])\n else:\n tensor = _transform_input(input)\n return tensor.shape\n\n\ndef _infer_batch_shape3d(input: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]) -> torch.Size:\n r\"\"\"Infer input shape. Input may be either (tensor,) or (tensor, transform_matrix)\n \"\"\"\n if isinstance(input, tuple):\n tensor = _transform_input3d(input[0])\n else:\n tensor = _transform_input3d(input)\n return tensor.shape\n\n\ndef _transform_input(input: torch.Tensor) -> torch.Tensor:\n r\"\"\"Reshape an input tensor to be (*, C, H, W). Accept either (H, W), (C, H, W) or (*, C, H, W).\n Args:\n input: torch.Tensor\n\n Returns:\n torch.Tensor\n \"\"\"\n if not torch.is_tensor(input):\n raise TypeError(f\"Input type is not a torch.Tensor. Got {type(input)}\")\n\n if len(input.shape) not in [2, 3, 4]:\n raise ValueError(\n f\"Input size must have a shape of either (H, W), (C, H, W) or (*, C, H, W). Got {input.shape}\")\n\n if len(input.shape) == 2:\n input = input.unsqueeze(0)\n\n if len(input.shape) == 3:\n input = input.unsqueeze(0)\n\n return input\n\n\ndef _transform_input3d(input: torch.Tensor) -> torch.Tensor:\n r\"\"\"Reshape an input tensor to be (*, C, D, H, W). Accept either (D, H, W), (C, D, H, W) or (*, C, D, H, W).\n Args:\n input: torch.Tensor\n\n Returns:\n torch.Tensor\n \"\"\"\n if not torch.is_tensor(input):\n raise TypeError(f\"Input type is not a torch.Tensor. Got {type(input)}\")\n\n if len(input.shape) not in [3, 4, 5]:\n raise ValueError(\n f\"Input size must have a shape of either (D, H, W), (C, D, H, W) or (*, C, D, H, W). Got {input.shape}\")\n\n if len(input.shape) == 3:\n input = input.unsqueeze(0)\n\n if len(input.shape) == 4:\n input = input.unsqueeze(0)\n\n return input\n\n\ndef _validate_input_dtype(input: torch.Tensor, accepted_dtypes: List) -> None:\n r\"\"\"Check if the dtype of the input tensor is in the range of accepted_dtypes\n Args:\n input: torch.Tensor\n accepted_dtypes: List. e.g. [torch.float32, torch.float64]\n \"\"\"\n if input.dtype not in accepted_dtypes:\n raise TypeError(f\"Expected input of {accepted_dtypes}. Got {input.dtype}\")\n\n\ndef _validate_shape(shape: Union[Tuple, torch.Size], required_shapes: List[str] = [\"BCHW\"]) -> None:\n r\"\"\"Check if the dtype of the input tensor is in the range of accepted_dtypes\n Args:\n input: torch.Tensor\n required_shapes: List. e.g. 
[\"BCHW\", \"BCDHW\"]\n \"\"\"\n passed = False\n for required_shape in required_shapes:\n if len(shape) == len(required_shape):\n passed = True\n break\n if not passed:\n raise TypeError(f\"Expected input shape in {required_shape}. Got {shape}.\")\n\n\ndef _validate_input_shape(input: torch.Tensor, channel_index: int, number: int) -> bool:\n r\"\"\"Validate if an input has the right shape. e.g. to check if an input is channel first.\n If channel first, the second channel of an RGB input shall be fixed to 3. To verify using:\n _validate_input_shape(input, 1, 3)\n Args:\n input: torch.Tensor\n channel_index: int\n number: int\n Returns:\n bool\n \"\"\"\n return input.shape[channel_index] == number\n\n\ndef _adapted_uniform(\n shape: Union[Tuple, torch.Size],\n low: Union[float, int, torch.Tensor],\n high: Union[float, int, torch.Tensor],\n same_on_batch=False\n) -> torch.Tensor:\n r\"\"\" The uniform sampling function that accepts 'same_on_batch'.\n If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).\n By default, same_on_batch is set to False.\n \"\"\"\n if not isinstance(low, torch.Tensor):\n low = torch.tensor(low, dtype=torch.float32)\n if not isinstance(high, torch.Tensor):\n high = torch.tensor(high, dtype=torch.float32)\n dist = Uniform(low, high)\n if same_on_batch:\n return dist.rsample((1, *shape[1:])).repeat(shape[0], *[1] * (len(shape) - 1))\n else:\n return dist.rsample(shape)\n\n\ndef _adapted_beta(\n shape: Union[Tuple, torch.Size],\n a: Union[float, int, torch.Tensor],\n b: Union[float, int, torch.Tensor],\n same_on_batch=False\n) -> torch.Tensor:\n r\"\"\" The beta sampling function that accepts 'same_on_batch'.\n If same_on_batch is True, all values generated will be exactly same given a batch_size (shape[0]).\n By default, same_on_batch is set to False.\n \"\"\"\n if not isinstance(a, torch.Tensor):\n a = torch.tensor(a, dtype=torch.float32)\n if not isinstance(b, torch.Tensor):\n b = torch.tensor(b, dtype=torch.float32)\n dist = Beta(a, b)\n if same_on_batch:\n return dist.rsample((1, *shape[1:])).repeat(shape[0])\n else:\n return dist.rsample(shape)\n\n\ndef _check_and_bound(factor: Union[torch.Tensor, float, Tuple[float, float], List[float]], name: str,\n center: float = 0., bounds: Tuple[float, float] = (0, float('inf'))) -> torch.Tensor:\n r\"\"\"Check inputs and compute the corresponding factor bounds\n \"\"\"\n factor_bound: torch.Tensor\n if not isinstance(factor, torch.Tensor):\n factor = torch.tensor(factor, dtype=torch.float32)\n\n if factor.dim() == 0:\n _center = torch.tensor(center, dtype=torch.float32)\n\n if factor < 0:\n raise ValueError(f\"If {name} is a single number number, it must be non negative. Got {factor.item()}\")\n\n factor_bound = torch.tensor([_center - factor, _center + factor], dtype=torch.float32)\n # Should be something other than clamp\n # Currently, single value factor will not out of scope as long as the user provided it.\n factor_bound = torch.clamp(factor_bound, bounds[0], bounds[1])\n\n elif factor.shape[0] == 2 and factor.dim() == 1:\n\n if not bounds[0] <= factor[0] or not bounds[1] >= factor[1]:\n raise ValueError(f\"{name} out of bounds. 
Expected inside {bounds}, got {factor}.\")\n\n if not bounds[0] <= factor[0] <= factor[1] <= bounds[1]:\n raise ValueError(f\"{name}[0] should be smaller than {name}[1] got {factor}\")\n\n factor_bound = factor\n\n else:\n\n raise TypeError(\n f\"The {name} should be a float number or a tuple with length 2 whose values move between {bounds}.\")\n\n return factor_bound\n\n\ndef _shape_validation(param: torch.Tensor, shape: Union[tuple, list], name: str) -> None:\n assert param.shape == torch.Size(shape), f\"Invalid shape for {name}. Expected {shape}. Got {param.shape}\"\n", "path": "kornia/augmentation/utils/helpers.py"}]} | 2,671 | 151 |
gh_patches_debug_36716 | rasdani/github-patches | git_diff | e-valuation__EvaP-2216 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Export Reward Point Summary
A new button `Export reward points` should be added to the right of the text showing the number of available reward points on the staff reward points redemption events page. Clicking the button should download a CSV file containing a summary of the reward points.
This file should contain the two columns `Email` and `Points` for each user, listing the number of points currently available for that user (grantings minus redemptions) next to the user's email address. A line should only be added for users where this number of available points is not zero.
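
A rough sketch of what such an export view could look like — names such as `UserProfile`, `reward_point_grantings`, and `reward_point_redemptions` follow the related names used elsewhere in the evap models and should be treated as assumptions here, not as the final patch:

```python
import csv

from django.db.models import Sum
from django.http import HttpResponse

from evap.evaluation.models import UserProfile


def reward_points_export(request):
    # One row per user whose balance (grantings minus redemptions) is non-zero.
    profiles = UserProfile.objects.annotate(
        points=Sum("reward_point_grantings__value", default=0)
        - Sum("reward_point_redemptions__value", default=0)
    ).exclude(points=0)

    response = HttpResponse(content_type="text/csv")
    response["Content-Disposition"] = 'attachment; filename="reward_points.csv"'
    writer = csv.writer(response)
    writer.writerow(["Email", "Points"])
    for profile in profiles:
        writer.writerow([profile.email, profile.points])
    return response
```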
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/rewards/views.py`
Content:
```
1 from datetime import datetime
2
3 from django.contrib import messages
4 from django.contrib.messages.views import SuccessMessageMixin
5 from django.core.exceptions import BadRequest, SuspiciousOperation
6 from django.db.models import Sum
7 from django.http import HttpResponse
8 from django.shortcuts import get_object_or_404, redirect, render
9 from django.urls import reverse_lazy
10 from django.utils.translation import get_language
11 from django.utils.translation import gettext as _
12 from django.utils.translation import gettext_lazy
13 from django.views.decorators.http import require_POST
14 from django.views.generic import CreateView, UpdateView
15
16 from evap.evaluation.auth import manager_required, reward_user_required
17 from evap.evaluation.models import Semester
18 from evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x
19 from evap.rewards.exporters import RewardsExporter
20 from evap.rewards.forms import RewardPointRedemptionEventForm
21 from evap.rewards.models import (
22 NoPointsSelectedError,
23 NotEnoughPointsError,
24 OutdatedRedemptionDataError,
25 RedemptionEventExpiredError,
26 RewardPointGranting,
27 RewardPointRedemption,
28 RewardPointRedemptionEvent,
29 SemesterActivation,
30 )
31 from evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions
32
33
34 def redeem_reward_points(request):
35 redemptions = {}
36 try:
37 for key, value in request.POST.items():
38 if key.startswith("points-"):
39 event_id = int(key.rpartition("-")[2])
40 redemptions[event_id] = int(value)
41 previous_redeemed_points = int(request.POST["previous_redeemed_points"])
42 except (ValueError, KeyError, TypeError) as e:
43 raise BadRequest from e
44
45 try:
46 save_redemptions(request, redemptions, previous_redeemed_points)
47 messages.success(request, _("You successfully redeemed your points."))
48 except (
49 NoPointsSelectedError,
50 NotEnoughPointsError,
51 RedemptionEventExpiredError,
52 OutdatedRedemptionDataError,
53 ) as error:
54 status_code = 400
55 if isinstance(error, NoPointsSelectedError):
56 error_string = _("You cannot redeem 0 points.")
57 elif isinstance(error, NotEnoughPointsError):
58 error_string = _("You don't have enough reward points.")
59 elif isinstance(error, RedemptionEventExpiredError):
60 error_string = _("Sorry, the deadline for this event expired already.")
61 elif isinstance(error, OutdatedRedemptionDataError):
62 status_code = 409
63 error_string = _(
64 "It appears that your browser sent multiple redemption requests. You can see all successful redemptions below."
65 )
66 messages.error(request, error_string)
67 return status_code
68 return 200
69
70
71 @reward_user_required
72 def index(request):
73 status = 200
74 if request.method == "POST":
75 status = redeem_reward_points(request)
76 total_points_available = reward_points_of_user(request.user)
77 reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)
78 reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user)
79 events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by("date")
80
81 granted_point_actions = [
82 (granting.granting_time, _("Reward for") + " " + granting.semester.name, granting.value, "")
83 for granting in reward_point_grantings
84 ]
85 redemption_point_actions = [
86 (redemption.redemption_time, redemption.event.name, "", redemption.value)
87 for redemption in reward_point_redemptions
88 ]
89
90 reward_point_actions = sorted(
91 granted_point_actions + redemption_point_actions, key=lambda action: action[0], reverse=True
92 )
93
94 template_data = {
95 "reward_point_actions": reward_point_actions,
96 "total_points_available": total_points_available,
97 "total_points_spent": sum(redemption.value for redemption in reward_point_redemptions),
98 "events": events,
99 }
100 return render(request, "rewards_index.html", template_data, status=status)
101
102
103 @manager_required
104 def reward_point_redemption_events(request):
105 upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by("date")
106 past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by("-date")
107 total_points_granted = RewardPointGranting.objects.aggregate(Sum("value"))["value__sum"] or 0
108 total_points_redeemed = RewardPointRedemption.objects.aggregate(Sum("value"))["value__sum"] or 0
109 total_points_available = total_points_granted - total_points_redeemed
110 template_data = {
111 "upcoming_events": upcoming_events,
112 "past_events": past_events,
113 "total_points_available": total_points_available,
114 }
115 return render(request, "rewards_reward_point_redemption_events.html", template_data)
116
117
118 @manager_required
119 class RewardPointRedemptionEventCreateView(SuccessMessageMixin, CreateView):
120 model = RewardPointRedemptionEvent
121 form_class = RewardPointRedemptionEventForm
122 template_name = "rewards_reward_point_redemption_event_form.html"
123 success_url = reverse_lazy("rewards:reward_point_redemption_events")
124 success_message = gettext_lazy("Successfully created event.")
125
126
127 @manager_required
128 class RewardPointRedemptionEventEditView(SuccessMessageMixin, UpdateView):
129 model = RewardPointRedemptionEvent
130 form_class = RewardPointRedemptionEventForm
131 template_name = "rewards_reward_point_redemption_event_form.html"
132 success_url = reverse_lazy("rewards:reward_point_redemption_events")
133 success_message = gettext_lazy("Successfully updated event.")
134 pk_url_kwarg = "event_id"
135 context_object_name = "event"
136
137
138 @require_POST
139 @manager_required
140 def reward_point_redemption_event_delete(request):
141 event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, "event_id")
142
143 if not event.can_delete:
144 raise SuspiciousOperation("Deleting redemption event not allowed")
145 event.delete()
146 return HttpResponse() # 200 OK
147
148
149 @manager_required
150 def reward_point_redemption_event_export(request, event_id):
151 event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)
152
153 filename = _("RewardPoints") + f"-{event.date}-{event.name}-{get_language()}.xls"
154 response = AttachmentResponse(filename, content_type="application/vnd.ms-excel")
155
156 RewardsExporter().export(response, event.redemptions_by_user())
157
158 return response
159
160
161 @require_POST
162 @manager_required
163 def semester_activation_edit(request, semester_id):
164 semester = get_object_or_404(Semester, id=semester_id)
165 status = request.POST.get("activation_status")
166 if status == "on":
167 active = True
168 elif status == "off":
169 active = False
170 else:
171 raise SuspiciousOperation("Invalid activation keyword")
172 SemesterActivation.objects.update_or_create(semester=semester, defaults={"is_active": active})
173 if active:
174 grant_eligible_reward_points_for_semester(request, semester)
175 return redirect("staff:semester_view", semester_id)
176
```
Path: `evap/rewards/urls.py`
Content:
```
1 from django.urls import path
2
3 from evap.rewards import views
4
5 app_name = "rewards"
6
7 urlpatterns = [
8 path("", views.index, name="index"),
9
10 path("reward_point_redemption_events/", views.reward_point_redemption_events, name="reward_point_redemption_events"),
11 path("reward_point_redemption_event/create", views.RewardPointRedemptionEventCreateView.as_view(), name="reward_point_redemption_event_create"),
12 path("reward_point_redemption_event/<int:event_id>/edit", views.RewardPointRedemptionEventEditView.as_view(), name="reward_point_redemption_event_edit"),
13 path("reward_point_redemption_event/<int:event_id>/export", views.reward_point_redemption_event_export, name="reward_point_redemption_event_export"),
14 path("reward_point_redemption_event/delete", views.reward_point_redemption_event_delete, name="reward_point_redemption_event_delete"),
15
16 path("semester_activation/<int:semester_id>/edit", views.semester_activation_edit, name="semester_activation_edit"),
17 ]
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/rewards/urls.py b/evap/rewards/urls.py
--- a/evap/rewards/urls.py
+++ b/evap/rewards/urls.py
@@ -7,6 +7,7 @@
urlpatterns = [
path("", views.index, name="index"),
+ path("reward_points_export", views.reward_points_export, name="reward_points_export"),
path("reward_point_redemption_events/", views.reward_point_redemption_events, name="reward_point_redemption_events"),
path("reward_point_redemption_event/create", views.RewardPointRedemptionEventCreateView.as_view(), name="reward_point_redemption_event_create"),
path("reward_point_redemption_event/<int:event_id>/edit", views.RewardPointRedemptionEventEditView.as_view(), name="reward_point_redemption_event_edit"),
diff --git a/evap/rewards/views.py b/evap/rewards/views.py
--- a/evap/rewards/views.py
+++ b/evap/rewards/views.py
@@ -1,3 +1,4 @@
+import csv
from datetime import datetime
from django.contrib import messages
@@ -14,7 +15,7 @@
from django.views.generic import CreateView, UpdateView
from evap.evaluation.auth import manager_required, reward_user_required
-from evap.evaluation.models import Semester
+from evap.evaluation.models import Semester, UserProfile
from evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x
from evap.rewards.exporters import RewardsExporter
from evap.rewards.forms import RewardPointRedemptionEventForm
@@ -158,6 +159,32 @@
return response
+@manager_required
+def reward_points_export(request):
+ filename = _("RewardPoints") + f"-{get_language()}.csv"
+ response = AttachmentResponse(filename, content_type="text/csv")
+
+ writer = csv.writer(response, delimiter=";", lineterminator="\n")
+ writer.writerow([_("Email address"), _("Number of points")])
+ profiles_with_points = (
+ UserProfile.objects.annotate(
+ points=Sum("reward_point_grantings__value", default=0) - Sum("reward_point_redemptions__value", default=0)
+ )
+ .filter(points__gt=0)
+ .order_by("-points")
+ )
+
+ for profile in profiles_with_points.all():
+ writer.writerow(
+ [
+ profile.email,
+ profile.points,
+ ]
+ )
+
+ return response
+
+
@require_POST
@manager_required
def semester_activation_edit(request, semester_id):
| {"golden_diff": "diff --git a/evap/rewards/urls.py b/evap/rewards/urls.py\n--- a/evap/rewards/urls.py\n+++ b/evap/rewards/urls.py\n@@ -7,6 +7,7 @@\n urlpatterns = [\n path(\"\", views.index, name=\"index\"),\n \n+ path(\"reward_points_export\", views.reward_points_export, name=\"reward_points_export\"),\n path(\"reward_point_redemption_events/\", views.reward_point_redemption_events, name=\"reward_point_redemption_events\"),\n path(\"reward_point_redemption_event/create\", views.RewardPointRedemptionEventCreateView.as_view(), name=\"reward_point_redemption_event_create\"),\n path(\"reward_point_redemption_event/<int:event_id>/edit\", views.RewardPointRedemptionEventEditView.as_view(), name=\"reward_point_redemption_event_edit\"),\ndiff --git a/evap/rewards/views.py b/evap/rewards/views.py\n--- a/evap/rewards/views.py\n+++ b/evap/rewards/views.py\n@@ -1,3 +1,4 @@\n+import csv\n from datetime import datetime\n \n from django.contrib import messages\n@@ -14,7 +15,7 @@\n from django.views.generic import CreateView, UpdateView\n \n from evap.evaluation.auth import manager_required, reward_user_required\n-from evap.evaluation.models import Semester\n+from evap.evaluation.models import Semester, UserProfile\n from evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x\n from evap.rewards.exporters import RewardsExporter\n from evap.rewards.forms import RewardPointRedemptionEventForm\n@@ -158,6 +159,32 @@\n return response\n \n \n+@manager_required\n+def reward_points_export(request):\n+ filename = _(\"RewardPoints\") + f\"-{get_language()}.csv\"\n+ response = AttachmentResponse(filename, content_type=\"text/csv\")\n+\n+ writer = csv.writer(response, delimiter=\";\", lineterminator=\"\\n\")\n+ writer.writerow([_(\"Email address\"), _(\"Number of points\")])\n+ profiles_with_points = (\n+ UserProfile.objects.annotate(\n+ points=Sum(\"reward_point_grantings__value\", default=0) - Sum(\"reward_point_redemptions__value\", default=0)\n+ )\n+ .filter(points__gt=0)\n+ .order_by(\"-points\")\n+ )\n+\n+ for profile in profiles_with_points.all():\n+ writer.writerow(\n+ [\n+ profile.email,\n+ profile.points,\n+ ]\n+ )\n+\n+ return response\n+\n+\n @require_POST\n @manager_required\n def semester_activation_edit(request, semester_id):\n", "issue": "Export Reward Point Summary\nA new button `Export reward points` should be added to the right of the text showing the number of available reward points on the staff reward points redemption events page. Clicking the button should download a CSV file containing a summary of the reward points.\r\n\r\nThis file should contain the two columns `Email` and `Points` for each user, listing the number of points currently available for that user (grantings minus redemptions) next to the user's email address. 
A line should only be added for users where this number of available points is not zero.\n", "before_files": [{"content": "from datetime import datetime\n\nfrom django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.exceptions import BadRequest, SuspiciousOperation\nfrom django.db.models import Sum\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.urls import reverse_lazy\nfrom django.utils.translation import get_language\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\nfrom django.views.decorators.http import require_POST\nfrom django.views.generic import CreateView, UpdateView\n\nfrom evap.evaluation.auth import manager_required, reward_user_required\nfrom evap.evaluation.models import Semester\nfrom evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x\nfrom evap.rewards.exporters import RewardsExporter\nfrom evap.rewards.forms import RewardPointRedemptionEventForm\nfrom evap.rewards.models import (\n NoPointsSelectedError,\n NotEnoughPointsError,\n OutdatedRedemptionDataError,\n RedemptionEventExpiredError,\n RewardPointGranting,\n RewardPointRedemption,\n RewardPointRedemptionEvent,\n SemesterActivation,\n)\nfrom evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions\n\n\ndef redeem_reward_points(request):\n redemptions = {}\n try:\n for key, value in request.POST.items():\n if key.startswith(\"points-\"):\n event_id = int(key.rpartition(\"-\")[2])\n redemptions[event_id] = int(value)\n previous_redeemed_points = int(request.POST[\"previous_redeemed_points\"])\n except (ValueError, KeyError, TypeError) as e:\n raise BadRequest from e\n\n try:\n save_redemptions(request, redemptions, previous_redeemed_points)\n messages.success(request, _(\"You successfully redeemed your points.\"))\n except (\n NoPointsSelectedError,\n NotEnoughPointsError,\n RedemptionEventExpiredError,\n OutdatedRedemptionDataError,\n ) as error:\n status_code = 400\n if isinstance(error, NoPointsSelectedError):\n error_string = _(\"You cannot redeem 0 points.\")\n elif isinstance(error, NotEnoughPointsError):\n error_string = _(\"You don't have enough reward points.\")\n elif isinstance(error, RedemptionEventExpiredError):\n error_string = _(\"Sorry, the deadline for this event expired already.\")\n elif isinstance(error, OutdatedRedemptionDataError):\n status_code = 409\n error_string = _(\n \"It appears that your browser sent multiple redemption requests. 
You can see all successful redemptions below.\"\n )\n messages.error(request, error_string)\n return status_code\n return 200\n\n\n@reward_user_required\ndef index(request):\n status = 200\n if request.method == \"POST\":\n status = redeem_reward_points(request)\n total_points_available = reward_points_of_user(request.user)\n reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)\n reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user)\n events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n\n granted_point_actions = [\n (granting.granting_time, _(\"Reward for\") + \" \" + granting.semester.name, granting.value, \"\")\n for granting in reward_point_grantings\n ]\n redemption_point_actions = [\n (redemption.redemption_time, redemption.event.name, \"\", redemption.value)\n for redemption in reward_point_redemptions\n ]\n\n reward_point_actions = sorted(\n granted_point_actions + redemption_point_actions, key=lambda action: action[0], reverse=True\n )\n\n template_data = {\n \"reward_point_actions\": reward_point_actions,\n \"total_points_available\": total_points_available,\n \"total_points_spent\": sum(redemption.value for redemption in reward_point_redemptions),\n \"events\": events,\n }\n return render(request, \"rewards_index.html\", template_data, status=status)\n\n\n@manager_required\ndef reward_point_redemption_events(request):\n upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by(\"-date\")\n total_points_granted = RewardPointGranting.objects.aggregate(Sum(\"value\"))[\"value__sum\"] or 0\n total_points_redeemed = RewardPointRedemption.objects.aggregate(Sum(\"value\"))[\"value__sum\"] or 0\n total_points_available = total_points_granted - total_points_redeemed\n template_data = {\n \"upcoming_events\": upcoming_events,\n \"past_events\": past_events,\n \"total_points_available\": total_points_available,\n }\n return render(request, \"rewards_reward_point_redemption_events.html\", template_data)\n\n\n@manager_required\nclass RewardPointRedemptionEventCreateView(SuccessMessageMixin, CreateView):\n model = RewardPointRedemptionEvent\n form_class = RewardPointRedemptionEventForm\n template_name = \"rewards_reward_point_redemption_event_form.html\"\n success_url = reverse_lazy(\"rewards:reward_point_redemption_events\")\n success_message = gettext_lazy(\"Successfully created event.\")\n\n\n@manager_required\nclass RewardPointRedemptionEventEditView(SuccessMessageMixin, UpdateView):\n model = RewardPointRedemptionEvent\n form_class = RewardPointRedemptionEventForm\n template_name = \"rewards_reward_point_redemption_event_form.html\"\n success_url = reverse_lazy(\"rewards:reward_point_redemption_events\")\n success_message = gettext_lazy(\"Successfully updated event.\")\n pk_url_kwarg = \"event_id\"\n context_object_name = \"event\"\n\n\n@require_POST\n@manager_required\ndef reward_point_redemption_event_delete(request):\n event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, \"event_id\")\n\n if not event.can_delete:\n raise SuspiciousOperation(\"Deleting redemption event not allowed\")\n event.delete()\n return HttpResponse() # 200 OK\n\n\n@manager_required\ndef reward_point_redemption_event_export(request, event_id):\n event = 
get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n filename = _(\"RewardPoints\") + f\"-{event.date}-{event.name}-{get_language()}.xls\"\n response = AttachmentResponse(filename, content_type=\"application/vnd.ms-excel\")\n\n RewardsExporter().export(response, event.redemptions_by_user())\n\n return response\n\n\n@require_POST\n@manager_required\ndef semester_activation_edit(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n status = request.POST.get(\"activation_status\")\n if status == \"on\":\n active = True\n elif status == \"off\":\n active = False\n else:\n raise SuspiciousOperation(\"Invalid activation keyword\")\n SemesterActivation.objects.update_or_create(semester=semester, defaults={\"is_active\": active})\n if active:\n grant_eligible_reward_points_for_semester(request, semester)\n return redirect(\"staff:semester_view\", semester_id)\n", "path": "evap/rewards/views.py"}, {"content": "from django.urls import path\n\nfrom evap.rewards import views\n\napp_name = \"rewards\"\n\nurlpatterns = [\n path(\"\", views.index, name=\"index\"),\n\n path(\"reward_point_redemption_events/\", views.reward_point_redemption_events, name=\"reward_point_redemption_events\"),\n path(\"reward_point_redemption_event/create\", views.RewardPointRedemptionEventCreateView.as_view(), name=\"reward_point_redemption_event_create\"),\n path(\"reward_point_redemption_event/<int:event_id>/edit\", views.RewardPointRedemptionEventEditView.as_view(), name=\"reward_point_redemption_event_edit\"),\n path(\"reward_point_redemption_event/<int:event_id>/export\", views.reward_point_redemption_event_export, name=\"reward_point_redemption_event_export\"),\n path(\"reward_point_redemption_event/delete\", views.reward_point_redemption_event_delete, name=\"reward_point_redemption_event_delete\"),\n\n path(\"semester_activation/<int:semester_id>/edit\", views.semester_activation_edit, name=\"semester_activation_edit\"),\n]\n", "path": "evap/rewards/urls.py"}], "after_files": [{"content": "import csv\nfrom datetime import datetime\n\nfrom django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.core.exceptions import BadRequest, SuspiciousOperation\nfrom django.db.models import Sum\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.urls import reverse_lazy\nfrom django.utils.translation import get_language\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\nfrom django.views.decorators.http import require_POST\nfrom django.views.generic import CreateView, UpdateView\n\nfrom evap.evaluation.auth import manager_required, reward_user_required\nfrom evap.evaluation.models import Semester, UserProfile\nfrom evap.evaluation.tools import AttachmentResponse, get_object_from_dict_pk_entry_or_logged_40x\nfrom evap.rewards.exporters import RewardsExporter\nfrom evap.rewards.forms import RewardPointRedemptionEventForm\nfrom evap.rewards.models import (\n NoPointsSelectedError,\n NotEnoughPointsError,\n OutdatedRedemptionDataError,\n RedemptionEventExpiredError,\n RewardPointGranting,\n RewardPointRedemption,\n RewardPointRedemptionEvent,\n SemesterActivation,\n)\nfrom evap.rewards.tools import grant_eligible_reward_points_for_semester, reward_points_of_user, save_redemptions\n\n\ndef redeem_reward_points(request):\n redemptions = {}\n try:\n for key, value in request.POST.items():\n if key.startswith(\"points-\"):\n event_id = 
int(key.rpartition(\"-\")[2])\n redemptions[event_id] = int(value)\n previous_redeemed_points = int(request.POST[\"previous_redeemed_points\"])\n except (ValueError, KeyError, TypeError) as e:\n raise BadRequest from e\n\n try:\n save_redemptions(request, redemptions, previous_redeemed_points)\n messages.success(request, _(\"You successfully redeemed your points.\"))\n except (\n NoPointsSelectedError,\n NotEnoughPointsError,\n RedemptionEventExpiredError,\n OutdatedRedemptionDataError,\n ) as error:\n status_code = 400\n if isinstance(error, NoPointsSelectedError):\n error_string = _(\"You cannot redeem 0 points.\")\n elif isinstance(error, NotEnoughPointsError):\n error_string = _(\"You don't have enough reward points.\")\n elif isinstance(error, RedemptionEventExpiredError):\n error_string = _(\"Sorry, the deadline for this event expired already.\")\n elif isinstance(error, OutdatedRedemptionDataError):\n status_code = 409\n error_string = _(\n \"It appears that your browser sent multiple redemption requests. You can see all successful redemptions below.\"\n )\n messages.error(request, error_string)\n return status_code\n return 200\n\n\n@reward_user_required\ndef index(request):\n status = 200\n if request.method == \"POST\":\n status = redeem_reward_points(request)\n total_points_available = reward_points_of_user(request.user)\n reward_point_grantings = RewardPointGranting.objects.filter(user_profile=request.user)\n reward_point_redemptions = RewardPointRedemption.objects.filter(user_profile=request.user)\n events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n\n granted_point_actions = [\n (granting.granting_time, _(\"Reward for\") + \" \" + granting.semester.name, granting.value, \"\")\n for granting in reward_point_grantings\n ]\n redemption_point_actions = [\n (redemption.redemption_time, redemption.event.name, \"\", redemption.value)\n for redemption in reward_point_redemptions\n ]\n\n reward_point_actions = sorted(\n granted_point_actions + redemption_point_actions, key=lambda action: action[0], reverse=True\n )\n\n template_data = {\n \"reward_point_actions\": reward_point_actions,\n \"total_points_available\": total_points_available,\n \"total_points_spent\": sum(redemption.value for redemption in reward_point_redemptions),\n \"events\": events,\n }\n return render(request, \"rewards_index.html\", template_data, status=status)\n\n\n@manager_required\ndef reward_point_redemption_events(request):\n upcoming_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__gte=datetime.now()).order_by(\"date\")\n past_events = RewardPointRedemptionEvent.objects.filter(redeem_end_date__lt=datetime.now()).order_by(\"-date\")\n total_points_granted = RewardPointGranting.objects.aggregate(Sum(\"value\"))[\"value__sum\"] or 0\n total_points_redeemed = RewardPointRedemption.objects.aggregate(Sum(\"value\"))[\"value__sum\"] or 0\n total_points_available = total_points_granted - total_points_redeemed\n template_data = {\n \"upcoming_events\": upcoming_events,\n \"past_events\": past_events,\n \"total_points_available\": total_points_available,\n }\n return render(request, \"rewards_reward_point_redemption_events.html\", template_data)\n\n\n@manager_required\nclass RewardPointRedemptionEventCreateView(SuccessMessageMixin, CreateView):\n model = RewardPointRedemptionEvent\n form_class = RewardPointRedemptionEventForm\n template_name = \"rewards_reward_point_redemption_event_form.html\"\n success_url = 
reverse_lazy(\"rewards:reward_point_redemption_events\")\n success_message = gettext_lazy(\"Successfully created event.\")\n\n\n@manager_required\nclass RewardPointRedemptionEventEditView(SuccessMessageMixin, UpdateView):\n model = RewardPointRedemptionEvent\n form_class = RewardPointRedemptionEventForm\n template_name = \"rewards_reward_point_redemption_event_form.html\"\n success_url = reverse_lazy(\"rewards:reward_point_redemption_events\")\n success_message = gettext_lazy(\"Successfully updated event.\")\n pk_url_kwarg = \"event_id\"\n context_object_name = \"event\"\n\n\n@require_POST\n@manager_required\ndef reward_point_redemption_event_delete(request):\n event = get_object_from_dict_pk_entry_or_logged_40x(RewardPointRedemptionEvent, request.POST, \"event_id\")\n\n if not event.can_delete:\n raise SuspiciousOperation(\"Deleting redemption event not allowed\")\n event.delete()\n return HttpResponse() # 200 OK\n\n\n@manager_required\ndef reward_point_redemption_event_export(request, event_id):\n event = get_object_or_404(RewardPointRedemptionEvent, id=event_id)\n\n filename = _(\"RewardPoints\") + f\"-{event.date}-{event.name}-{get_language()}.xls\"\n response = AttachmentResponse(filename, content_type=\"application/vnd.ms-excel\")\n\n RewardsExporter().export(response, event.redemptions_by_user())\n\n return response\n\n\n@manager_required\ndef reward_points_export(request):\n filename = _(\"RewardPoints\") + f\"-{get_language()}.csv\"\n response = AttachmentResponse(filename, content_type=\"text/csv\")\n\n writer = csv.writer(response, delimiter=\";\", lineterminator=\"\\n\")\n writer.writerow([_(\"Email address\"), _(\"Number of points\")])\n profiles_with_points = (\n UserProfile.objects.annotate(\n points=Sum(\"reward_point_grantings__value\", default=0) - Sum(\"reward_point_redemptions__value\", default=0)\n )\n .filter(points__gt=0)\n .order_by(\"-points\")\n )\n\n for profile in profiles_with_points.all():\n writer.writerow(\n [\n profile.email,\n profile.points,\n ]\n )\n\n return response\n\n\n@require_POST\n@manager_required\ndef semester_activation_edit(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n status = request.POST.get(\"activation_status\")\n if status == \"on\":\n active = True\n elif status == \"off\":\n active = False\n else:\n raise SuspiciousOperation(\"Invalid activation keyword\")\n SemesterActivation.objects.update_or_create(semester=semester, defaults={\"is_active\": active})\n if active:\n grant_eligible_reward_points_for_semester(request, semester)\n return redirect(\"staff:semester_view\", semester_id)\n", "path": "evap/rewards/views.py"}, {"content": "from django.urls import path\n\nfrom evap.rewards import views\n\napp_name = \"rewards\"\n\nurlpatterns = [\n path(\"\", views.index, name=\"index\"),\n\n path(\"reward_points_export\", views.reward_points_export, name=\"reward_points_export\"),\n path(\"reward_point_redemption_events/\", views.reward_point_redemption_events, name=\"reward_point_redemption_events\"),\n path(\"reward_point_redemption_event/create\", views.RewardPointRedemptionEventCreateView.as_view(), name=\"reward_point_redemption_event_create\"),\n path(\"reward_point_redemption_event/<int:event_id>/edit\", views.RewardPointRedemptionEventEditView.as_view(), name=\"reward_point_redemption_event_edit\"),\n path(\"reward_point_redemption_event/<int:event_id>/export\", views.reward_point_redemption_event_export, name=\"reward_point_redemption_event_export\"),\n 
path(\"reward_point_redemption_event/delete\", views.reward_point_redemption_event_delete, name=\"reward_point_redemption_event_delete\"),\n\n path(\"semester_activation/<int:semester_id>/edit\", views.semester_activation_edit, name=\"semester_activation_edit\"),\n]\n", "path": "evap/rewards/urls.py"}]} | 2,617 | 576 |
gh_patches_debug_12696 | rasdani/github-patches | git_diff | cloudtools__troposphere-1968 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for Application Load Balancer to TargetGroup's valid types
Currently the valid TargetTypes include Instance, IP and Lambda.
ALB should also be added as a valid target group type - [ref](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticloadbalancingv2-targetgroup.html#cfn-elasticloadbalancingv2-targetgroup-targettype).
--- END ISSUE ---
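For illustration, one way the validator could be extended to accept the new type is sketched below. The constant name `TARGET_TYPE_ALB` is an assumption that simply mirrors the existing `TARGET_TYPE_*` naming in `troposphere/elasticloadbalancingv2.py`; the string value `"alb"` comes from the linked AWS documentation, and the actual patch may differ.

```python
# Sketch only: extend the set of accepted TargetGroup target types with "alb".
TARGET_TYPE_ALB = "alb"
TARGET_TYPE_INSTANCE = "instance"
TARGET_TYPE_IP = "ip"
TARGET_TYPE_LAMBDA = "lambda"


def validate_target_type(target_type):
    valid_types = [
        TARGET_TYPE_ALB,
        TARGET_TYPE_INSTANCE,
        TARGET_TYPE_IP,
        TARGET_TYPE_LAMBDA,
    ]
    if target_type not in valid_types:
        raise ValueError(
            'TargetGroup.TargetType must be one of: "%s"' % ", ".join(valid_types)
        )
    return target_type


assert validate_target_type("alb") == "alb"  # would raise before the change
```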
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/elasticloadbalancingv2.py`
Content:
```
1 # Copyright (c) 2012-2013, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty, If, Tags
7 from .validators import (
8 boolean,
9 elb_name,
10 exactly_one,
11 integer,
12 network_port,
13 one_of,
14 tg_healthcheck_port,
15 )
16
17
18 class LoadBalancerAttributes(AWSProperty):
19 props = {"Key": (str, False), "Value": (str, False)}
20
21
22 class Certificate(AWSProperty):
23 props = {"CertificateArn": (str, False)}
24
25
26 class AuthenticateCognitoConfig(AWSProperty):
27 props = {
28 "AuthenticationRequestExtraParams": (dict, False),
29 "OnUnauthenticatedRequest": (str, False),
30 "Scope": (str, False),
31 "SessionCookieName": (str, False),
32 "SessionTimeout": (integer, False),
33 "UserPoolArn": (str, True),
34 "UserPoolClientId": (str, True),
35 "UserPoolDomain": (str, True),
36 }
37
38
39 class AuthenticateOidcConfig(AWSProperty):
40 props = {
41 "AuthenticationRequestExtraParams": (dict, False),
42 "AuthorizationEndpoint": (str, True),
43 "ClientId": (str, True),
44 "ClientSecret": (str, True),
45 "Issuer": (str, True),
46 "OnUnauthenticatedRequest": (str, False),
47 "Scope": (str, False),
48 "SessionCookieName": (str, False),
49 "SessionTimeout": (integer, False),
50 "TokenEndpoint": (str, True),
51 "UserInfoEndpoint": (str, True),
52 }
53
54
55 class RedirectConfig(AWSProperty):
56 # https://docs.aws.amazon.com/
57 # AWSCloudFormation/latest/UserGuide/
58 # aws-properties-elasticloadbalancingv2-listener-redirectconfig.html
59 props = {
60 "Host": (str, False),
61 "Path": (str, False),
62 "Port": (str, False),
63 "Protocol": (str, False),
64 "Query": (str, False),
65 "StatusCode": (str, True),
66 }
67
68 def validate(self):
69 one_of(
70 self.__class__.__name__,
71 self.properties,
72 "StatusCode",
73 ["HTTP_301", "HTTP_302"],
74 )
75
76
77 class FixedResponseConfig(AWSProperty):
78 props = {
79 "ContentType": (str, False),
80 "MessageBody": (str, False),
81 "StatusCode": (str, True),
82 }
83
84 def validate(self):
85 one_of(
86 self.__class__.__name__,
87 self.properties,
88 "ContentType",
89 [
90 None,
91 "text/plain",
92 "text/css",
93 "text/html",
94 "application/javascript",
95 "application/json",
96 ],
97 )
98
99
100 class TargetGroupTuple(AWSProperty):
101 props = {
102 "TargetGroupArn": (str, True),
103 "Weight": (integer, False),
104 }
105
106
107 class TargetGroupStickinessConfig(AWSProperty):
108 props = {
109 "DurationSeconds": (integer, False),
110 "Enabled": (boolean, False),
111 }
112
113
114 class ForwardConfig(AWSProperty):
115 props = {
116 "TargetGroups": ([TargetGroupTuple], False),
117 "TargetGroupStickinessConfig": (TargetGroupStickinessConfig, False),
118 }
119
120
121 class Action(AWSProperty):
122 props = {
123 "AuthenticateCognitoConfig": (AuthenticateCognitoConfig, False),
124 "AuthenticateOidcConfig": (AuthenticateOidcConfig, False),
125 "FixedResponseConfig": (FixedResponseConfig, False),
126 "Order": (integer, False),
127 "RedirectConfig": (RedirectConfig, False),
128 "TargetGroupArn": (str, False),
129 "ForwardConfig": (ForwardConfig, False),
130 "Type": (str, True),
131 }
132
133 @staticmethod
134 def any_property(require_prop, properties):
135 return any(p in require_prop for p in properties)
136
137 def validate(self):
138 one_of(
139 self.__class__.__name__,
140 self.properties,
141 "Type",
142 [
143 "forward",
144 "redirect",
145 "fixed-response",
146 "authenticate-cognito",
147 "authenticate-oidc",
148 ],
149 )
150
151 def requires(action_type, prop):
152 properties = [definition for definition in self.properties.keys()]
153 if self.properties.get("Type") == action_type and not self.any_property(
154 prop, properties
155 ):
156 raise ValueError(
157 'Type "%s" requires definition of "%s"' % (action_type, prop)
158 )
159 if (
160 self.any_property(prop, properties)
161 and self.properties.get("Type") != action_type
162 ):
163 raise ValueError(
164 'Definition of "%s" allowed only with '
165 'type "%s", was: "%s"'
166 % (prop, action_type, self.properties.get("Type"))
167 )
168
169 requires("forward", ["TargetGroupArn", "ForwardConfig"])
170 requires("redirect", ["RedirectConfig"])
171 requires("fixed-response", ["FixedResponseConfig"])
172
173
174 class HostHeaderConfig(AWSProperty):
175 props = {
176 "Values": ([str], False),
177 }
178
179
180 class HttpHeaderConfig(AWSProperty):
181 props = {
182 "HttpHeaderName": (str, False),
183 "Values": ([str], False),
184 }
185
186
187 class HttpRequestMethodConfig(AWSProperty):
188 props = {
189 "Values": ([str], False),
190 }
191
192
193 class PathPatternConfig(AWSProperty):
194 props = {
195 "Values": ([str], False),
196 }
197
198
199 class QueryStringKeyValue(AWSProperty):
200 props = {
201 "Key": (str, False),
202 "Value": (str, False),
203 }
204
205
206 class QueryStringConfig(AWSProperty):
207 props = {
208 "Values": ([QueryStringKeyValue], False),
209 }
210
211
212 class SourceIpConfig(AWSProperty):
213 props = {
214 "Values": ([str], False),
215 }
216
217
218 class Condition(AWSProperty):
219 props = {
220 "Field": (str, False),
221 "HostHeaderConfig": (HostHeaderConfig, False),
222 "HttpHeaderConfig": (HttpHeaderConfig, False),
223 "HttpRequestMethodConfig": (HttpRequestMethodConfig, False),
224 "PathPatternConfig": (PathPatternConfig, False),
225 "QueryStringConfig": (QueryStringConfig, False),
226 "SourceIpConfig": (SourceIpConfig, False),
227 "Values": ([str], False),
228 }
229
230
231 class Matcher(AWSProperty):
232 props = {"HttpCode": (str, True)}
233
234
235 class SubnetMapping(AWSProperty):
236 props = {
237 "AllocationId": (str, False),
238 "PrivateIPv4Address": (str, False),
239 "SubnetId": (str, True),
240 }
241
242
243 class TargetGroupAttribute(AWSProperty):
244 props = {"Key": (str, False), "Value": (str, False)}
245
246
247 class TargetDescription(AWSProperty):
248 props = {
249 "AvailabilityZone": (str, False),
250 "Id": (str, True),
251 "Port": (network_port, False),
252 }
253
254
255 class Listener(AWSObject):
256 resource_type = "AWS::ElasticLoadBalancingV2::Listener"
257
258 props = {
259 "AlpnPolicy": ([str], False),
260 "Certificates": ([Certificate], False),
261 "DefaultActions": ([Action], True),
262 "LoadBalancerArn": (str, True),
263 "Port": (network_port, True),
264 "Protocol": (str, True),
265 "SslPolicy": (str, False),
266 }
267
268
269 class ListenerCertificate(AWSObject):
270 resource_type = "AWS::ElasticLoadBalancingV2::ListenerCertificate"
271
272 props = {
273 "Certificates": ([Certificate], True),
274 "ListenerArn": (str, True),
275 }
276
277
278 class ListenerRule(AWSObject):
279 resource_type = "AWS::ElasticLoadBalancingV2::ListenerRule"
280
281 props = {
282 "Actions": ([Action], True),
283 "Conditions": ([Condition], True),
284 "ListenerArn": (str, True),
285 "Priority": (integer, True),
286 }
287
288
289 TARGET_TYPE_INSTANCE = "instance"
290 TARGET_TYPE_IP = "ip"
291 TARGET_TYPE_LAMBDA = "lambda"
292
293
294 def validate_target_type(target_type):
295 valid_types = [TARGET_TYPE_INSTANCE, TARGET_TYPE_IP, TARGET_TYPE_LAMBDA]
296 if target_type not in valid_types:
297 raise ValueError(
298 'TargetGroup.TargetType must be one of: "%s"' % ", ".join(valid_types)
299 )
300 return target_type
301
302
303 class TargetGroup(AWSObject):
304 resource_type = "AWS::ElasticLoadBalancingV2::TargetGroup"
305
306 props = {
307 "HealthCheckEnabled": (boolean, False),
308 "HealthCheckIntervalSeconds": (integer, False),
309 "HealthCheckPath": (str, False),
310 "HealthCheckPort": (tg_healthcheck_port, False),
311 "HealthCheckProtocol": (str, False),
312 "HealthCheckTimeoutSeconds": (integer, False),
313 "HealthyThresholdCount": (integer, False),
314 "Matcher": (Matcher, False),
315 "Name": (str, False),
316 "Port": (network_port, False),
317 "Protocol": (str, False),
318 "ProtocolVersion": (str, False),
319 "Tags": ((Tags, list), False),
320 "TargetGroupAttributes": ([TargetGroupAttribute], False),
321 "Targets": ([TargetDescription], False),
322 "TargetType": (validate_target_type, False),
323 "UnhealthyThresholdCount": (integer, False),
324 "VpcId": (str, False),
325 }
326
327 def validate(self):
328 def check_properties(action_types, props_to_check, required):
329
330 for this_type in action_types:
331
332 self_props = self.properties
333 if self_props.get("TargetType") == this_type:
334
335 invalid_props = []
336 for prop in props_to_check:
337
338 if (prop not in self_props and required is True) or (
339 prop in self_props and required is False
340 ):
341 invalid_props.append(prop)
342
343 if len(invalid_props) > 0:
344 # Make error message more readable in the default case
345 type_msg = (
346 "Omitting TargetType"
347 if this_type is None
348 else 'TargetType of "%s"' % this_type
349 )
350
351 raise ValueError(
352 '%s in "%s" %s definitions of %s'
353 % (
354 type_msg,
355 self.__class__.__name__,
356 "requires" if required is True else "must not contain",
357 str(invalid_props).strip("[]"),
358 )
359 )
360
361 # None defaults to instance as per the AWS docs
362 check_properties(
363 [None, TARGET_TYPE_INSTANCE, TARGET_TYPE_IP],
364 ["Port", "Protocol", "VpcId"],
365 True,
366 )
367 check_properties([TARGET_TYPE_LAMBDA], ["Port", "Protocol", "VpcId"], False)
368
369
370 class LoadBalancer(AWSObject):
371 resource_type = "AWS::ElasticLoadBalancingV2::LoadBalancer"
372
373 props = {
374 "IpAddressType": (str, False),
375 "LoadBalancerAttributes": ([LoadBalancerAttributes], False),
376 "Name": (elb_name, False),
377 "Scheme": (str, False),
378 "SecurityGroups": ([str], False),
379 "SubnetMappings": ([SubnetMapping], False),
380 "Subnets": ([str], False),
381 "Tags": ((Tags, list), False),
382 "Type": (str, False),
383 }
384
385 def validate(self):
386 conds = [
387 "SubnetMappings",
388 "Subnets",
389 ]
390
391 def check_if(names, props):
392 validated = []
393 for name in names:
394 validated.append(name in props and isinstance(props[name], If))
395 return all(validated)
396
397 if check_if(conds, self.properties):
398 return
399
400 exactly_one(self.__class__.__name__, self.properties, conds)
401
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/elasticloadbalancingv2.py b/troposphere/elasticloadbalancingv2.py
--- a/troposphere/elasticloadbalancingv2.py
+++ b/troposphere/elasticloadbalancingv2.py
@@ -286,13 +286,19 @@
}
+TARGET_TYPE_ALB = "alb"
TARGET_TYPE_INSTANCE = "instance"
TARGET_TYPE_IP = "ip"
TARGET_TYPE_LAMBDA = "lambda"
def validate_target_type(target_type):
- valid_types = [TARGET_TYPE_INSTANCE, TARGET_TYPE_IP, TARGET_TYPE_LAMBDA]
+ valid_types = [
+ TARGET_TYPE_ALB,
+ TARGET_TYPE_INSTANCE,
+ TARGET_TYPE_IP,
+ TARGET_TYPE_LAMBDA,
+ ]
if target_type not in valid_types:
raise ValueError(
'TargetGroup.TargetType must be one of: "%s"' % ", ".join(valid_types)
| {"golden_diff": "diff --git a/troposphere/elasticloadbalancingv2.py b/troposphere/elasticloadbalancingv2.py\n--- a/troposphere/elasticloadbalancingv2.py\n+++ b/troposphere/elasticloadbalancingv2.py\n@@ -286,13 +286,19 @@\n }\n \n \n+TARGET_TYPE_ALB = \"alb\"\n TARGET_TYPE_INSTANCE = \"instance\"\n TARGET_TYPE_IP = \"ip\"\n TARGET_TYPE_LAMBDA = \"lambda\"\n \n \n def validate_target_type(target_type):\n- valid_types = [TARGET_TYPE_INSTANCE, TARGET_TYPE_IP, TARGET_TYPE_LAMBDA]\n+ valid_types = [\n+ TARGET_TYPE_ALB,\n+ TARGET_TYPE_INSTANCE,\n+ TARGET_TYPE_IP,\n+ TARGET_TYPE_LAMBDA,\n+ ]\n if target_type not in valid_types:\n raise ValueError(\n 'TargetGroup.TargetType must be one of: \"%s\"' % \", \".join(valid_types)\n", "issue": "Add support for Application Load Balancer to TargetGroup's valid types\nCurrently the valid TargetTypes include Instance, IP and Lambda.\r\n\r\nALB should also be added as a valid target group type - [ref](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticloadbalancingv2-targetgroup.html#cfn-elasticloadbalancingv2-targetgroup-targettype).\n", "before_files": [{"content": "# Copyright (c) 2012-2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, If, Tags\nfrom .validators import (\n boolean,\n elb_name,\n exactly_one,\n integer,\n network_port,\n one_of,\n tg_healthcheck_port,\n)\n\n\nclass LoadBalancerAttributes(AWSProperty):\n props = {\"Key\": (str, False), \"Value\": (str, False)}\n\n\nclass Certificate(AWSProperty):\n props = {\"CertificateArn\": (str, False)}\n\n\nclass AuthenticateCognitoConfig(AWSProperty):\n props = {\n \"AuthenticationRequestExtraParams\": (dict, False),\n \"OnUnauthenticatedRequest\": (str, False),\n \"Scope\": (str, False),\n \"SessionCookieName\": (str, False),\n \"SessionTimeout\": (integer, False),\n \"UserPoolArn\": (str, True),\n \"UserPoolClientId\": (str, True),\n \"UserPoolDomain\": (str, True),\n }\n\n\nclass AuthenticateOidcConfig(AWSProperty):\n props = {\n \"AuthenticationRequestExtraParams\": (dict, False),\n \"AuthorizationEndpoint\": (str, True),\n \"ClientId\": (str, True),\n \"ClientSecret\": (str, True),\n \"Issuer\": (str, True),\n \"OnUnauthenticatedRequest\": (str, False),\n \"Scope\": (str, False),\n \"SessionCookieName\": (str, False),\n \"SessionTimeout\": (integer, False),\n \"TokenEndpoint\": (str, True),\n \"UserInfoEndpoint\": (str, True),\n }\n\n\nclass RedirectConfig(AWSProperty):\n # https://docs.aws.amazon.com/\n # AWSCloudFormation/latest/UserGuide/\n # aws-properties-elasticloadbalancingv2-listener-redirectconfig.html\n props = {\n \"Host\": (str, False),\n \"Path\": (str, False),\n \"Port\": (str, False),\n \"Protocol\": (str, False),\n \"Query\": (str, False),\n \"StatusCode\": (str, True),\n }\n\n def validate(self):\n one_of(\n self.__class__.__name__,\n self.properties,\n \"StatusCode\",\n [\"HTTP_301\", \"HTTP_302\"],\n )\n\n\nclass FixedResponseConfig(AWSProperty):\n props = {\n \"ContentType\": (str, False),\n \"MessageBody\": (str, False),\n \"StatusCode\": (str, True),\n }\n\n def validate(self):\n one_of(\n self.__class__.__name__,\n self.properties,\n \"ContentType\",\n [\n None,\n \"text/plain\",\n \"text/css\",\n \"text/html\",\n \"application/javascript\",\n \"application/json\",\n ],\n )\n\n\nclass TargetGroupTuple(AWSProperty):\n props = {\n \"TargetGroupArn\": (str, True),\n \"Weight\": (integer, False),\n }\n\n\nclass TargetGroupStickinessConfig(AWSProperty):\n 
props = {\n \"DurationSeconds\": (integer, False),\n \"Enabled\": (boolean, False),\n }\n\n\nclass ForwardConfig(AWSProperty):\n props = {\n \"TargetGroups\": ([TargetGroupTuple], False),\n \"TargetGroupStickinessConfig\": (TargetGroupStickinessConfig, False),\n }\n\n\nclass Action(AWSProperty):\n props = {\n \"AuthenticateCognitoConfig\": (AuthenticateCognitoConfig, False),\n \"AuthenticateOidcConfig\": (AuthenticateOidcConfig, False),\n \"FixedResponseConfig\": (FixedResponseConfig, False),\n \"Order\": (integer, False),\n \"RedirectConfig\": (RedirectConfig, False),\n \"TargetGroupArn\": (str, False),\n \"ForwardConfig\": (ForwardConfig, False),\n \"Type\": (str, True),\n }\n\n @staticmethod\n def any_property(require_prop, properties):\n return any(p in require_prop for p in properties)\n\n def validate(self):\n one_of(\n self.__class__.__name__,\n self.properties,\n \"Type\",\n [\n \"forward\",\n \"redirect\",\n \"fixed-response\",\n \"authenticate-cognito\",\n \"authenticate-oidc\",\n ],\n )\n\n def requires(action_type, prop):\n properties = [definition for definition in self.properties.keys()]\n if self.properties.get(\"Type\") == action_type and not self.any_property(\n prop, properties\n ):\n raise ValueError(\n 'Type \"%s\" requires definition of \"%s\"' % (action_type, prop)\n )\n if (\n self.any_property(prop, properties)\n and self.properties.get(\"Type\") != action_type\n ):\n raise ValueError(\n 'Definition of \"%s\" allowed only with '\n 'type \"%s\", was: \"%s\"'\n % (prop, action_type, self.properties.get(\"Type\"))\n )\n\n requires(\"forward\", [\"TargetGroupArn\", \"ForwardConfig\"])\n requires(\"redirect\", [\"RedirectConfig\"])\n requires(\"fixed-response\", [\"FixedResponseConfig\"])\n\n\nclass HostHeaderConfig(AWSProperty):\n props = {\n \"Values\": ([str], False),\n }\n\n\nclass HttpHeaderConfig(AWSProperty):\n props = {\n \"HttpHeaderName\": (str, False),\n \"Values\": ([str], False),\n }\n\n\nclass HttpRequestMethodConfig(AWSProperty):\n props = {\n \"Values\": ([str], False),\n }\n\n\nclass PathPatternConfig(AWSProperty):\n props = {\n \"Values\": ([str], False),\n }\n\n\nclass QueryStringKeyValue(AWSProperty):\n props = {\n \"Key\": (str, False),\n \"Value\": (str, False),\n }\n\n\nclass QueryStringConfig(AWSProperty):\n props = {\n \"Values\": ([QueryStringKeyValue], False),\n }\n\n\nclass SourceIpConfig(AWSProperty):\n props = {\n \"Values\": ([str], False),\n }\n\n\nclass Condition(AWSProperty):\n props = {\n \"Field\": (str, False),\n \"HostHeaderConfig\": (HostHeaderConfig, False),\n \"HttpHeaderConfig\": (HttpHeaderConfig, False),\n \"HttpRequestMethodConfig\": (HttpRequestMethodConfig, False),\n \"PathPatternConfig\": (PathPatternConfig, False),\n \"QueryStringConfig\": (QueryStringConfig, False),\n \"SourceIpConfig\": (SourceIpConfig, False),\n \"Values\": ([str], False),\n }\n\n\nclass Matcher(AWSProperty):\n props = {\"HttpCode\": (str, True)}\n\n\nclass SubnetMapping(AWSProperty):\n props = {\n \"AllocationId\": (str, False),\n \"PrivateIPv4Address\": (str, False),\n \"SubnetId\": (str, True),\n }\n\n\nclass TargetGroupAttribute(AWSProperty):\n props = {\"Key\": (str, False), \"Value\": (str, False)}\n\n\nclass TargetDescription(AWSProperty):\n props = {\n \"AvailabilityZone\": (str, False),\n \"Id\": (str, True),\n \"Port\": (network_port, False),\n }\n\n\nclass Listener(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::Listener\"\n\n props = {\n \"AlpnPolicy\": ([str], False),\n \"Certificates\": ([Certificate], False),\n 
\"DefaultActions\": ([Action], True),\n \"LoadBalancerArn\": (str, True),\n \"Port\": (network_port, True),\n \"Protocol\": (str, True),\n \"SslPolicy\": (str, False),\n }\n\n\nclass ListenerCertificate(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::ListenerCertificate\"\n\n props = {\n \"Certificates\": ([Certificate], True),\n \"ListenerArn\": (str, True),\n }\n\n\nclass ListenerRule(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::ListenerRule\"\n\n props = {\n \"Actions\": ([Action], True),\n \"Conditions\": ([Condition], True),\n \"ListenerArn\": (str, True),\n \"Priority\": (integer, True),\n }\n\n\nTARGET_TYPE_INSTANCE = \"instance\"\nTARGET_TYPE_IP = \"ip\"\nTARGET_TYPE_LAMBDA = \"lambda\"\n\n\ndef validate_target_type(target_type):\n valid_types = [TARGET_TYPE_INSTANCE, TARGET_TYPE_IP, TARGET_TYPE_LAMBDA]\n if target_type not in valid_types:\n raise ValueError(\n 'TargetGroup.TargetType must be one of: \"%s\"' % \", \".join(valid_types)\n )\n return target_type\n\n\nclass TargetGroup(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::TargetGroup\"\n\n props = {\n \"HealthCheckEnabled\": (boolean, False),\n \"HealthCheckIntervalSeconds\": (integer, False),\n \"HealthCheckPath\": (str, False),\n \"HealthCheckPort\": (tg_healthcheck_port, False),\n \"HealthCheckProtocol\": (str, False),\n \"HealthCheckTimeoutSeconds\": (integer, False),\n \"HealthyThresholdCount\": (integer, False),\n \"Matcher\": (Matcher, False),\n \"Name\": (str, False),\n \"Port\": (network_port, False),\n \"Protocol\": (str, False),\n \"ProtocolVersion\": (str, False),\n \"Tags\": ((Tags, list), False),\n \"TargetGroupAttributes\": ([TargetGroupAttribute], False),\n \"Targets\": ([TargetDescription], False),\n \"TargetType\": (validate_target_type, False),\n \"UnhealthyThresholdCount\": (integer, False),\n \"VpcId\": (str, False),\n }\n\n def validate(self):\n def check_properties(action_types, props_to_check, required):\n\n for this_type in action_types:\n\n self_props = self.properties\n if self_props.get(\"TargetType\") == this_type:\n\n invalid_props = []\n for prop in props_to_check:\n\n if (prop not in self_props and required is True) or (\n prop in self_props and required is False\n ):\n invalid_props.append(prop)\n\n if len(invalid_props) > 0:\n # Make error message more readable in the default case\n type_msg = (\n \"Omitting TargetType\"\n if this_type is None\n else 'TargetType of \"%s\"' % this_type\n )\n\n raise ValueError(\n '%s in \"%s\" %s definitions of %s'\n % (\n type_msg,\n self.__class__.__name__,\n \"requires\" if required is True else \"must not contain\",\n str(invalid_props).strip(\"[]\"),\n )\n )\n\n # None defaults to instance as per the AWS docs\n check_properties(\n [None, TARGET_TYPE_INSTANCE, TARGET_TYPE_IP],\n [\"Port\", \"Protocol\", \"VpcId\"],\n True,\n )\n check_properties([TARGET_TYPE_LAMBDA], [\"Port\", \"Protocol\", \"VpcId\"], False)\n\n\nclass LoadBalancer(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::LoadBalancer\"\n\n props = {\n \"IpAddressType\": (str, False),\n \"LoadBalancerAttributes\": ([LoadBalancerAttributes], False),\n \"Name\": (elb_name, False),\n \"Scheme\": (str, False),\n \"SecurityGroups\": ([str], False),\n \"SubnetMappings\": ([SubnetMapping], False),\n \"Subnets\": ([str], False),\n \"Tags\": ((Tags, list), False),\n \"Type\": (str, False),\n }\n\n def validate(self):\n conds = [\n \"SubnetMappings\",\n \"Subnets\",\n ]\n\n def check_if(names, props):\n validated = []\n for name in names:\n 
validated.append(name in props and isinstance(props[name], If))\n return all(validated)\n\n if check_if(conds, self.properties):\n return\n\n exactly_one(self.__class__.__name__, self.properties, conds)\n", "path": "troposphere/elasticloadbalancingv2.py"}], "after_files": [{"content": "# Copyright (c) 2012-2013, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty, If, Tags\nfrom .validators import (\n boolean,\n elb_name,\n exactly_one,\n integer,\n network_port,\n one_of,\n tg_healthcheck_port,\n)\n\n\nclass LoadBalancerAttributes(AWSProperty):\n props = {\"Key\": (str, False), \"Value\": (str, False)}\n\n\nclass Certificate(AWSProperty):\n props = {\"CertificateArn\": (str, False)}\n\n\nclass AuthenticateCognitoConfig(AWSProperty):\n props = {\n \"AuthenticationRequestExtraParams\": (dict, False),\n \"OnUnauthenticatedRequest\": (str, False),\n \"Scope\": (str, False),\n \"SessionCookieName\": (str, False),\n \"SessionTimeout\": (integer, False),\n \"UserPoolArn\": (str, True),\n \"UserPoolClientId\": (str, True),\n \"UserPoolDomain\": (str, True),\n }\n\n\nclass AuthenticateOidcConfig(AWSProperty):\n props = {\n \"AuthenticationRequestExtraParams\": (dict, False),\n \"AuthorizationEndpoint\": (str, True),\n \"ClientId\": (str, True),\n \"ClientSecret\": (str, True),\n \"Issuer\": (str, True),\n \"OnUnauthenticatedRequest\": (str, False),\n \"Scope\": (str, False),\n \"SessionCookieName\": (str, False),\n \"SessionTimeout\": (integer, False),\n \"TokenEndpoint\": (str, True),\n \"UserInfoEndpoint\": (str, True),\n }\n\n\nclass RedirectConfig(AWSProperty):\n # https://docs.aws.amazon.com/\n # AWSCloudFormation/latest/UserGuide/\n # aws-properties-elasticloadbalancingv2-listener-redirectconfig.html\n props = {\n \"Host\": (str, False),\n \"Path\": (str, False),\n \"Port\": (str, False),\n \"Protocol\": (str, False),\n \"Query\": (str, False),\n \"StatusCode\": (str, True),\n }\n\n def validate(self):\n one_of(\n self.__class__.__name__,\n self.properties,\n \"StatusCode\",\n [\"HTTP_301\", \"HTTP_302\"],\n )\n\n\nclass FixedResponseConfig(AWSProperty):\n props = {\n \"ContentType\": (str, False),\n \"MessageBody\": (str, False),\n \"StatusCode\": (str, True),\n }\n\n def validate(self):\n one_of(\n self.__class__.__name__,\n self.properties,\n \"ContentType\",\n [\n None,\n \"text/plain\",\n \"text/css\",\n \"text/html\",\n \"application/javascript\",\n \"application/json\",\n ],\n )\n\n\nclass TargetGroupTuple(AWSProperty):\n props = {\n \"TargetGroupArn\": (str, True),\n \"Weight\": (integer, False),\n }\n\n\nclass TargetGroupStickinessConfig(AWSProperty):\n props = {\n \"DurationSeconds\": (integer, False),\n \"Enabled\": (boolean, False),\n }\n\n\nclass ForwardConfig(AWSProperty):\n props = {\n \"TargetGroups\": ([TargetGroupTuple], False),\n \"TargetGroupStickinessConfig\": (TargetGroupStickinessConfig, False),\n }\n\n\nclass Action(AWSProperty):\n props = {\n \"AuthenticateCognitoConfig\": (AuthenticateCognitoConfig, False),\n \"AuthenticateOidcConfig\": (AuthenticateOidcConfig, False),\n \"FixedResponseConfig\": (FixedResponseConfig, False),\n \"Order\": (integer, False),\n \"RedirectConfig\": (RedirectConfig, False),\n \"TargetGroupArn\": (str, False),\n \"ForwardConfig\": (ForwardConfig, False),\n \"Type\": (str, True),\n }\n\n @staticmethod\n def any_property(require_prop, properties):\n return any(p in require_prop for p in properties)\n\n def validate(self):\n one_of(\n self.__class__.__name__,\n 
self.properties,\n \"Type\",\n [\n \"forward\",\n \"redirect\",\n \"fixed-response\",\n \"authenticate-cognito\",\n \"authenticate-oidc\",\n ],\n )\n\n def requires(action_type, prop):\n properties = [definition for definition in self.properties.keys()]\n if self.properties.get(\"Type\") == action_type and not self.any_property(\n prop, properties\n ):\n raise ValueError(\n 'Type \"%s\" requires definition of \"%s\"' % (action_type, prop)\n )\n if (\n self.any_property(prop, properties)\n and self.properties.get(\"Type\") != action_type\n ):\n raise ValueError(\n 'Definition of \"%s\" allowed only with '\n 'type \"%s\", was: \"%s\"'\n % (prop, action_type, self.properties.get(\"Type\"))\n )\n\n requires(\"forward\", [\"TargetGroupArn\", \"ForwardConfig\"])\n requires(\"redirect\", [\"RedirectConfig\"])\n requires(\"fixed-response\", [\"FixedResponseConfig\"])\n\n\nclass HostHeaderConfig(AWSProperty):\n props = {\n \"Values\": ([str], False),\n }\n\n\nclass HttpHeaderConfig(AWSProperty):\n props = {\n \"HttpHeaderName\": (str, False),\n \"Values\": ([str], False),\n }\n\n\nclass HttpRequestMethodConfig(AWSProperty):\n props = {\n \"Values\": ([str], False),\n }\n\n\nclass PathPatternConfig(AWSProperty):\n props = {\n \"Values\": ([str], False),\n }\n\n\nclass QueryStringKeyValue(AWSProperty):\n props = {\n \"Key\": (str, False),\n \"Value\": (str, False),\n }\n\n\nclass QueryStringConfig(AWSProperty):\n props = {\n \"Values\": ([QueryStringKeyValue], False),\n }\n\n\nclass SourceIpConfig(AWSProperty):\n props = {\n \"Values\": ([str], False),\n }\n\n\nclass Condition(AWSProperty):\n props = {\n \"Field\": (str, False),\n \"HostHeaderConfig\": (HostHeaderConfig, False),\n \"HttpHeaderConfig\": (HttpHeaderConfig, False),\n \"HttpRequestMethodConfig\": (HttpRequestMethodConfig, False),\n \"PathPatternConfig\": (PathPatternConfig, False),\n \"QueryStringConfig\": (QueryStringConfig, False),\n \"SourceIpConfig\": (SourceIpConfig, False),\n \"Values\": ([str], False),\n }\n\n\nclass Matcher(AWSProperty):\n props = {\"HttpCode\": (str, True)}\n\n\nclass SubnetMapping(AWSProperty):\n props = {\n \"AllocationId\": (str, False),\n \"PrivateIPv4Address\": (str, False),\n \"SubnetId\": (str, True),\n }\n\n\nclass TargetGroupAttribute(AWSProperty):\n props = {\"Key\": (str, False), \"Value\": (str, False)}\n\n\nclass TargetDescription(AWSProperty):\n props = {\n \"AvailabilityZone\": (str, False),\n \"Id\": (str, True),\n \"Port\": (network_port, False),\n }\n\n\nclass Listener(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::Listener\"\n\n props = {\n \"AlpnPolicy\": ([str], False),\n \"Certificates\": ([Certificate], False),\n \"DefaultActions\": ([Action], True),\n \"LoadBalancerArn\": (str, True),\n \"Port\": (network_port, True),\n \"Protocol\": (str, True),\n \"SslPolicy\": (str, False),\n }\n\n\nclass ListenerCertificate(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::ListenerCertificate\"\n\n props = {\n \"Certificates\": ([Certificate], True),\n \"ListenerArn\": (str, True),\n }\n\n\nclass ListenerRule(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::ListenerRule\"\n\n props = {\n \"Actions\": ([Action], True),\n \"Conditions\": ([Condition], True),\n \"ListenerArn\": (str, True),\n \"Priority\": (integer, True),\n }\n\n\nTARGET_TYPE_ALB = \"alb\"\nTARGET_TYPE_INSTANCE = \"instance\"\nTARGET_TYPE_IP = \"ip\"\nTARGET_TYPE_LAMBDA = \"lambda\"\n\n\ndef validate_target_type(target_type):\n valid_types = [\n TARGET_TYPE_ALB,\n TARGET_TYPE_INSTANCE,\n 
TARGET_TYPE_IP,\n TARGET_TYPE_LAMBDA,\n ]\n if target_type not in valid_types:\n raise ValueError(\n 'TargetGroup.TargetType must be one of: \"%s\"' % \", \".join(valid_types)\n )\n return target_type\n\n\nclass TargetGroup(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::TargetGroup\"\n\n props = {\n \"HealthCheckEnabled\": (boolean, False),\n \"HealthCheckIntervalSeconds\": (integer, False),\n \"HealthCheckPath\": (str, False),\n \"HealthCheckPort\": (tg_healthcheck_port, False),\n \"HealthCheckProtocol\": (str, False),\n \"HealthCheckTimeoutSeconds\": (integer, False),\n \"HealthyThresholdCount\": (integer, False),\n \"Matcher\": (Matcher, False),\n \"Name\": (str, False),\n \"Port\": (network_port, False),\n \"Protocol\": (str, False),\n \"ProtocolVersion\": (str, False),\n \"Tags\": ((Tags, list), False),\n \"TargetGroupAttributes\": ([TargetGroupAttribute], False),\n \"Targets\": ([TargetDescription], False),\n \"TargetType\": (validate_target_type, False),\n \"UnhealthyThresholdCount\": (integer, False),\n \"VpcId\": (str, False),\n }\n\n def validate(self):\n def check_properties(action_types, props_to_check, required):\n\n for this_type in action_types:\n\n self_props = self.properties\n if self_props.get(\"TargetType\") == this_type:\n\n invalid_props = []\n for prop in props_to_check:\n\n if (prop not in self_props and required is True) or (\n prop in self_props and required is False\n ):\n invalid_props.append(prop)\n\n if len(invalid_props) > 0:\n # Make error message more readable in the default case\n type_msg = (\n \"Omitting TargetType\"\n if this_type is None\n else 'TargetType of \"%s\"' % this_type\n )\n\n raise ValueError(\n '%s in \"%s\" %s definitions of %s'\n % (\n type_msg,\n self.__class__.__name__,\n \"requires\" if required is True else \"must not contain\",\n str(invalid_props).strip(\"[]\"),\n )\n )\n\n # None defaults to instance as per the AWS docs\n check_properties(\n [None, TARGET_TYPE_INSTANCE, TARGET_TYPE_IP],\n [\"Port\", \"Protocol\", \"VpcId\"],\n True,\n )\n check_properties([TARGET_TYPE_LAMBDA], [\"Port\", \"Protocol\", \"VpcId\"], False)\n\n\nclass LoadBalancer(AWSObject):\n resource_type = \"AWS::ElasticLoadBalancingV2::LoadBalancer\"\n\n props = {\n \"IpAddressType\": (str, False),\n \"LoadBalancerAttributes\": ([LoadBalancerAttributes], False),\n \"Name\": (elb_name, False),\n \"Scheme\": (str, False),\n \"SecurityGroups\": ([str], False),\n \"SubnetMappings\": ([SubnetMapping], False),\n \"Subnets\": ([str], False),\n \"Tags\": ((Tags, list), False),\n \"Type\": (str, False),\n }\n\n def validate(self):\n conds = [\n \"SubnetMappings\",\n \"Subnets\",\n ]\n\n def check_if(names, props):\n validated = []\n for name in names:\n validated.append(name in props and isinstance(props[name], If))\n return all(validated)\n\n if check_if(conds, self.properties):\n return\n\n exactly_one(self.__class__.__name__, self.properties, conds)\n", "path": "troposphere/elasticloadbalancingv2.py"}]} | 4,051 | 206 |
gh_patches_debug_17053 | rasdani/github-patches | git_diff | feast-dev__feast-2255 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Optimize `_populate_result_rows_from_feature_view`
Signed-off-by: Judah Rand <[email protected]>
<!-- Thanks for sending a pull request! Here are some tips for you:
1. Ensure that your code follows our code conventions: https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md#code-style--linting
2. Run unit tests and ensure that they are passing: https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md#unit-tests
3. If your change introduces any API changes, make sure to update the integration tests scripts here: https://github.com/feast-dev/feast/tree/master/sdk/python/tests or https://github.com/feast-dev/feast/tree/master/sdk/go
4. Make sure documentation is updated for your PR!
5. Make sure you have signed the CLA https://cla.developers.google.com/clas
-->
**What this PR does / why we need it**:
This commit optimizes the fetching of features by fetching the features
for each unique Entity only once and then expanding the result to the
shape of the input EntityKeys.
Previously, if an Entity occurred twice, its features would be fetched
from the OnlineStore twice. This can be hugely inefficient.
The only assumption that this makes is that the OnlineStore will return
the feature data in the same order as the EntityKeyProtos are provided.
**Which issue(s) this PR fixes**:
<!--
*Automatically closes linked issue when PR is merged.
Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.
-->
Fixes #
**Does this PR introduce a user-facing change?**:
<!--
If no, just write "NONE" in the release-note block below.
If yes, a release note is required:
Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string "action required".
For more information about release notes, see kubernetes' guide here:
http://git.k8s.io/community/contributors/guide/release-notes.md
-->
```release-note
Speed up `get_online_features` when duplicate Entities are present.
```
--- END ISSUE ---
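The optimization described in this PR (fetch features only once per unique entity key, then expand the rows back to the shape of the duplicated input) can be sketched independently of Feast's internals. The helper below is illustrative only: `read_rows` is a hypothetical stand-in for an OnlineStore read, not an actual Feast API, and it is assumed, as the PR text assumes, to return one row per key in the order the keys were given.

```python
from typing import Any, Callable, Dict, Hashable, List, Sequence


def fetch_features_deduplicated(
    entity_keys: Sequence[Hashable],
    read_rows: Callable[[List[Hashable]], List[Dict[str, Any]]],
) -> List[Dict[str, Any]]:
    # Keep only the first occurrence of each key, preserving input order.
    unique_keys: List[Hashable] = list(dict.fromkeys(entity_keys))
    # One store round-trip for the unique keys instead of one row per input key.
    rows = read_rows(unique_keys)
    # Assumes rows come back in the same order as unique_keys.
    row_by_key = dict(zip(unique_keys, rows))
    # Expand back to the shape of the (possibly duplicated) input.
    return [row_by_key[key] for key in entity_keys]


# Toy usage: the "store" only sees the 3 unique keys out of 5 requested rows.
def _toy_reader(keys: List[Hashable]) -> List[Dict[str, Any]]:
    return [{"driver_id": k, "trips_today": int(k) * 10} for k in keys]


result = fetch_features_deduplicated([1001, 1002, 1001, 1003, 1002], _toy_reader)
assert len(result) == 5
assert result[0] == result[2]  # duplicated entity gets the same row
```

The single round-trip over `unique_keys` is where the saving comes from when the input contains many repeated entities.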
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/setup.py`
Content:
```
1 # Copyright 2019 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import glob
15 import os
16 import re
17 import shutil
18 import subprocess
19 import pathlib
20
21 from distutils.cmd import Command
22 from setuptools import find_packages
23
24 try:
25 from setuptools import setup
26 from setuptools.command.install import install
27 from setuptools.command.develop import develop
28 from setuptools.command.egg_info import egg_info
29 from setuptools.command.sdist import sdist
30 from setuptools.command.build_py import build_py
31 except ImportError:
32 from distutils.core import setup
33 from distutils.command.install import install
34 from distutils.command.build_py import build_py
35
36 NAME = "feast"
37 DESCRIPTION = "Python SDK for Feast"
38 URL = "https://github.com/feast-dev/feast"
39 AUTHOR = "Feast"
40 REQUIRES_PYTHON = ">=3.7.0"
41
42 REQUIRED = [
43 "Click==8.*",
44 "colorama>=0.3.9",
45 "dill==0.3.*",
46 "fastavro>=1.1.0",
47 "google-api-core>=1.23.0",
48 "googleapis-common-protos==1.52.*",
49 "grpcio>=1.34.0",
50 "grpcio-reflection>=1.34.0",
51 "Jinja2>=2.0.0",
52 "jsonschema",
53 "mmh3",
54 "pandas>=1.0.0",
55 "pandavro==1.5.*",
56 "protobuf>=3.10",
57 "proto-plus<1.19.7",
58 "pyarrow>=4.0.0",
59 "pydantic>=1.0.0",
60 "PyYAML>=5.4.*",
61 "tabulate==0.8.*",
62 "tenacity>=7.*",
63 "toml==0.10.*",
64 "tqdm==4.*",
65 "fastapi>=0.68.0",
66 "uvicorn[standard]>=0.14.0",
67 "proto-plus<1.19.7",
68 "tensorflow-metadata>=1.0.0,<2.0.0",
69 ]
70
71 GCP_REQUIRED = [
72 "google-cloud-bigquery>=2.28.1",
73 "google-cloud-bigquery-storage >= 2.0.0",
74 "google-cloud-datastore>=2.1.*",
75 "google-cloud-storage>=1.34.*,<1.41",
76 "google-cloud-core>=1.4.0,<2.0.0",
77 ]
78
79 REDIS_REQUIRED = [
80 "redis>=4.1.0",
81 "hiredis>=2.0.0",
82 ]
83
84 AWS_REQUIRED = [
85 "boto3>=1.17.0",
86 "docker>=5.0.2",
87 ]
88
89 CI_REQUIRED = (
90 [
91 "cryptography==3.3.2",
92 "flake8",
93 "black==19.10b0",
94 "isort>=5",
95 "grpcio-tools==1.34.0",
96 "grpcio-testing==1.34.0",
97 "minio==7.1.0",
98 "mock==2.0.0",
99 "moto",
100 "mypy==0.931",
101 "mypy-protobuf==3.1.0",
102 "avro==1.10.0",
103 "gcsfs",
104 "urllib3>=1.25.4",
105 "pytest>=6.0.0",
106 "pytest-cov",
107 "pytest-xdist",
108 "pytest-benchmark>=3.4.1",
109 "pytest-lazy-fixture==0.6.3",
110 "pytest-timeout==1.4.2",
111 "pytest-ordering==0.6.*",
112 "pytest-mock==1.10.4",
113 "Sphinx!=4.0.0,<4.4.0",
114 "sphinx-rtd-theme",
115 "testcontainers==3.4.2",
116 "adlfs==0.5.9",
117 "firebase-admin==4.5.2",
118 "pre-commit",
119 "assertpy==1.1",
120 "pip-tools",
121 "types-protobuf",
122 "types-python-dateutil",
123 "types-pytz",
124 "types-PyYAML",
125 "types-redis",
126 "types-requests",
127 "types-setuptools",
128 "types-tabulate",
129 ]
130 + GCP_REQUIRED
131 + REDIS_REQUIRED
132 + AWS_REQUIRED
133 )
134
135 DEV_REQUIRED = ["mypy-protobuf>=1.*", "grpcio-testing==1.*"] + CI_REQUIRED
136
137 # Get git repo root directory
138 repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)
139
140 # README file from Feast repo root directory
141 README_FILE = os.path.join(repo_root, "README.md")
142 with open(README_FILE, "r", encoding="utf8") as f:
143 LONG_DESCRIPTION = f.read()
144
145 # Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.
146 # Regex modified from default tag regex in:
147 # https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9
148 TAG_REGEX = re.compile(
149 r"^(?:[\/\w-]+)?(?P<version>[vV]?\d+(?:\.\d+){0,2}[^\+]*)(?:\+.*)?$"
150 )
151
152 # Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)
153 if shutil.which("git"):
154 use_scm_version = {"root": "../..", "relative_to": __file__, "tag_regex": TAG_REGEX}
155 else:
156 use_scm_version = None
157
158
159 class BuildProtoCommand(Command):
160 description = "Builds the proto files into python files."
161
162 def initialize_options(self):
163 self.protoc = ["python", "-m", "grpc_tools.protoc"] # find_executable("protoc")
164 self.proto_folder = os.path.join(repo_root, "protos")
165 self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')
166 self.sub_folders = ["core", "serving", "types", "storage"]
167
168 def finalize_options(self):
169 pass
170
171 def _generate_protos(self, path):
172 proto_files = glob.glob(os.path.join(self.proto_folder, path))
173
174 subprocess.check_call(self.protoc + [
175 '-I', self.proto_folder,
176 '--python_out', self.this_package,
177 '--grpc_python_out', self.this_package,
178 '--mypy_out', self.this_package] + proto_files)
179
180 def run(self):
181 for sub_folder in self.sub_folders:
182 self._generate_protos(f'feast/{sub_folder}/*.proto')
183
184 from pathlib import Path
185
186 for path in Path('feast/protos').rglob('*.py'):
187 for folder in self.sub_folders:
188 # Read in the file
189 with open(path, 'r') as file:
190 filedata = file.read()
191
192 # Replace the target string
193 filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')
194
195 # Write the file out again
196 with open(path, 'w') as file:
197 file.write(filedata)
198
199
200 class BuildCommand(build_py):
201 """Custom build command."""
202
203 def run(self):
204 self.run_command('build_proto')
205 build_py.run(self)
206
207
208 class DevelopCommand(develop):
209 """Custom develop command."""
210
211 def run(self):
212 self.run_command('build_proto')
213 develop.run(self)
214
215
216 setup(
217 name=NAME,
218 author=AUTHOR,
219 description=DESCRIPTION,
220 long_description=LONG_DESCRIPTION,
221 long_description_content_type="text/markdown",
222 python_requires=REQUIRES_PYTHON,
223 url=URL,
224 packages=find_packages(exclude=("tests",)),
225 install_requires=REQUIRED,
226 # https://stackoverflow.com/questions/28509965/setuptools-development-requirements
227 # Install dev requirements with: pip install -e .[dev]
228 extras_require={
229 "dev": DEV_REQUIRED,
230 "ci": CI_REQUIRED,
231 "gcp": GCP_REQUIRED,
232 "aws": AWS_REQUIRED,
233 "redis": REDIS_REQUIRED,
234 },
235 include_package_data=True,
236 license="Apache",
237 classifiers=[
238 # Trove classifiers
239 # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
240 "License :: OSI Approved :: Apache Software License",
241 "Programming Language :: Python",
242 "Programming Language :: Python :: 3",
243 "Programming Language :: Python :: 3.7",
244 ],
245 entry_points={"console_scripts": ["feast=feast.cli:cli"]},
246 use_scm_version=use_scm_version,
247 setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==1.*", "sphinx!=4.0.0"],
248 package_data={
249 "": [
250 "protos/feast/**/*.proto",
251 "protos/feast/third_party/grpc/health/v1/*.proto",
252 "feast/protos/feast/**/*.py",
253 ],
254 },
255 cmdclass={
256 "build_proto": BuildProtoCommand,
257 "build_py": BuildCommand,
258 "develop": DevelopCommand,
259 },
260 )
261
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -132,7 +132,7 @@
+ AWS_REQUIRED
)
-DEV_REQUIRED = ["mypy-protobuf>=1.*", "grpcio-testing==1.*"] + CI_REQUIRED
+DEV_REQUIRED = ["mypy-protobuf>=3.1.0", "grpcio-testing==1.*"] + CI_REQUIRED
# Get git repo root directory
repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)
@@ -244,7 +244,7 @@
],
entry_points={"console_scripts": ["feast=feast.cli:cli"]},
use_scm_version=use_scm_version,
- setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==1.*", "sphinx!=4.0.0"],
+ setup_requires=["setuptools_scm", "grpcio", "grpcio-tools==1.34.0", "mypy-protobuf==3.1.0", "sphinx!=4.0.0"],
package_data={
"": [
"protos/feast/**/*.proto",
| {"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -132,7 +132,7 @@\n + AWS_REQUIRED\n )\n \n-DEV_REQUIRED = [\"mypy-protobuf>=1.*\", \"grpcio-testing==1.*\"] + CI_REQUIRED\n+DEV_REQUIRED = [\"mypy-protobuf>=3.1.0\", \"grpcio-testing==1.*\"] + CI_REQUIRED\n \n # Get git repo root directory\n repo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)\n@@ -244,7 +244,7 @@\n ],\n entry_points={\"console_scripts\": [\"feast=feast.cli:cli\"]},\n use_scm_version=use_scm_version,\n- setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==1.*\", \"sphinx!=4.0.0\"],\n+ setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==3.1.0\", \"sphinx!=4.0.0\"],\n package_data={\n \"\": [\n \"protos/feast/**/*.proto\",\n", "issue": "Optimize `_populate_result_rows_from_feature_view`\nSigned-off-by: Judah Rand <[email protected]>\r\n\r\n<!-- Thanks for sending a pull request! Here are some tips for you:\r\n\r\n1. Ensure that your code follows our code conventions: https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md#code-style--linting\r\n2. Run unit tests and ensure that they are passing: https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md#unit-tests\r\n3. If your change introduces any API changes, make sure to update the integration tests scripts here: https://github.com/feast-dev/feast/tree/master/sdk/python/tests or https://github.com/feast-dev/feast/tree/master/sdk/go\r\n4. Make sure documentation is updated for your PR!\r\n5. Make sure you have signed the CLA https://cla.developers.google.com/clas\r\n\r\n-->\r\n\r\n**What this PR does / why we need it**:\r\nThis commit optimizes the fetching of features by only fetching\r\nthe features for each unique Entity once and then expands the result\r\nto the shape of input EntityKeys.\r\n\r\nPreviously, if an Entity occurred twice the features would be fetched\r\nfrom the OnlineStore twice. This can be hugely inefficient.\r\n\r\nThe only assumption that this makes is that the OnlineStore will return \r\nthe feature data in the same order as the EntityKeyProtos are provided.\r\n\r\n**Which issue(s) this PR fixes**:\r\n<!--\r\n*Automatically closes linked issue when PR is merged.\r\nUsage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.\r\n-->\r\nFixes #\r\n\r\n**Does this PR introduce a user-facing change?**:\r\n<!--\r\nIf no, just write \"NONE\" in the release-note block below.\r\nIf yes, a release note is required:\r\nEnter your extended release note in the block below. 
If the PR requires additional action from users switching to the new release, include the string \"action required\".\r\n\r\nFor more information about release notes, see kubernetes' guide here:\r\nhttp://git.k8s.io/community/contributors/guide/release-notes.md\r\n-->\r\n```release-note\r\nSpeed up `get_online_features` when duplicate Entities are present.\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport glob\nimport os\nimport re\nimport shutil\nimport subprocess\nimport pathlib\n\nfrom distutils.cmd import Command\nfrom setuptools import find_packages\n\ntry:\n from setuptools import setup\n from setuptools.command.install import install\n from setuptools.command.develop import develop\n from setuptools.command.egg_info import egg_info\n from setuptools.command.sdist import sdist\n from setuptools.command.build_py import build_py\nexcept ImportError:\n from distutils.core import setup\n from distutils.command.install import install\n from distutils.command.build_py import build_py\n\nNAME = \"feast\"\nDESCRIPTION = \"Python SDK for Feast\"\nURL = \"https://github.com/feast-dev/feast\"\nAUTHOR = \"Feast\"\nREQUIRES_PYTHON = \">=3.7.0\"\n\nREQUIRED = [\n \"Click==8.*\",\n \"colorama>=0.3.9\",\n \"dill==0.3.*\",\n \"fastavro>=1.1.0\",\n \"google-api-core>=1.23.0\",\n \"googleapis-common-protos==1.52.*\",\n \"grpcio>=1.34.0\",\n \"grpcio-reflection>=1.34.0\",\n \"Jinja2>=2.0.0\",\n \"jsonschema\",\n \"mmh3\",\n \"pandas>=1.0.0\",\n \"pandavro==1.5.*\",\n \"protobuf>=3.10\",\n \"proto-plus<1.19.7\",\n \"pyarrow>=4.0.0\",\n \"pydantic>=1.0.0\",\n \"PyYAML>=5.4.*\",\n \"tabulate==0.8.*\",\n \"tenacity>=7.*\",\n \"toml==0.10.*\",\n \"tqdm==4.*\",\n \"fastapi>=0.68.0\",\n \"uvicorn[standard]>=0.14.0\",\n \"proto-plus<1.19.7\",\n \"tensorflow-metadata>=1.0.0,<2.0.0\",\n]\n\nGCP_REQUIRED = [\n \"google-cloud-bigquery>=2.28.1\",\n \"google-cloud-bigquery-storage >= 2.0.0\",\n \"google-cloud-datastore>=2.1.*\",\n \"google-cloud-storage>=1.34.*,<1.41\",\n \"google-cloud-core>=1.4.0,<2.0.0\",\n]\n\nREDIS_REQUIRED = [\n \"redis>=4.1.0\",\n \"hiredis>=2.0.0\",\n]\n\nAWS_REQUIRED = [\n \"boto3>=1.17.0\",\n \"docker>=5.0.2\",\n]\n\nCI_REQUIRED = (\n [\n \"cryptography==3.3.2\",\n \"flake8\",\n \"black==19.10b0\",\n \"isort>=5\",\n \"grpcio-tools==1.34.0\",\n \"grpcio-testing==1.34.0\",\n \"minio==7.1.0\",\n \"mock==2.0.0\",\n \"moto\",\n \"mypy==0.931\",\n \"mypy-protobuf==3.1.0\",\n \"avro==1.10.0\",\n \"gcsfs\",\n \"urllib3>=1.25.4\",\n \"pytest>=6.0.0\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-benchmark>=3.4.1\",\n \"pytest-lazy-fixture==0.6.3\",\n \"pytest-timeout==1.4.2\",\n \"pytest-ordering==0.6.*\",\n \"pytest-mock==1.10.4\",\n \"Sphinx!=4.0.0,<4.4.0\",\n \"sphinx-rtd-theme\",\n \"testcontainers==3.4.2\",\n \"adlfs==0.5.9\",\n \"firebase-admin==4.5.2\",\n \"pre-commit\",\n \"assertpy==1.1\",\n \"pip-tools\",\n \"types-protobuf\",\n \"types-python-dateutil\",\n \"types-pytz\",\n \"types-PyYAML\",\n 
\"types-redis\",\n \"types-requests\",\n \"types-setuptools\",\n \"types-tabulate\",\n ]\n + GCP_REQUIRED\n + REDIS_REQUIRED\n + AWS_REQUIRED\n)\n\nDEV_REQUIRED = [\"mypy-protobuf>=1.*\", \"grpcio-testing==1.*\"] + CI_REQUIRED\n\n# Get git repo root directory\nrepo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)\n\n# README file from Feast repo root directory\nREADME_FILE = os.path.join(repo_root, \"README.md\")\nwith open(README_FILE, \"r\", encoding=\"utf8\") as f:\n LONG_DESCRIPTION = f.read()\n\n# Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.\n# Regex modified from default tag regex in:\n# https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9\nTAG_REGEX = re.compile(\n r\"^(?:[\\/\\w-]+)?(?P<version>[vV]?\\d+(?:\\.\\d+){0,2}[^\\+]*)(?:\\+.*)?$\"\n)\n\n# Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)\nif shutil.which(\"git\"):\n use_scm_version = {\"root\": \"../..\", \"relative_to\": __file__, \"tag_regex\": TAG_REGEX}\nelse:\n use_scm_version = None\n\n\nclass BuildProtoCommand(Command):\n description = \"Builds the proto files into python files.\"\n\n def initialize_options(self):\n self.protoc = [\"python\", \"-m\", \"grpc_tools.protoc\"] # find_executable(\"protoc\")\n self.proto_folder = os.path.join(repo_root, \"protos\")\n self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')\n self.sub_folders = [\"core\", \"serving\", \"types\", \"storage\"]\n\n def finalize_options(self):\n pass\n\n def _generate_protos(self, path):\n proto_files = glob.glob(os.path.join(self.proto_folder, path))\n\n subprocess.check_call(self.protoc + [\n '-I', self.proto_folder,\n '--python_out', self.this_package,\n '--grpc_python_out', self.this_package,\n '--mypy_out', self.this_package] + proto_files)\n\n def run(self):\n for sub_folder in self.sub_folders:\n self._generate_protos(f'feast/{sub_folder}/*.proto')\n\n from pathlib import Path\n\n for path in Path('feast/protos').rglob('*.py'):\n for folder in self.sub_folders:\n # Read in the file\n with open(path, 'r') as file:\n filedata = file.read()\n\n # Replace the target string\n filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')\n\n # Write the file out again\n with open(path, 'w') as file:\n file.write(filedata)\n\n\nclass BuildCommand(build_py):\n \"\"\"Custom build command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n build_py.run(self)\n\n\nclass DevelopCommand(develop):\n \"\"\"Custom develop command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n develop.run(self)\n\n\nsetup(\n name=NAME,\n author=AUTHOR,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=(\"tests\",)),\n install_requires=REQUIRED,\n # https://stackoverflow.com/questions/28509965/setuptools-development-requirements\n # Install dev requirements with: pip install -e .[dev]\n extras_require={\n \"dev\": DEV_REQUIRED,\n \"ci\": CI_REQUIRED,\n \"gcp\": GCP_REQUIRED,\n \"aws\": AWS_REQUIRED,\n \"redis\": REDIS_REQUIRED,\n },\n include_package_data=True,\n license=\"Apache\",\n classifiers=[\n # Trove classifiers\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"License :: OSI Approved :: Apache Software License\",\n 
\"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n ],\n entry_points={\"console_scripts\": [\"feast=feast.cli:cli\"]},\n use_scm_version=use_scm_version,\n setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==1.*\", \"sphinx!=4.0.0\"],\n package_data={\n \"\": [\n \"protos/feast/**/*.proto\",\n \"protos/feast/third_party/grpc/health/v1/*.proto\",\n \"feast/protos/feast/**/*.py\",\n ],\n },\n cmdclass={\n \"build_proto\": BuildProtoCommand,\n \"build_py\": BuildCommand,\n \"develop\": DevelopCommand,\n },\n)\n", "path": "sdk/python/setup.py"}], "after_files": [{"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport glob\nimport os\nimport re\nimport shutil\nimport subprocess\nimport pathlib\n\nfrom distutils.cmd import Command\nfrom setuptools import find_packages\n\ntry:\n from setuptools import setup\n from setuptools.command.install import install\n from setuptools.command.develop import develop\n from setuptools.command.egg_info import egg_info\n from setuptools.command.sdist import sdist\n from setuptools.command.build_py import build_py\nexcept ImportError:\n from distutils.core import setup\n from distutils.command.install import install\n from distutils.command.build_py import build_py\n\nNAME = \"feast\"\nDESCRIPTION = \"Python SDK for Feast\"\nURL = \"https://github.com/feast-dev/feast\"\nAUTHOR = \"Feast\"\nREQUIRES_PYTHON = \">=3.7.0\"\n\nREQUIRED = [\n \"Click==8.*\",\n \"colorama>=0.3.9\",\n \"dill==0.3.*\",\n \"fastavro>=1.1.0\",\n \"google-api-core>=1.23.0\",\n \"googleapis-common-protos==1.52.*\",\n \"grpcio>=1.34.0\",\n \"grpcio-reflection>=1.34.0\",\n \"Jinja2>=2.0.0\",\n \"jsonschema\",\n \"mmh3\",\n \"pandas>=1.0.0\",\n \"pandavro==1.5.*\",\n \"protobuf>=3.10\",\n \"proto-plus<1.19.7\",\n \"pyarrow>=4.0.0\",\n \"pydantic>=1.0.0\",\n \"PyYAML>=5.4.*\",\n \"tabulate==0.8.*\",\n \"tenacity>=7.*\",\n \"toml==0.10.*\",\n \"tqdm==4.*\",\n \"fastapi>=0.68.0\",\n \"uvicorn[standard]>=0.14.0\",\n \"proto-plus<1.19.7\",\n \"tensorflow-metadata>=1.0.0,<2.0.0\",\n]\n\nGCP_REQUIRED = [\n \"google-cloud-bigquery>=2.28.1\",\n \"google-cloud-bigquery-storage >= 2.0.0\",\n \"google-cloud-datastore>=2.1.*\",\n \"google-cloud-storage>=1.34.*,<1.41\",\n \"google-cloud-core>=1.4.0,<2.0.0\",\n]\n\nREDIS_REQUIRED = [\n \"redis>=4.1.0\",\n \"hiredis>=2.0.0\",\n]\n\nAWS_REQUIRED = [\n \"boto3>=1.17.0\",\n \"docker>=5.0.2\",\n]\n\nCI_REQUIRED = (\n [\n \"cryptography==3.3.2\",\n \"flake8\",\n \"black==19.10b0\",\n \"isort>=5\",\n \"grpcio-tools==1.34.0\",\n \"grpcio-testing==1.34.0\",\n \"minio==7.1.0\",\n \"mock==2.0.0\",\n \"moto\",\n \"mypy==0.931\",\n \"mypy-protobuf==3.1.0\",\n \"avro==1.10.0\",\n \"gcsfs\",\n \"urllib3>=1.25.4\",\n \"pytest>=6.0.0\",\n \"pytest-cov\",\n \"pytest-xdist\",\n \"pytest-benchmark>=3.4.1\",\n \"pytest-lazy-fixture==0.6.3\",\n \"pytest-timeout==1.4.2\",\n \"pytest-ordering==0.6.*\",\n 
\"pytest-mock==1.10.4\",\n \"Sphinx!=4.0.0,<4.4.0\",\n \"sphinx-rtd-theme\",\n \"testcontainers==3.4.2\",\n \"adlfs==0.5.9\",\n \"firebase-admin==4.5.2\",\n \"pre-commit\",\n \"assertpy==1.1\",\n \"pip-tools\",\n \"types-protobuf\",\n \"types-python-dateutil\",\n \"types-pytz\",\n \"types-PyYAML\",\n \"types-redis\",\n \"types-requests\",\n \"types-setuptools\",\n \"types-tabulate\",\n ]\n + GCP_REQUIRED\n + REDIS_REQUIRED\n + AWS_REQUIRED\n)\n\nDEV_REQUIRED = [\"mypy-protobuf>=3.1.0\", \"grpcio-testing==1.*\"] + CI_REQUIRED\n\n# Get git repo root directory\nrepo_root = str(pathlib.Path(__file__).resolve().parent.parent.parent)\n\n# README file from Feast repo root directory\nREADME_FILE = os.path.join(repo_root, \"README.md\")\nwith open(README_FILE, \"r\", encoding=\"utf8\") as f:\n LONG_DESCRIPTION = f.read()\n\n# Add Support for parsing tags that have a prefix containing '/' (ie 'sdk/go') to setuptools_scm.\n# Regex modified from default tag regex in:\n# https://github.com/pypa/setuptools_scm/blob/2a1b46d38fb2b8aeac09853e660bcd0d7c1bc7be/src/setuptools_scm/config.py#L9\nTAG_REGEX = re.compile(\n r\"^(?:[\\/\\w-]+)?(?P<version>[vV]?\\d+(?:\\.\\d+){0,2}[^\\+]*)(?:\\+.*)?$\"\n)\n\n# Only set use_scm_version if git executable exists (setting this variable causes pip to use git under the hood)\nif shutil.which(\"git\"):\n use_scm_version = {\"root\": \"../..\", \"relative_to\": __file__, \"tag_regex\": TAG_REGEX}\nelse:\n use_scm_version = None\n\n\nclass BuildProtoCommand(Command):\n description = \"Builds the proto files into python files.\"\n\n def initialize_options(self):\n self.protoc = [\"python\", \"-m\", \"grpc_tools.protoc\"] # find_executable(\"protoc\")\n self.proto_folder = os.path.join(repo_root, \"protos\")\n self.this_package = os.path.join(os.path.dirname(__file__) or os.getcwd(), 'feast/protos')\n self.sub_folders = [\"core\", \"serving\", \"types\", \"storage\"]\n\n def finalize_options(self):\n pass\n\n def _generate_protos(self, path):\n proto_files = glob.glob(os.path.join(self.proto_folder, path))\n\n subprocess.check_call(self.protoc + [\n '-I', self.proto_folder,\n '--python_out', self.this_package,\n '--grpc_python_out', self.this_package,\n '--mypy_out', self.this_package] + proto_files)\n\n def run(self):\n for sub_folder in self.sub_folders:\n self._generate_protos(f'feast/{sub_folder}/*.proto')\n\n from pathlib import Path\n\n for path in Path('feast/protos').rglob('*.py'):\n for folder in self.sub_folders:\n # Read in the file\n with open(path, 'r') as file:\n filedata = file.read()\n\n # Replace the target string\n filedata = filedata.replace(f'from feast.{folder}', f'from feast.protos.feast.{folder}')\n\n # Write the file out again\n with open(path, 'w') as file:\n file.write(filedata)\n\n\nclass BuildCommand(build_py):\n \"\"\"Custom build command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n build_py.run(self)\n\n\nclass DevelopCommand(develop):\n \"\"\"Custom develop command.\"\"\"\n\n def run(self):\n self.run_command('build_proto')\n develop.run(self)\n\n\nsetup(\n name=NAME,\n author=AUTHOR,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=(\"tests\",)),\n install_requires=REQUIRED,\n # https://stackoverflow.com/questions/28509965/setuptools-development-requirements\n # Install dev requirements with: pip install -e .[dev]\n extras_require={\n \"dev\": DEV_REQUIRED,\n \"ci\": CI_REQUIRED,\n 
\"gcp\": GCP_REQUIRED,\n \"aws\": AWS_REQUIRED,\n \"redis\": REDIS_REQUIRED,\n },\n include_package_data=True,\n license=\"Apache\",\n classifiers=[\n # Trove classifiers\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n ],\n entry_points={\"console_scripts\": [\"feast=feast.cli:cli\"]},\n use_scm_version=use_scm_version,\n setup_requires=[\"setuptools_scm\", \"grpcio\", \"grpcio-tools==1.34.0\", \"mypy-protobuf==3.1.0\", \"sphinx!=4.0.0\"],\n package_data={\n \"\": [\n \"protos/feast/**/*.proto\",\n \"protos/feast/third_party/grpc/health/v1/*.proto\",\n \"feast/protos/feast/**/*.py\",\n ],\n },\n cmdclass={\n \"build_proto\": BuildProtoCommand,\n \"build_py\": BuildCommand,\n \"develop\": DevelopCommand,\n },\n)\n", "path": "sdk/python/setup.py"}]} | 3,645 | 281 |
gh_patches_debug_7622 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-18425 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Extractor for yourporn.sexy is broken
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.12.03*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.12.03**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser
### What is the purpose of your *issue*?
- [x] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
```
$ youtube-dl -v https://yourporn.sexy/post/5bf56573616c2.html
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'https://yourporn.sexy/post/5bf56573616c2.html']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2018.12.03
[debug] Python version 2.7.10 (CPython) - Darwin-17.7.0-x86_64-i386-64bit
[debug] exe versions: ffmpeg 4.0.2, ffprobe 4.0.2
[debug] Proxy map: {}
[YourPorn] 5bf56573616c2: Downloading webpage
[debug] Default format spec: bestvideo+bestaudio/best
[debug] Invoking downloader on u'https://yourporn.sexy/cdn/c11/ldjJi9usRy26gVwhgzEn9w/1544086469/hk5sajembx0dd41hcp09ah8m3s2/25qb3fr5d605l7m316y1969c42k.mp4'
ERROR: Did not get any data blocks
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/Users/v-delta/.local/bin/youtube-dl/__main__.py", line 19, in <module>
youtube_dl.main()
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py", line 472, in main
_real_main(argv)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py", line 462, in _real_main
retcode = ydl.download(all_urls)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 2001, in download
url, force_generic_extractor=self.params.get('force_generic_extractor', False))
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 803, in extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 857, in process_ie_result
return self.process_video_result(ie_result, download=download)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1635, in process_video_result
self.process_info(new_info)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1908, in process_info
success = dl(filename, info_dict)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 1847, in dl
return fd.download(name, info)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py", line 364, in download
return self.real_download(filename, info_dict)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py", line 342, in real_download
return download()
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py", line 312, in download
self.report_error('Did not get any data blocks')
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py", line 165, in report_error
self.ydl.report_error(*args, **kargs)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 620, in report_error
self.trouble(error_message, tb)
File "/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 582, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
```
### Description of your *issue*, suggested solution and other information
The videos play fine in any browser; that is because somewhere the URL the extractor delivers is changed from
```
https://yourporn.sexy/cdn/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4
```
to
```
https://yourporn.sexy/cdn2/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4
```
A `2` is inserted after `/cdn`. I will create a pull request fixing this bug soon.
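
In other words, the fix amounts to a one-line rewrite of the URL the extractor produces. A minimal standalone sketch of the idea (just the string transformation, not the actual extractor code):

```python
# The site now serves files from "/cdn2/" instead of "/cdn/", so the
# extracted URL only needs that path segment rewritten before downloading.
def fix_cdn_url(video_url):
    return video_url.replace('/cdn/', '/cdn2/')

broken = ('https://yourporn.sexy/cdn/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/'
          'fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4')
print(fix_cdn_url(broken))  # same URL with "/cdn/" replaced by "/cdn2/"
```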
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/yourporn.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from .common import InfoExtractor
4 from ..utils import urljoin
5
6
7 class YourPornIE(InfoExtractor):
8 _VALID_URL = r'https?://(?:www\.)?yourporn\.sexy/post/(?P<id>[^/?#&.]+)'
9 _TEST = {
10 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',
11 'md5': '6f8682b6464033d87acaa7a8ff0c092e',
12 'info_dict': {
13 'id': '57ffcb2e1179b',
14 'ext': 'mp4',
15 'title': 'md5:c9f43630bd968267672651ba905a7d35',
16 'thumbnail': r're:^https?://.*\.jpg$',
17 },
18 }
19
20 def _real_extract(self, url):
21 video_id = self._match_id(url)
22
23 webpage = self._download_webpage(url, video_id)
24
25 video_url = urljoin(url, self._parse_json(
26 self._search_regex(
27 r'data-vnfo=(["\'])(?P<data>{.+?})\1', webpage, 'data info',
28 group='data'),
29 video_id)[video_id])
30
31 title = (self._search_regex(
32 r'<[^>]+\bclass=["\']PostEditTA[^>]+>([^<]+)', webpage, 'title',
33 default=None) or self._og_search_description(webpage)).strip()
34 thumbnail = self._og_search_thumbnail(webpage)
35
36 return {
37 'id': video_id,
38 'url': video_url,
39 'title': title,
40 'thumbnail': thumbnail,
41 }
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py
--- a/youtube_dl/extractor/yourporn.py
+++ b/youtube_dl/extractor/yourporn.py
@@ -26,7 +26,7 @@
self._search_regex(
r'data-vnfo=(["\'])(?P<data>{.+?})\1', webpage, 'data info',
group='data'),
- video_id)[video_id])
+ video_id)[video_id]).replace('/cdn/', '/cdn2/')
title = (self._search_regex(
r'<[^>]+\bclass=["\']PostEditTA[^>]+>([^<]+)', webpage, 'title',
| {"golden_diff": "diff --git a/youtube_dl/extractor/yourporn.py b/youtube_dl/extractor/yourporn.py\n--- a/youtube_dl/extractor/yourporn.py\n+++ b/youtube_dl/extractor/yourporn.py\n@@ -26,7 +26,7 @@\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n group='data'),\n- video_id)[video_id])\n+ video_id)[video_id]).replace('/cdn/', '/cdn2/')\n \n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n", "issue": "Extractor for yourporn.sexy is broken\n## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.12.03*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.12.03**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [x] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [x] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n```\r\n$ youtube-dl -v https://yourporn.sexy/post/5bf56573616c2.html\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v', u'https://yourporn.sexy/post/5bf56573616c2.html']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2018.12.03\r\n[debug] Python version 2.7.10 (CPython) - Darwin-17.7.0-x86_64-i386-64bit\r\n[debug] exe versions: ffmpeg 4.0.2, ffprobe 4.0.2\r\n[debug] Proxy map: {}\r\n[YourPorn] 5bf56573616c2: Downloading webpage\r\n[debug] Default format spec: bestvideo+bestaudio/best\r\n[debug] Invoking downloader on u'https://yourporn.sexy/cdn/c11/ldjJi9usRy26gVwhgzEn9w/1544086469/hk5sajembx0dd41hcp09ah8m3s2/25qb3fr5d605l7m316y1969c42k.mp4'\r\n\r\n\r\nERROR: Did not get any data blocks\r\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 162, in _run_module_as_main\r\n \"__main__\", fname, loader, pkg_name)\r\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/Users/v-delta/.local/bin/youtube-dl/__main__.py\", line 19, in <module>\r\n youtube_dl.main()\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py\", line 472, in main\r\n _real_main(argv)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/__init__.py\", line 462, in _real_main\r\n retcode = 
ydl.download(all_urls)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 2001, in download\r\n url, force_generic_extractor=self.params.get('force_generic_extractor', False))\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 803, in extract_info\r\n return self.process_ie_result(ie_result, download, extra_info)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 857, in process_ie_result\r\n return self.process_video_result(ie_result, download=download)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1635, in process_video_result\r\n self.process_info(new_info)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1908, in process_info\r\n success = dl(filename, info_dict)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 1847, in dl\r\n return fd.download(name, info)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py\", line 364, in download\r\n return self.real_download(filename, info_dict)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py\", line 342, in real_download\r\n return download()\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/http.py\", line 312, in download\r\n self.report_error('Did not get any data blocks')\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/downloader/common.py\", line 165, in report_error\r\n self.ydl.report_error(*args, **kargs)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 620, in report_error\r\n self.trouble(error_message, tb)\r\n File \"/Users/v-delta/.local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 582, in trouble\r\n tb_data = traceback.format_list(traceback.extract_stack())\r\n```\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nThe videos play fine in any browser, that is because somewehre the URL the extractor delivers is changed from\r\n\r\n```\r\nhttps://yourporn.sexy/cdn/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4\r\n```\r\n\r\nto\r\n\r\n```\r\nhttps://yourporn.sexy/cdn2/c11/tlRIwnitpU4dxFtCUK1OMQ/1544087142/fx5xahe3b40kda1sc709a98q342/e51bafd56655a7m356z1s6tcv2i.mp4\r\n```\r\n\r\nA `2` is inserted after `/cdn`. 
I will create a pull request fixing this bug soon.\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import urljoin\n\n\nclass YourPornIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?yourporn\\.sexy/post/(?P<id>[^/?#&.]+)'\n _TEST = {\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n 'id': '57ffcb2e1179b',\n 'ext': 'mp4',\n 'title': 'md5:c9f43630bd968267672651ba905a7d35',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n },\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = urljoin(url, self._parse_json(\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n group='data'),\n video_id)[video_id])\n\n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n default=None) or self._og_search_description(webpage)).strip()\n thumbnail = self._og_search_thumbnail(webpage)\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'title': title,\n 'thumbnail': thumbnail,\n }\n", "path": "youtube_dl/extractor/yourporn.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom ..utils import urljoin\n\n\nclass YourPornIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?yourporn\\.sexy/post/(?P<id>[^/?#&.]+)'\n _TEST = {\n 'url': 'https://yourporn.sexy/post/57ffcb2e1179b.html',\n 'md5': '6f8682b6464033d87acaa7a8ff0c092e',\n 'info_dict': {\n 'id': '57ffcb2e1179b',\n 'ext': 'mp4',\n 'title': 'md5:c9f43630bd968267672651ba905a7d35',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n },\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n webpage = self._download_webpage(url, video_id)\n\n video_url = urljoin(url, self._parse_json(\n self._search_regex(\n r'data-vnfo=([\"\\'])(?P<data>{.+?})\\1', webpage, 'data info',\n group='data'),\n video_id)[video_id]).replace('/cdn/', '/cdn2/')\n\n title = (self._search_regex(\n r'<[^>]+\\bclass=[\"\\']PostEditTA[^>]+>([^<]+)', webpage, 'title',\n default=None) or self._og_search_description(webpage)).strip()\n thumbnail = self._og_search_thumbnail(webpage)\n\n return {\n 'id': video_id,\n 'url': video_url,\n 'title': title,\n 'thumbnail': thumbnail,\n }\n", "path": "youtube_dl/extractor/yourporn.py"}]} | 2,472 | 164 |
gh_patches_debug_25195 | rasdani/github-patches | git_diff | pypi__warehouse-1178 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Search relevancy is still not ideal
#1020 fixed #1019 for the majority of packages; however, a few still produce odd results:
For example:
https://warehouse.python.org/search/?q=flask (`Flask` package is 2nd, `Flask-Admin` is first)
https://warehouse.python.org/search/?q=django (`Django` package is 11th, `dotulu` is first)
https://warehouse.python.org/search/?q=git (First 3 packages do not have "git" anywhere in them)
This is hard to test in dev because the dev DB is a snapshot of TestPyPI, and those packages are missing.
@dstufft, would it be possible to get a more complete DB for local development?
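
For reference, ranking like this is normally tuned by weighting fields in the Elasticsearch `multi_match` query so that name matches outrank hits buried in long descriptions. A minimal `elasticsearch_dsl` sketch of the idea (the exact fields and boost values would still need tuning against real data):

```python
from elasticsearch_dsl import Q

# The "^N" suffix boosts a field: a hit on the package name counts ten times
# as much as an unboosted field, so searching "flask" ranks Flask itself first.
query = Q(
    "multi_match",
    query="flask",
    fields=[
        "normalized_name^10",  # name matches dominate the score
        "summary^5",
        "keywords^5",
        "description^5",
        "author",
        "maintainer",
    ],
)
print(query.to_dict())  # the raw query body sent to Elasticsearch
```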
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/packaging/search.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from elasticsearch_dsl import DocType, String, analyzer, MetaField, Date
14
15 from warehouse.search import doc_type
16
17
18 EmailAnalyzer = analyzer(
19 "email",
20 tokenizer="uax_url_email",
21 filter=["standard", "lowercase", "stop", "snowball"],
22 )
23
24
25 @doc_type
26 class Project(DocType):
27
28 name = String()
29 normalized_name = String(index="not_analyzed")
30 version = String(index="not_analyzed", multi=True)
31 summary = String(analyzer="snowball")
32 description = String(analyzer="snowball")
33 author = String()
34 author_email = String(analyzer=EmailAnalyzer)
35 maintainer = String()
36 maintainer_email = String(analyzer=EmailAnalyzer)
37 license = String()
38 home_page = String(index="not_analyzed")
39 download_url = String(index="not_analyzed")
40 keywords = String(analyzer="snowball")
41 platform = String(index="not_analyzed")
42 created = Date()
43 classifiers = String(index="not_analyzed", multi=True)
44
45 class Meta:
46 # disable the _all field to save some space
47 all = MetaField(enabled=False)
48
49 @classmethod
50 def from_db(cls, release):
51 obj = cls(meta={"id": release.project.normalized_name})
52 obj["name"] = release.project.name
53 obj["normalized_name"] = release.project.normalized_name
54 obj["version"] = [r.version for r in release.project.releases]
55 obj["summary"] = release.summary
56 obj["description"] = release.description
57 obj["author"] = release.author
58 obj["author_email"] = release.author_email
59 obj["maintainer"] = release.maintainer
60 obj["maintainer_email"] = release.maintainer_email
61 obj["home_page"] = release.home_page
62 obj["download_url"] = release.download_url
63 obj["keywords"] = release.keywords
64 obj["platform"] = release.platform
65 obj["created"] = release.created
66 obj["classifiers"] = list(release.classifiers)
67
68 return obj
69
```
Path: `warehouse/views.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import collections
14
15 from pyramid.httpexceptions import (
16 HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,
17 )
18 from pyramid.view import (
19 notfound_view_config, forbidden_view_config, view_config,
20 )
21 from sqlalchemy import func
22 from sqlalchemy.orm import aliased, joinedload
23
24 from warehouse.accounts import REDIRECT_FIELD_NAME
25 from warehouse.accounts.models import User
26 from warehouse.cache.origin import origin_cache
27 from warehouse.cache.http import cache_control
28 from warehouse.classifiers.models import Classifier
29 from warehouse.packaging.models import Project, Release, File
30 from warehouse.utils.row_counter import RowCount
31 from warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory
32
33
34 @view_config(context=HTTPException)
35 @notfound_view_config(append_slash=HTTPMovedPermanently)
36 def httpexception_view(exc, request):
37 return exc
38
39
40 @forbidden_view_config()
41 def forbidden(exc, request):
42 # If the forbidden error is because the user isn't logged in, then we'll
43 # redirect them to the log in page.
44 if request.authenticated_userid is None:
45 url = request.route_url(
46 "accounts.login",
47 _query={REDIRECT_FIELD_NAME: request.path_qs},
48 )
49 return HTTPSeeOther(url)
50
51 # If we've reached here, then the user is logged in and they are genuinely
52 # not allowed to access this page.
53 # TODO: Style the forbidden page.
54 return exc
55
56
57 @view_config(
58 route_name="robots.txt",
59 renderer="robots.txt",
60 decorator=[
61 cache_control(1 * 24 * 60 * 60), # 1 day
62 origin_cache(
63 1 * 24 * 60 * 60, # 1 day
64 stale_while_revalidate=6 * 60 * 60, # 6 hours
65 stale_if_error=1 * 24 * 60 * 60, # 1 day
66 ),
67 ],
68 )
69 def robotstxt(request):
70 request.response.content_type = "text/plain"
71 return {}
72
73
74 @view_config(
75 route_name="index",
76 renderer="index.html",
77 decorator=[
78 origin_cache(
79 1 * 60 * 60, # 1 hour
80 stale_while_revalidate=10 * 60, # 10 minutes
81 stale_if_error=1 * 24 * 60 * 60, # 1 day
82 keys=["all-projects"],
83 ),
84 ]
85 )
86 def index(request):
87 project_names = [
88 r[0] for r in (
89 request.db.query(File.name)
90 .group_by(File.name)
91 .order_by(func.sum(File.downloads).desc())
92 .limit(5)
93 .all())
94 ]
95 release_a = aliased(
96 Release,
97 request.db.query(Release)
98 .distinct(Release.name)
99 .filter(Release.name.in_(project_names))
100 .order_by(Release.name, Release._pypi_ordering.desc())
101 .subquery(),
102 )
103 top_projects = (
104 request.db.query(release_a)
105 .options(joinedload(release_a.project),
106 joinedload(release_a.uploader))
107 .order_by(func.array_idx(project_names, release_a.name))
108 .all()
109 )
110
111 latest_releases = (
112 request.db.query(Release)
113 .options(joinedload(Release.project),
114 joinedload(Release.uploader))
115 .order_by(Release.created.desc())
116 .limit(5)
117 .all()
118 )
119
120 counts = dict(
121 request.db.query(RowCount.table_name, RowCount.count)
122 .filter(
123 RowCount.table_name.in_([
124 Project.__tablename__,
125 Release.__tablename__,
126 File.__tablename__,
127 User.__tablename__,
128 ]))
129 .all()
130 )
131
132 return {
133 "latest_releases": latest_releases,
134 "top_projects": top_projects,
135 "num_projects": counts.get(Project.__tablename__, 0),
136 "num_releases": counts.get(Release.__tablename__, 0),
137 "num_files": counts.get(File.__tablename__, 0),
138 "num_users": counts.get(User.__tablename__, 0),
139 }
140
141
142 @view_config(
143 route_name="search",
144 renderer="search/results.html",
145 decorator=[
146 origin_cache(
147 1 * 60 * 60, # 1 hour
148 stale_while_revalidate=10 * 60, # 10 minutes
149 stale_if_error=1 * 24 * 60 * 60, # 1 day
150 keys=["all-projects"],
151 )
152 ],
153 )
154 def search(request):
155 if request.params.get("q"):
156 query = request.es.query(
157 "multi_match",
158 query=request.params["q"],
159 fields=[
160 "name^2", "version", "author", "author_email", "maintainer",
161 "maintainer_email", "home_page", "license", "summary",
162 "description", "keywords", "platform", "download_url",
163 ],
164 ).suggest(
165 name="name_suggestion",
166 text=request.params["q"],
167 term={"field": "name"}
168 )
169 else:
170 query = request.es.query()
171
172 if request.params.get("o"):
173 query = query.sort(request.params["o"])
174
175 if request.params.getall("c"):
176 query = query.filter('terms', classifiers=request.params.getall("c"))
177
178 page_num = int(request.params.get("page", 1))
179 page = ElasticsearchPage(
180 query,
181 page=page_num,
182 url_maker=paginate_url_factory(request),
183 )
184
185 if page.page_count and page_num > page.page_count:
186 raise HTTPNotFound
187
188 available_filters = collections.defaultdict(list)
189
190 for cls in request.db.query(Classifier).order_by(Classifier.classifier):
191 first, *_ = cls.classifier.split(' :: ')
192 available_filters[first].append(cls.classifier)
193
194 return {
195 "page": page,
196 "term": request.params.get("q", ''),
197 "order": request.params.get("o", ''),
198 "available_filters": sorted(available_filters.items()),
199 "applied_filters": request.params.getall("c"),
200 }
201
202
203 @view_config(
204 route_name="includes.current-user-indicator",
205 renderer="includes/current-user-indicator.html",
206 uses_session=True,
207 )
208 def current_user_indicator(request):
209 return {}
210
211
212 @view_config(route_name="health", renderer="string")
213 def health(request):
214 # This will ensure that we can access the database and run queries against
215 # it without doing anything that will take a lock or block other queries.
216 request.db.execute("SELECT 1")
217
218 # Nothing will actually check this, but it's a little nicer to have
219 # something to return besides an empty body.
220 return "OK"
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/packaging/search.py b/warehouse/packaging/search.py
--- a/warehouse/packaging/search.py
+++ b/warehouse/packaging/search.py
@@ -21,12 +21,18 @@
filter=["standard", "lowercase", "stop", "snowball"],
)
+NameAnalyzer = analyzer(
+ "normalized_name",
+ tokenizer="lowercase",
+ filter=["standard", "lowercase", "word_delimiter"],
+)
+
@doc_type
class Project(DocType):
name = String()
- normalized_name = String(index="not_analyzed")
+ normalized_name = String(analyzer=NameAnalyzer, index_options="docs")
version = String(index="not_analyzed", multi=True)
summary = String(analyzer="snowball")
description = String(analyzer="snowball")
diff --git a/warehouse/views.py b/warehouse/views.py
--- a/warehouse/views.py
+++ b/warehouse/views.py
@@ -157,9 +157,10 @@
"multi_match",
query=request.params["q"],
fields=[
- "name^2", "version", "author", "author_email", "maintainer",
- "maintainer_email", "home_page", "license", "summary",
- "description", "keywords", "platform", "download_url",
+ "author", "author_email", "description^5", "download_url",
+ "home_page", "keywords^5", "license", "maintainer",
+ "maintainer_email", "normalized_name^10", "platform",
+ "summary^5",
],
).suggest(
name="name_suggestion",
| {"golden_diff": "diff --git a/warehouse/packaging/search.py b/warehouse/packaging/search.py\n--- a/warehouse/packaging/search.py\n+++ b/warehouse/packaging/search.py\n@@ -21,12 +21,18 @@\n filter=[\"standard\", \"lowercase\", \"stop\", \"snowball\"],\n )\n \n+NameAnalyzer = analyzer(\n+ \"normalized_name\",\n+ tokenizer=\"lowercase\",\n+ filter=[\"standard\", \"lowercase\", \"word_delimiter\"],\n+)\n+\n \n @doc_type\n class Project(DocType):\n \n name = String()\n- normalized_name = String(index=\"not_analyzed\")\n+ normalized_name = String(analyzer=NameAnalyzer, index_options=\"docs\")\n version = String(index=\"not_analyzed\", multi=True)\n summary = String(analyzer=\"snowball\")\n description = String(analyzer=\"snowball\")\ndiff --git a/warehouse/views.py b/warehouse/views.py\n--- a/warehouse/views.py\n+++ b/warehouse/views.py\n@@ -157,9 +157,10 @@\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n- \"name^2\", \"version\", \"author\", \"author_email\", \"maintainer\",\n- \"maintainer_email\", \"home_page\", \"license\", \"summary\",\n- \"description\", \"keywords\", \"platform\", \"download_url\",\n+ \"author\", \"author_email\", \"description^5\", \"download_url\",\n+ \"home_page\", \"keywords^5\", \"license\", \"maintainer\",\n+ \"maintainer_email\", \"normalized_name^10\", \"platform\",\n+ \"summary^5\",\n ],\n ).suggest(\n name=\"name_suggestion\",\n", "issue": "Search relevancy is still not ideal\n#1020 fixed #1019 for the majority of packages, however a few still produce odd results:\n\nFor example:\nhttps://warehouse.python.org/search/?q=flask (`Flask` package is 2nd, `Flask-Admin` is first)\nhttps://warehouse.python.org/search/?q=django (`Django` package is 11th, `dotulu` is first)\nhttps://warehouse.python.org/search/?q=git (First 3 packages do not have \"git\" anywhere in them)\n\nThis is hard to test in dev because the dev DB is a snapshot of TestPyPI, and those packages are missing.\n\n@dstufft, would it be possible to get a more complete DB for local development?\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom elasticsearch_dsl import DocType, String, analyzer, MetaField, Date\n\nfrom warehouse.search import doc_type\n\n\nEmailAnalyzer = analyzer(\n \"email\",\n tokenizer=\"uax_url_email\",\n filter=[\"standard\", \"lowercase\", \"stop\", \"snowball\"],\n)\n\n\n@doc_type\nclass Project(DocType):\n\n name = String()\n normalized_name = String(index=\"not_analyzed\")\n version = String(index=\"not_analyzed\", multi=True)\n summary = String(analyzer=\"snowball\")\n description = String(analyzer=\"snowball\")\n author = String()\n author_email = String(analyzer=EmailAnalyzer)\n maintainer = String()\n maintainer_email = String(analyzer=EmailAnalyzer)\n license = String()\n home_page = String(index=\"not_analyzed\")\n download_url = String(index=\"not_analyzed\")\n keywords = String(analyzer=\"snowball\")\n platform = String(index=\"not_analyzed\")\n created = Date()\n classifiers = String(index=\"not_analyzed\", 
multi=True)\n\n class Meta:\n # disable the _all field to save some space\n all = MetaField(enabled=False)\n\n @classmethod\n def from_db(cls, release):\n obj = cls(meta={\"id\": release.project.normalized_name})\n obj[\"name\"] = release.project.name\n obj[\"normalized_name\"] = release.project.normalized_name\n obj[\"version\"] = [r.version for r in release.project.releases]\n obj[\"summary\"] = release.summary\n obj[\"description\"] = release.description\n obj[\"author\"] = release.author\n obj[\"author_email\"] = release.author_email\n obj[\"maintainer\"] = release.maintainer\n obj[\"maintainer_email\"] = release.maintainer_email\n obj[\"home_page\"] = release.home_page\n obj[\"download_url\"] = release.download_url\n obj[\"keywords\"] = release.keywords\n obj[\"platform\"] = release.platform\n obj[\"created\"] = release.created\n obj[\"classifiers\"] = list(release.classifiers)\n\n return obj\n", "path": "warehouse/packaging/search.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\n\nfrom pyramid.httpexceptions import (\n HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,\n)\nfrom pyramid.view import (\n notfound_view_config, forbidden_view_config, view_config,\n)\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import aliased, joinedload\n\nfrom warehouse.accounts import REDIRECT_FIELD_NAME\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.cache.http import cache_control\nfrom warehouse.classifiers.models import Classifier\nfrom warehouse.packaging.models import Project, Release, File\nfrom warehouse.utils.row_counter import RowCount\nfrom warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory\n\n\n@view_config(context=HTTPException)\n@notfound_view_config(append_slash=HTTPMovedPermanently)\ndef httpexception_view(exc, request):\n return exc\n\n\n@forbidden_view_config()\ndef forbidden(exc, request):\n # If the forbidden error is because the user isn't logged in, then we'll\n # redirect them to the log in page.\n if request.authenticated_userid is None:\n url = request.route_url(\n \"accounts.login\",\n _query={REDIRECT_FIELD_NAME: request.path_qs},\n )\n return HTTPSeeOther(url)\n\n # If we've reached here, then the user is logged in and they are genuinely\n # not allowed to access this page.\n # TODO: Style the forbidden page.\n return exc\n\n\n@view_config(\n route_name=\"robots.txt\",\n renderer=\"robots.txt\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=6 * 60 * 60, # 6 hours\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef robotstxt(request):\n request.response.content_type = \"text/plain\"\n return {}\n\n\n@view_config(\n route_name=\"index\",\n renderer=\"index.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n ),\n ]\n)\ndef 
index(request):\n project_names = [\n r[0] for r in (\n request.db.query(File.name)\n .group_by(File.name)\n .order_by(func.sum(File.downloads).desc())\n .limit(5)\n .all())\n ]\n release_a = aliased(\n Release,\n request.db.query(Release)\n .distinct(Release.name)\n .filter(Release.name.in_(project_names))\n .order_by(Release.name, Release._pypi_ordering.desc())\n .subquery(),\n )\n top_projects = (\n request.db.query(release_a)\n .options(joinedload(release_a.project),\n joinedload(release_a.uploader))\n .order_by(func.array_idx(project_names, release_a.name))\n .all()\n )\n\n latest_releases = (\n request.db.query(Release)\n .options(joinedload(Release.project),\n joinedload(Release.uploader))\n .order_by(Release.created.desc())\n .limit(5)\n .all()\n )\n\n counts = dict(\n request.db.query(RowCount.table_name, RowCount.count)\n .filter(\n RowCount.table_name.in_([\n Project.__tablename__,\n Release.__tablename__,\n File.__tablename__,\n User.__tablename__,\n ]))\n .all()\n )\n\n return {\n \"latest_releases\": latest_releases,\n \"top_projects\": top_projects,\n \"num_projects\": counts.get(Project.__tablename__, 0),\n \"num_releases\": counts.get(Release.__tablename__, 0),\n \"num_files\": counts.get(File.__tablename__, 0),\n \"num_users\": counts.get(User.__tablename__, 0),\n }\n\n\n@view_config(\n route_name=\"search\",\n renderer=\"search/results.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n )\n ],\n)\ndef search(request):\n if request.params.get(\"q\"):\n query = request.es.query(\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n \"name^2\", \"version\", \"author\", \"author_email\", \"maintainer\",\n \"maintainer_email\", \"home_page\", \"license\", \"summary\",\n \"description\", \"keywords\", \"platform\", \"download_url\",\n ],\n ).suggest(\n name=\"name_suggestion\",\n text=request.params[\"q\"],\n term={\"field\": \"name\"}\n )\n else:\n query = request.es.query()\n\n if request.params.get(\"o\"):\n query = query.sort(request.params[\"o\"])\n\n if request.params.getall(\"c\"):\n query = query.filter('terms', classifiers=request.params.getall(\"c\"))\n\n page_num = int(request.params.get(\"page\", 1))\n page = ElasticsearchPage(\n query,\n page=page_num,\n url_maker=paginate_url_factory(request),\n )\n\n if page.page_count and page_num > page.page_count:\n raise HTTPNotFound\n\n available_filters = collections.defaultdict(list)\n\n for cls in request.db.query(Classifier).order_by(Classifier.classifier):\n first, *_ = cls.classifier.split(' :: ')\n available_filters[first].append(cls.classifier)\n\n return {\n \"page\": page,\n \"term\": request.params.get(\"q\", ''),\n \"order\": request.params.get(\"o\", ''),\n \"available_filters\": sorted(available_filters.items()),\n \"applied_filters\": request.params.getall(\"c\"),\n }\n\n\n@view_config(\n route_name=\"includes.current-user-indicator\",\n renderer=\"includes/current-user-indicator.html\",\n uses_session=True,\n)\ndef current_user_indicator(request):\n return {}\n\n\n@view_config(route_name=\"health\", renderer=\"string\")\ndef health(request):\n # This will ensure that we can access the database and run queries against\n # it without doing anything that will take a lock or block other queries.\n request.db.execute(\"SELECT 1\")\n\n # Nothing will actually check this, but it's a little nicer to have\n # something to return besides an empty body.\n return \"OK\"\n", "path": 
"warehouse/views.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom elasticsearch_dsl import DocType, String, analyzer, MetaField, Date\n\nfrom warehouse.search import doc_type\n\n\nEmailAnalyzer = analyzer(\n \"email\",\n tokenizer=\"uax_url_email\",\n filter=[\"standard\", \"lowercase\", \"stop\", \"snowball\"],\n)\n\nNameAnalyzer = analyzer(\n \"normalized_name\",\n tokenizer=\"lowercase\",\n filter=[\"standard\", \"lowercase\", \"word_delimiter\"],\n)\n\n\n@doc_type\nclass Project(DocType):\n\n name = String()\n normalized_name = String(analyzer=NameAnalyzer, index_options=\"docs\")\n version = String(index=\"not_analyzed\", multi=True)\n summary = String(analyzer=\"snowball\")\n description = String(analyzer=\"snowball\")\n author = String()\n author_email = String(analyzer=EmailAnalyzer)\n maintainer = String()\n maintainer_email = String(analyzer=EmailAnalyzer)\n license = String()\n home_page = String(index=\"not_analyzed\")\n download_url = String(index=\"not_analyzed\")\n keywords = String(analyzer=\"snowball\")\n platform = String(index=\"not_analyzed\")\n created = Date()\n classifiers = String(index=\"not_analyzed\", multi=True)\n\n class Meta:\n # disable the _all field to save some space\n all = MetaField(enabled=False)\n\n @classmethod\n def from_db(cls, release):\n obj = cls(meta={\"id\": release.project.normalized_name})\n obj[\"name\"] = release.project.name\n obj[\"normalized_name\"] = release.project.normalized_name\n obj[\"version\"] = [r.version for r in release.project.releases]\n obj[\"summary\"] = release.summary\n obj[\"description\"] = release.description\n obj[\"author\"] = release.author\n obj[\"author_email\"] = release.author_email\n obj[\"maintainer\"] = release.maintainer\n obj[\"maintainer_email\"] = release.maintainer_email\n obj[\"home_page\"] = release.home_page\n obj[\"download_url\"] = release.download_url\n obj[\"keywords\"] = release.keywords\n obj[\"platform\"] = release.platform\n obj[\"created\"] = release.created\n obj[\"classifiers\"] = list(release.classifiers)\n\n return obj\n", "path": "warehouse/packaging/search.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\n\nfrom pyramid.httpexceptions import (\n HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,\n)\nfrom pyramid.view import (\n notfound_view_config, forbidden_view_config, view_config,\n)\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import aliased, joinedload\n\nfrom warehouse.accounts import 
REDIRECT_FIELD_NAME\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.cache.http import cache_control\nfrom warehouse.classifiers.models import Classifier\nfrom warehouse.packaging.models import Project, Release, File\nfrom warehouse.utils.row_counter import RowCount\nfrom warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory\n\n\n@view_config(context=HTTPException)\n@notfound_view_config(append_slash=HTTPMovedPermanently)\ndef httpexception_view(exc, request):\n return exc\n\n\n@forbidden_view_config()\ndef forbidden(exc, request):\n # If the forbidden error is because the user isn't logged in, then we'll\n # redirect them to the log in page.\n if request.authenticated_userid is None:\n url = request.route_url(\n \"accounts.login\",\n _query={REDIRECT_FIELD_NAME: request.path_qs},\n )\n return HTTPSeeOther(url)\n\n # If we've reached here, then the user is logged in and they are genuinely\n # not allowed to access this page.\n # TODO: Style the forbidden page.\n return exc\n\n\n@view_config(\n route_name=\"robots.txt\",\n renderer=\"robots.txt\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=6 * 60 * 60, # 6 hours\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef robotstxt(request):\n request.response.content_type = \"text/plain\"\n return {}\n\n\n@view_config(\n route_name=\"index\",\n renderer=\"index.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n ),\n ]\n)\ndef index(request):\n project_names = [\n r[0] for r in (\n request.db.query(File.name)\n .group_by(File.name)\n .order_by(func.sum(File.downloads).desc())\n .limit(5)\n .all())\n ]\n release_a = aliased(\n Release,\n request.db.query(Release)\n .distinct(Release.name)\n .filter(Release.name.in_(project_names))\n .order_by(Release.name, Release._pypi_ordering.desc())\n .subquery(),\n )\n top_projects = (\n request.db.query(release_a)\n .options(joinedload(release_a.project),\n joinedload(release_a.uploader))\n .order_by(func.array_idx(project_names, release_a.name))\n .all()\n )\n\n latest_releases = (\n request.db.query(Release)\n .options(joinedload(Release.project),\n joinedload(Release.uploader))\n .order_by(Release.created.desc())\n .limit(5)\n .all()\n )\n\n counts = dict(\n request.db.query(RowCount.table_name, RowCount.count)\n .filter(\n RowCount.table_name.in_([\n Project.__tablename__,\n Release.__tablename__,\n File.__tablename__,\n User.__tablename__,\n ]))\n .all()\n )\n\n return {\n \"latest_releases\": latest_releases,\n \"top_projects\": top_projects,\n \"num_projects\": counts.get(Project.__tablename__, 0),\n \"num_releases\": counts.get(Release.__tablename__, 0),\n \"num_files\": counts.get(File.__tablename__, 0),\n \"num_users\": counts.get(User.__tablename__, 0),\n }\n\n\n@view_config(\n route_name=\"search\",\n renderer=\"search/results.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n )\n ],\n)\ndef search(request):\n if request.params.get(\"q\"):\n query = request.es.query(\n \"multi_match\",\n query=request.params[\"q\"],\n fields=[\n \"author\", \"author_email\", \"description^5\", \"download_url\",\n \"home_page\", \"keywords^5\", \"license\", \"maintainer\",\n 
\"maintainer_email\", \"normalized_name^10\", \"platform\",\n \"summary^5\",\n ],\n ).suggest(\n name=\"name_suggestion\",\n text=request.params[\"q\"],\n term={\"field\": \"name\"}\n )\n else:\n query = request.es.query()\n\n if request.params.get(\"o\"):\n query = query.sort(request.params[\"o\"])\n\n if request.params.getall(\"c\"):\n query = query.filter('terms', classifiers=request.params.getall(\"c\"))\n\n page_num = int(request.params.get(\"page\", 1))\n page = ElasticsearchPage(\n query,\n page=page_num,\n url_maker=paginate_url_factory(request),\n )\n\n if page.page_count and page_num > page.page_count:\n raise HTTPNotFound\n\n available_filters = collections.defaultdict(list)\n\n for cls in request.db.query(Classifier).order_by(Classifier.classifier):\n first, *_ = cls.classifier.split(' :: ')\n available_filters[first].append(cls.classifier)\n\n return {\n \"page\": page,\n \"term\": request.params.get(\"q\", ''),\n \"order\": request.params.get(\"o\", ''),\n \"available_filters\": sorted(available_filters.items()),\n \"applied_filters\": request.params.getall(\"c\"),\n }\n\n\n@view_config(\n route_name=\"includes.current-user-indicator\",\n renderer=\"includes/current-user-indicator.html\",\n uses_session=True,\n)\ndef current_user_indicator(request):\n return {}\n\n\n@view_config(route_name=\"health\", renderer=\"string\")\ndef health(request):\n # This will ensure that we can access the database and run queries against\n # it without doing anything that will take a lock or block other queries.\n request.db.execute(\"SELECT 1\")\n\n # Nothing will actually check this, but it's a little nicer to have\n # something to return besides an empty body.\n return \"OK\"\n", "path": "warehouse/views.py"}]} | 3,284 | 372 |
gh_patches_debug_51074 | rasdani/github-patches | git_diff | Qiskit__qiskit-4331 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pass_manager_drawer requires filename to render
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: master
- **Python version**:
- **Operating system**:
### What is the current behavior?
The `pass_manager_drawer` requires a filename in order to run. However, this is not really a requirement of the code itself. Indeed, this works fine:
```python
pass_manager_drawer(pm, '')
```
### Steps to reproduce the problem
### What is the expected behavior?
### Suggested solutions
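One possible direction (a sketch, not a final patch): give `filename` a default of `None`, so that when no path is supplied the function simply returns the in-memory `PIL.Image` it already builds. Assuming that change (and that pillow is installed), both of the calls below would work; the imports mirror the function's own docstring example.

```python
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
from qiskit.visualization import pass_manager_drawer

pm = PassManager(Unroller(['u1', 'u2', 'u3', 'cx']))

# No filename: just get the PIL.Image back for inline display.
image = pass_manager_drawer(pm)

# Filename given: the image is additionally written to disk, as today.
pass_manager_drawer(pm, filename='passmanager.png')
```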
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/visualization/pass_manager_visualization.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Visualization function for a pass manager. Passes are grouped based on their
17 flow controller, and coloured based on the type of pass.
18 """
19 import os
20 import inspect
21 import tempfile
22
23 try:
24 from PIL import Image
25
26 HAS_PIL = True
27 except ImportError:
28 HAS_PIL = False
29
30 from qiskit.visualization import utils
31 from qiskit.visualization.exceptions import VisualizationError
32 from qiskit.transpiler.basepasses import AnalysisPass, TransformationPass
33
34 DEFAULT_STYLE = {AnalysisPass: 'red',
35 TransformationPass: 'blue'}
36
37
38 def pass_manager_drawer(pass_manager, filename, style=None, raw=False):
39 """
40 Draws the pass manager.
41
42 This function needs `pydot <https://github.com/erocarrera/pydot>`, which in turn needs
43 Graphviz <https://www.graphviz.org/>` to be installed.
44
45 Args:
46 pass_manager (PassManager): the pass manager to be drawn
47 filename (str): file path to save image to
48 style (dict or OrderedDict): keys are the pass classes and the values are
49 the colors to make them. An example can be seen in the DEFAULT_STYLE. An ordered
50 dict can be used to ensure a priority coloring when pass falls into multiple
51 categories. Any values not included in the provided dict will be filled in from
52 the default dict
53 raw (Bool) : True if you want to save the raw Dot output not an image. The
54 default is False.
55 Returns:
56 PIL.Image or None: an in-memory representation of the pass manager. Or None if
57 no image was generated or PIL is not installed.
58 Raises:
59 ImportError: when nxpd or pydot not installed.
60 VisualizationError: If raw=True and filename=None.
61
62 Example:
63 .. code-block::
64
65 %matplotlib inline
66 from qiskit import QuantumCircuit
67 from qiskit.compiler import transpile
68 from qiskit.transpiler import PassManager
69 from qiskit.visualization import pass_manager_drawer
70 from qiskit.transpiler.passes import Unroller
71
72 circ = QuantumCircuit(3)
73 circ.ccx(0, 1, 2)
74 circ.draw()
75
76 pass_ = Unroller(['u1', 'u2', 'u3', 'cx'])
77 pm = PassManager(pass_)
78 new_circ = pm.run(circ)
79 new_circ.draw(output='mpl')
80
81 pass_manager_drawer(pm, "passmanager.jpg")
82 """
83
84 try:
85 import subprocess
86
87 _PROC = subprocess.Popen(['dot', '-V'], # pylint: disable=invalid-name
88 stdout=subprocess.PIPE,
89 stderr=subprocess.PIPE)
90 _PROC.communicate()
91 if _PROC.returncode != 0:
92 has_graphviz = False
93 else:
94 has_graphviz = True
95 except Exception: # pylint: disable=broad-except
96 # this is raised when the dot command cannot be found, which means GraphViz
97 # isn't installed
98 has_graphviz = False
99
100 HAS_GRAPHVIZ = has_graphviz # pylint: disable=invalid-name
101
102 try:
103 import pydot
104 if not HAS_GRAPHVIZ:
105 raise ImportError
106 except ImportError:
107 raise ImportError("pass_manager_drawer requires pydot and graphviz. "
108 "Run 'pip install pydot'. "
109 "Graphviz can be installed using 'brew install graphviz' on Mac"
110 " or by downloading it from the website.")
111
112 passes = pass_manager.passes()
113
114 if not style:
115 style = DEFAULT_STYLE
116
117 # create the overall graph
118 graph = pydot.Dot()
119
120 # identifiers for nodes need to be unique, so assign an id
121 # can't just use python's id in case the exact same pass was
122 # appended more than once
123 component_id = 0
124
125 prev_node = None
126
127 for index, controller_group in enumerate(passes):
128
129 # label is the name of the flow controller parameter
130 label = "[%s] %s" % (index, ', '.join(controller_group['flow_controllers']))
131
132 # create the subgraph for this controller
133 subgraph = pydot.Cluster(str(component_id), label=label, fontname='helvetica',
134 labeljust='l')
135 component_id += 1
136
137 for pass_ in controller_group['passes']:
138
139 # label is the name of the pass
140 node = pydot.Node(str(component_id),
141 label=str(type(pass_).__name__),
142 color=_get_node_color(pass_, style),
143 shape="rectangle",
144 fontname='helvetica')
145
146 subgraph.add_node(node)
147 component_id += 1
148
149 # the arguments that were provided to the pass when it was created
150 arg_spec = inspect.getfullargspec(pass_.__init__)
151 # 0 is the args, 1: to remove the self arg
152 args = arg_spec[0][1:]
153
154 num_optional = len(arg_spec[3]) if arg_spec[3] else 0
155
156 # add in the inputs to the pass
157 for arg_index, arg in enumerate(args):
158 nd_style = 'solid'
159 # any optional args are dashed
160 # the num of optional counts from the end towards the start of the list
161 if arg_index >= (len(args) - num_optional):
162 nd_style = 'dashed'
163
164 input_node = pydot.Node(component_id, label=arg,
165 color="black",
166 shape="ellipse",
167 fontsize=10,
168 style=nd_style,
169 fontname='helvetica')
170 subgraph.add_node(input_node)
171 component_id += 1
172 subgraph.add_edge(pydot.Edge(input_node, node))
173
174 # if there is a previous node, add an edge between them
175 if prev_node:
176 subgraph.add_edge(pydot.Edge(prev_node, node))
177
178 prev_node = node
179
180 graph.add_subgraph(subgraph)
181
182 if raw:
183 if filename:
184 graph.write(filename, format='raw')
185 return None
186 else:
187 raise VisualizationError("if format=raw, then a filename is required.")
188
189 if not HAS_PIL and filename:
190 # linter says this isn't a method - it is
191 graph.write_png(filename) # pylint: disable=no-member
192 return None
193
194 with tempfile.TemporaryDirectory() as tmpdirname:
195 tmppath = os.path.join(tmpdirname, 'pass_manager.png')
196
197 # linter says this isn't a method - it is
198 graph.write_png(tmppath) # pylint: disable=no-member
199
200 image = Image.open(tmppath)
201 image = utils._trim(image)
202 os.remove(tmppath)
203 if filename:
204 image.save(filename, 'PNG')
205 return image
206
207
208 def _get_node_color(pss, style):
209 # look in the user provided dict first
210 for typ, color in style.items():
211 if isinstance(pss, typ):
212 return color
213
214 # failing that, look in the default
215 for typ, color in DEFAULT_STYLE.items():
216 if isinstance(pss, typ):
217 return color
218
219 return "black"
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/visualization/pass_manager_visualization.py b/qiskit/visualization/pass_manager_visualization.py
--- a/qiskit/visualization/pass_manager_visualization.py
+++ b/qiskit/visualization/pass_manager_visualization.py
@@ -35,7 +35,7 @@
TransformationPass: 'blue'}
-def pass_manager_drawer(pass_manager, filename, style=None, raw=False):
+def pass_manager_drawer(pass_manager, filename=None, style=None, raw=False):
"""
Draws the pass manager.
| {"golden_diff": "diff --git a/qiskit/visualization/pass_manager_visualization.py b/qiskit/visualization/pass_manager_visualization.py\n--- a/qiskit/visualization/pass_manager_visualization.py\n+++ b/qiskit/visualization/pass_manager_visualization.py\n@@ -35,7 +35,7 @@\n TransformationPass: 'blue'}\n \n \n-def pass_manager_drawer(pass_manager, filename, style=None, raw=False):\n+def pass_manager_drawer(pass_manager, filename=None, style=None, raw=False):\n \"\"\"\n Draws the pass manager.\n", "issue": "pass_manager_drawer requires filename to render\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**: master\r\n- **Python version**:\r\n- **Operating system**:\r\n\r\n### What is the current behavior?\r\nThe `pass_manager_drawer` requires a filename in order to run. However this is not really a requirement of the code itself. Indeed, this works fine:\r\n```python\r\npass_manager_drawer(pm, '')\r\n```\r\n\r\n\r\n### Steps to reproduce the problem\r\n\r\n\r\n\r\n### What is the expected behavior?\r\n\r\n\r\n\r\n### Suggested solutions\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\nVisualization function for a pass manager. Passes are grouped based on their\nflow controller, and coloured based on the type of pass.\n\"\"\"\nimport os\nimport inspect\nimport tempfile\n\ntry:\n from PIL import Image\n\n HAS_PIL = True\nexcept ImportError:\n HAS_PIL = False\n\nfrom qiskit.visualization import utils\nfrom qiskit.visualization.exceptions import VisualizationError\nfrom qiskit.transpiler.basepasses import AnalysisPass, TransformationPass\n\nDEFAULT_STYLE = {AnalysisPass: 'red',\n TransformationPass: 'blue'}\n\n\ndef pass_manager_drawer(pass_manager, filename, style=None, raw=False):\n \"\"\"\n Draws the pass manager.\n\n This function needs `pydot <https://github.com/erocarrera/pydot>`, which in turn needs\n Graphviz <https://www.graphviz.org/>` to be installed.\n\n Args:\n pass_manager (PassManager): the pass manager to be drawn\n filename (str): file path to save image to\n style (dict or OrderedDict): keys are the pass classes and the values are\n the colors to make them. An example can be seen in the DEFAULT_STYLE. An ordered\n dict can be used to ensure a priority coloring when pass falls into multiple\n categories. Any values not included in the provided dict will be filled in from\n the default dict\n raw (Bool) : True if you want to save the raw Dot output not an image. The\n default is False.\n Returns:\n PIL.Image or None: an in-memory representation of the pass manager. Or None if\n no image was generated or PIL is not installed.\n Raises:\n ImportError: when nxpd or pydot not installed.\n VisualizationError: If raw=True and filename=None.\n\n Example:\n .. 
code-block::\n\n %matplotlib inline\n from qiskit import QuantumCircuit\n from qiskit.compiler import transpile\n from qiskit.transpiler import PassManager\n from qiskit.visualization import pass_manager_drawer\n from qiskit.transpiler.passes import Unroller\n\n circ = QuantumCircuit(3)\n circ.ccx(0, 1, 2)\n circ.draw()\n\n pass_ = Unroller(['u1', 'u2', 'u3', 'cx'])\n pm = PassManager(pass_)\n new_circ = pm.run(circ)\n new_circ.draw(output='mpl')\n\n pass_manager_drawer(pm, \"passmanager.jpg\")\n \"\"\"\n\n try:\n import subprocess\n\n _PROC = subprocess.Popen(['dot', '-V'], # pylint: disable=invalid-name\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n _PROC.communicate()\n if _PROC.returncode != 0:\n has_graphviz = False\n else:\n has_graphviz = True\n except Exception: # pylint: disable=broad-except\n # this is raised when the dot command cannot be found, which means GraphViz\n # isn't installed\n has_graphviz = False\n\n HAS_GRAPHVIZ = has_graphviz # pylint: disable=invalid-name\n\n try:\n import pydot\n if not HAS_GRAPHVIZ:\n raise ImportError\n except ImportError:\n raise ImportError(\"pass_manager_drawer requires pydot and graphviz. \"\n \"Run 'pip install pydot'. \"\n \"Graphviz can be installed using 'brew install graphviz' on Mac\"\n \" or by downloading it from the website.\")\n\n passes = pass_manager.passes()\n\n if not style:\n style = DEFAULT_STYLE\n\n # create the overall graph\n graph = pydot.Dot()\n\n # identifiers for nodes need to be unique, so assign an id\n # can't just use python's id in case the exact same pass was\n # appended more than once\n component_id = 0\n\n prev_node = None\n\n for index, controller_group in enumerate(passes):\n\n # label is the name of the flow controller parameter\n label = \"[%s] %s\" % (index, ', '.join(controller_group['flow_controllers']))\n\n # create the subgraph for this controller\n subgraph = pydot.Cluster(str(component_id), label=label, fontname='helvetica',\n labeljust='l')\n component_id += 1\n\n for pass_ in controller_group['passes']:\n\n # label is the name of the pass\n node = pydot.Node(str(component_id),\n label=str(type(pass_).__name__),\n color=_get_node_color(pass_, style),\n shape=\"rectangle\",\n fontname='helvetica')\n\n subgraph.add_node(node)\n component_id += 1\n\n # the arguments that were provided to the pass when it was created\n arg_spec = inspect.getfullargspec(pass_.__init__)\n # 0 is the args, 1: to remove the self arg\n args = arg_spec[0][1:]\n\n num_optional = len(arg_spec[3]) if arg_spec[3] else 0\n\n # add in the inputs to the pass\n for arg_index, arg in enumerate(args):\n nd_style = 'solid'\n # any optional args are dashed\n # the num of optional counts from the end towards the start of the list\n if arg_index >= (len(args) - num_optional):\n nd_style = 'dashed'\n\n input_node = pydot.Node(component_id, label=arg,\n color=\"black\",\n shape=\"ellipse\",\n fontsize=10,\n style=nd_style,\n fontname='helvetica')\n subgraph.add_node(input_node)\n component_id += 1\n subgraph.add_edge(pydot.Edge(input_node, node))\n\n # if there is a previous node, add an edge between them\n if prev_node:\n subgraph.add_edge(pydot.Edge(prev_node, node))\n\n prev_node = node\n\n graph.add_subgraph(subgraph)\n\n if raw:\n if filename:\n graph.write(filename, format='raw')\n return None\n else:\n raise VisualizationError(\"if format=raw, then a filename is required.\")\n\n if not HAS_PIL and filename:\n # linter says this isn't a method - it is\n graph.write_png(filename) # pylint: disable=no-member\n return None\n\n 
with tempfile.TemporaryDirectory() as tmpdirname:\n tmppath = os.path.join(tmpdirname, 'pass_manager.png')\n\n # linter says this isn't a method - it is\n graph.write_png(tmppath) # pylint: disable=no-member\n\n image = Image.open(tmppath)\n image = utils._trim(image)\n os.remove(tmppath)\n if filename:\n image.save(filename, 'PNG')\n return image\n\n\ndef _get_node_color(pss, style):\n # look in the user provided dict first\n for typ, color in style.items():\n if isinstance(pss, typ):\n return color\n\n # failing that, look in the default\n for typ, color in DEFAULT_STYLE.items():\n if isinstance(pss, typ):\n return color\n\n return \"black\"\n", "path": "qiskit/visualization/pass_manager_visualization.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\nVisualization function for a pass manager. Passes are grouped based on their\nflow controller, and coloured based on the type of pass.\n\"\"\"\nimport os\nimport inspect\nimport tempfile\n\ntry:\n from PIL import Image\n\n HAS_PIL = True\nexcept ImportError:\n HAS_PIL = False\n\nfrom qiskit.visualization import utils\nfrom qiskit.visualization.exceptions import VisualizationError\nfrom qiskit.transpiler.basepasses import AnalysisPass, TransformationPass\n\nDEFAULT_STYLE = {AnalysisPass: 'red',\n TransformationPass: 'blue'}\n\n\ndef pass_manager_drawer(pass_manager, filename=None, style=None, raw=False):\n \"\"\"\n Draws the pass manager.\n\n This function needs `pydot <https://github.com/erocarrera/pydot>`, which in turn needs\n Graphviz <https://www.graphviz.org/>` to be installed.\n\n Args:\n pass_manager (PassManager): the pass manager to be drawn\n filename (str): file path to save image to\n style (dict or OrderedDict): keys are the pass classes and the values are\n the colors to make them. An example can be seen in the DEFAULT_STYLE. An ordered\n dict can be used to ensure a priority coloring when pass falls into multiple\n categories. Any values not included in the provided dict will be filled in from\n the default dict\n raw (Bool) : True if you want to save the raw Dot output not an image. The\n default is False.\n Returns:\n PIL.Image or None: an in-memory representation of the pass manager. Or None if\n no image was generated or PIL is not installed.\n Raises:\n ImportError: when nxpd or pydot not installed.\n VisualizationError: If raw=True and filename=None.\n\n Example:\n .. 
code-block::\n\n %matplotlib inline\n from qiskit import QuantumCircuit\n from qiskit.compiler import transpile\n from qiskit.transpiler import PassManager\n from qiskit.visualization import pass_manager_drawer\n from qiskit.transpiler.passes import Unroller\n\n circ = QuantumCircuit(3)\n circ.ccx(0, 1, 2)\n circ.draw()\n\n pass_ = Unroller(['u1', 'u2', 'u3', 'cx'])\n pm = PassManager(pass_)\n new_circ = pm.run(circ)\n new_circ.draw(output='mpl')\n\n pass_manager_drawer(pm, \"passmanager.jpg\")\n \"\"\"\n\n try:\n import subprocess\n\n _PROC = subprocess.Popen(['dot', '-V'], # pylint: disable=invalid-name\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n _PROC.communicate()\n if _PROC.returncode != 0:\n has_graphviz = False\n else:\n has_graphviz = True\n except Exception: # pylint: disable=broad-except\n # this is raised when the dot command cannot be found, which means GraphViz\n # isn't installed\n has_graphviz = False\n\n HAS_GRAPHVIZ = has_graphviz # pylint: disable=invalid-name\n\n try:\n import pydot\n if not HAS_GRAPHVIZ:\n raise ImportError\n except ImportError:\n raise ImportError(\"pass_manager_drawer requires pydot and graphviz. \"\n \"Run 'pip install pydot'. \"\n \"Graphviz can be installed using 'brew install graphviz' on Mac\"\n \" or by downloading it from the website.\")\n\n passes = pass_manager.passes()\n\n if not style:\n style = DEFAULT_STYLE\n\n # create the overall graph\n graph = pydot.Dot()\n\n # identifiers for nodes need to be unique, so assign an id\n # can't just use python's id in case the exact same pass was\n # appended more than once\n component_id = 0\n\n prev_node = None\n\n for index, controller_group in enumerate(passes):\n\n # label is the name of the flow controller parameter\n label = \"[%s] %s\" % (index, ', '.join(controller_group['flow_controllers']))\n\n # create the subgraph for this controller\n subgraph = pydot.Cluster(str(component_id), label=label, fontname='helvetica',\n labeljust='l')\n component_id += 1\n\n for pass_ in controller_group['passes']:\n\n # label is the name of the pass\n node = pydot.Node(str(component_id),\n label=str(type(pass_).__name__),\n color=_get_node_color(pass_, style),\n shape=\"rectangle\",\n fontname='helvetica')\n\n subgraph.add_node(node)\n component_id += 1\n\n # the arguments that were provided to the pass when it was created\n arg_spec = inspect.getfullargspec(pass_.__init__)\n # 0 is the args, 1: to remove the self arg\n args = arg_spec[0][1:]\n\n num_optional = len(arg_spec[3]) if arg_spec[3] else 0\n\n # add in the inputs to the pass\n for arg_index, arg in enumerate(args):\n nd_style = 'solid'\n # any optional args are dashed\n # the num of optional counts from the end towards the start of the list\n if arg_index >= (len(args) - num_optional):\n nd_style = 'dashed'\n\n input_node = pydot.Node(component_id, label=arg,\n color=\"black\",\n shape=\"ellipse\",\n fontsize=10,\n style=nd_style,\n fontname='helvetica')\n subgraph.add_node(input_node)\n component_id += 1\n subgraph.add_edge(pydot.Edge(input_node, node))\n\n # if there is a previous node, add an edge between them\n if prev_node:\n subgraph.add_edge(pydot.Edge(prev_node, node))\n\n prev_node = node\n\n graph.add_subgraph(subgraph)\n\n if raw:\n if filename:\n graph.write(filename, format='raw')\n return None\n else:\n raise VisualizationError(\"if format=raw, then a filename is required.\")\n\n if not HAS_PIL and filename:\n # linter says this isn't a method - it is\n graph.write_png(filename) # pylint: disable=no-member\n return None\n\n 
with tempfile.TemporaryDirectory() as tmpdirname:\n tmppath = os.path.join(tmpdirname, 'pass_manager.png')\n\n # linter says this isn't a method - it is\n graph.write_png(tmppath) # pylint: disable=no-member\n\n image = Image.open(tmppath)\n image = utils._trim(image)\n os.remove(tmppath)\n if filename:\n image.save(filename, 'PNG')\n return image\n\n\ndef _get_node_color(pss, style):\n # look in the user provided dict first\n for typ, color in style.items():\n if isinstance(pss, typ):\n return color\n\n # failing that, look in the default\n for typ, color in DEFAULT_STYLE.items():\n if isinstance(pss, typ):\n return color\n\n return \"black\"\n", "path": "qiskit/visualization/pass_manager_visualization.py"}]} | 2,636 | 112 |
gh_patches_debug_30359 | rasdani/github-patches | git_diff | apluslms__a-plus-1293 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Course staff may create duplicate student groups
Course staff may create student groups (course/models.py class StudentGroup) that contain exactly the same group members as an existing group. Duplicate groups should not be allowed. The course staff UI for editing groups is in the URL http://localhost:8000/def/current/teachers/groups/ (in the left navigation menu, it is the "Groups" link under the heading Course staff).
Course staff may also create new groups (or edit existing groups) that are empty (no members) or only have one member. Groups should always have at least two members.
When students create groups in the "form a group" page (with user personal codes), A+ already prevents empty and duplicate groups.
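A reasonable fix is to add the same server-side validation to the staff group form. The sketch below shows one way to do it for `GroupEditForm` in `course/forms.py`, assuming the `StudentGroup.get_exact` helper used by the student flow is available; the error messages are placeholders.

```python
# Hypothetical clean() to add to GroupEditForm.
def clean(self):
    cleaned_data = super().clean()
    members = cleaned_data.get('members')
    if members is not None:
        # A group must always have at least two members.
        if len(members) < 2:
            self.add_error('members', 'A group must have at least two members.')
        # Reject a member set that exactly duplicates an existing group.
        elif StudentGroup.get_exact(self.instance.course_instance, list(members)):
            self.add_error('members', 'A group with exactly these members already exists.')
    return cleaned_data
```

When editing an existing group (rather than creating one), the duplicate check would also need to ignore the group being edited.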
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `course/forms.py`
Content:
```
1 from typing import Any
2
3 from django import forms
4 from django.contrib.humanize.templatetags.humanize import ordinal
5 from django.utils.safestring import mark_safe
6 from django.utils.text import format_lazy
7 from django.utils.translation import gettext_lazy as _
8
9 from aplus.api import api_reverse
10 from exercise.models import SubmissionDraft
11 from lib.fields import UsersSearchSelectField
12 from .models import Enrollment, StudentGroup
13 from userprofile.models import UserProfile
14
15
16 class GroupsForm(forms.Form):
17
18 def __init__(self, *args, **kwargs):
19 self.profile = kwargs.pop('profile')
20 self.instance = kwargs.pop('instance')
21 self.content = kwargs.pop('content')
22 super().__init__(*args, **kwargs)
23 total = self.content.total()
24 min_size = max(total.min_group_size, 2)
25 max_size = total.max_group_size
26
27 for n in range(2, max_size + 1):
28 widget = forms.TextInput(attrs={'class':'form-control'})
29 field = forms.CharField(widget=widget, required=(n <= min_size))
30 field.label = mark_safe(format_lazy(_('GROUP_MEMBER_LABEL -- {num}'), num=ordinal(n)))
31 self.fields['member{:d}'.format(n)] = field
32
33 def clean(self):
34 super().clean()
35
36 self.member_profiles = [self.profile]
37 for key in self.fields.keys():
38 if key in self.cleaned_data and self.cleaned_data[key]:
39 enrollment = Enrollment.objects.filter(
40 course_instance=self.instance,
41 personal_code=self.cleaned_data[key].upper()
42 ).first()
43 if not enrollment:
44 self.add_error(key, _('ERROR_CODE_NOT_RECOGNIZED'))
45 elif enrollment.user_profile in self.member_profiles:
46 self.add_error(key, _('ERROR_USER_ALREADY_IN_GROUP'))
47 else:
48 self.member_profiles.append(enrollment.user_profile)
49
50 if not self.errors and len(self.member_profiles) > 1:
51 if StudentGroup.get_exact(self.instance, self.member_profiles):
52 self.add_error(None, _('ERROR_GROUP_ALREADY_EXISTS'))
53
54 return self.cleaned_data
55
56 def save(self):
57 group = StudentGroup(course_instance=self.instance)
58 group.save()
59 group.members.add(*self.member_profiles)
60 return group
61
62
63 class GroupSelectForm(forms.Form):
64 group = forms.IntegerField(required=True)
65
66 def __init__(self, *args, **kwargs):
67 self.profile = kwargs.pop('profile')
68 self.instance = kwargs.pop('instance')
69 super().__init__(*args, **kwargs)
70
71 def clean(self):
72 super().clean()
73 self.selected_group = None
74 if 'group' in self.cleaned_data:
75 gid = self.cleaned_data['group']
76 if gid != 0:
77 group = self.profile.groups.filter(id=gid, course_instance=self.instance).first()
78 if group:
79 self.selected_group = group
80 else:
81 self.add_error('group', 'Invalid group id')
82 return self.cleaned_data
83
84 def save(self) -> Enrollment:
85 enrollment = self.instance.get_enrollment_for(self.profile.user)
86 enrollment.selected_group = self.selected_group
87 enrollment.save()
88 # Deactivate all drafts when changing groups.
89 SubmissionDraft.objects.filter(
90 exercise__course_module__course_instance=self.instance,
91 submitter=self.profile,
92 active=True,
93 ).update(active=False)
94 return enrollment
95
96
97 class GroupEditForm(forms.ModelForm):
98
99 members = UsersSearchSelectField(queryset=UserProfile.objects.none(),
100 initial_queryset=UserProfile.objects.none(),
101 label=_('LABEL_MEMBERS'),
102 )
103
104 def __init__(self, *args: Any, **kwargs: Any) -> None:
105 course_instance = kwargs.get('instance').course_instance
106 super().__init__(*args, **kwargs)
107 self.fields['members'].widget.search_api_url = api_reverse(
108 "course-students-list",
109 kwargs={'course_id': course_instance.id},
110 )
111 self.fields["members"].queryset = course_instance.get_student_profiles()
112 # Course staff may use this form for modifying and creating student groups.
113 # If an existing group is being modified, its current members must be
114 # set to the initial queryset.
115 if self.instance.id:
116 self.fields["members"].initial_queryset = self.instance.members.all()
117
118 class Meta:
119 model = StudentGroup
120 fields = ['members']
121
122
123 class EnrollStudentsForm(forms.Form):
124
125 user_profiles = UsersSearchSelectField(queryset=UserProfile.objects.all(),
126 initial_queryset=UserProfile.objects.none(),
127 label=_('LABEL_USERS'),
128 required=False,
129 )
130
131 def __init__(self, *args: Any, **kwargs: Any) -> None:
132 self.instance = kwargs.pop('instance')
133 super().__init__(*args, **kwargs)
134 self.fields['user_profiles'].widget.search_api_url = api_reverse("user-list")
135 if self.instance.sis_id:
136 self.fields['sis'] = forms.BooleanField(
137 required=False,
138 label=_('LABEL_ENROLL_FROM_SIS'),
139 )
140
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/course/forms.py b/course/forms.py
--- a/course/forms.py
+++ b/course/forms.py
@@ -5,6 +5,7 @@
from django.utils.safestring import mark_safe
from django.utils.text import format_lazy
from django.utils.translation import gettext_lazy as _
+from django.db.models import Count
from aplus.api import api_reverse
from exercise.models import SubmissionDraft
@@ -115,6 +116,30 @@
if self.instance.id:
self.fields["members"].initial_queryset = self.instance.members.all()
+ def clean(self):
+ super().clean()
+ members = self.cleaned_data.get('members')
+ if members:
+ if len(members) == 1:
+ self.add_error('members', _('MUST_HAVE_TWO_MEMBERS'))
+ course_instance = self.instance.course_instance
+ # Filter all groups with course instance and that have one or more similar members as in the members list
+ filtered_groups = StudentGroup.objects.filter(course_instance=course_instance, members__in=members)
+ # Count number of members in each group
+ groups_with_member_count = filtered_groups.annotate(member_count=Count('members'))
+ # Filter only those groups that have same number of members
+ groups_with_exact_member_count = groups_with_member_count.filter(member_count=len(members))
+ # Loop through the returned groups and check if any group with exact same members exist
+ group_exists = False
+ for group in groups_with_exact_member_count:
+ group_members = group.members.all()
+ if list(group_members) == list(members):
+ group_exists = True
+ if group_exists:
+ self.add_error('members', _('ERROR_GROUP_ALREADY_EXISTS'))
+ return self.cleaned_data
+
+
class Meta:
model = StudentGroup
fields = ['members']
| {"golden_diff": "diff --git a/course/forms.py b/course/forms.py\n--- a/course/forms.py\n+++ b/course/forms.py\n@@ -5,6 +5,7 @@\n from django.utils.safestring import mark_safe\n from django.utils.text import format_lazy\n from django.utils.translation import gettext_lazy as _\n+from django.db.models import Count\n \n from aplus.api import api_reverse\n from exercise.models import SubmissionDraft\n@@ -115,6 +116,30 @@\n if self.instance.id:\n self.fields[\"members\"].initial_queryset = self.instance.members.all()\n \n+ def clean(self):\n+ super().clean()\n+ members = self.cleaned_data.get('members')\n+ if members:\n+ if len(members) == 1:\n+ self.add_error('members', _('MUST_HAVE_TWO_MEMBERS'))\n+ course_instance = self.instance.course_instance\n+ # Filter all groups with course instance and that have one or more similar members as in the members list\n+ filtered_groups = StudentGroup.objects.filter(course_instance=course_instance, members__in=members)\n+ # Count number of members in each group\n+ groups_with_member_count = filtered_groups.annotate(member_count=Count('members'))\n+ # Filter only those groups that have same number of members\n+ groups_with_exact_member_count = groups_with_member_count.filter(member_count=len(members))\n+ # Loop through the returned groups and check if any group with exact same members exist\n+ group_exists = False\n+ for group in groups_with_exact_member_count:\n+ group_members = group.members.all()\n+ if list(group_members) == list(members):\n+ group_exists = True\n+ if group_exists:\n+ self.add_error('members', _('ERROR_GROUP_ALREADY_EXISTS'))\n+ return self.cleaned_data\n+\n+\n class Meta:\n model = StudentGroup\n fields = ['members']\n", "issue": "Course staff may create duplicate student groups\nCourse staff may create student groups (course/models.py class StudentGroup) that contain exactly the same group members as an existing group. Duplicate groups should not be allowed. The course staff UI for editing groups is in the URL http://localhost:8000/def/current/teachers/groups/ (in the left navigation menu, it is the \"Groups\" link under the heading Course staff).\r\n\r\nCourse staff may also create new groups (or edit existing groups) that are empty (no members) or only have one member. 
Groups should always have at least two members.\r\n\r\nWhen students create groups in the \"form a group\" page (with user personal codes), A+ already prevents empty and duplicate groups.\n", "before_files": [{"content": "from typing import Any\n\nfrom django import forms\nfrom django.contrib.humanize.templatetags.humanize import ordinal\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nfrom aplus.api import api_reverse\nfrom exercise.models import SubmissionDraft\nfrom lib.fields import UsersSearchSelectField\nfrom .models import Enrollment, StudentGroup\nfrom userprofile.models import UserProfile\n\n\nclass GroupsForm(forms.Form):\n\n def __init__(self, *args, **kwargs):\n self.profile = kwargs.pop('profile')\n self.instance = kwargs.pop('instance')\n self.content = kwargs.pop('content')\n super().__init__(*args, **kwargs)\n total = self.content.total()\n min_size = max(total.min_group_size, 2)\n max_size = total.max_group_size\n\n for n in range(2, max_size + 1):\n widget = forms.TextInput(attrs={'class':'form-control'})\n field = forms.CharField(widget=widget, required=(n <= min_size))\n field.label = mark_safe(format_lazy(_('GROUP_MEMBER_LABEL -- {num}'), num=ordinal(n)))\n self.fields['member{:d}'.format(n)] = field\n\n def clean(self):\n super().clean()\n\n self.member_profiles = [self.profile]\n for key in self.fields.keys():\n if key in self.cleaned_data and self.cleaned_data[key]:\n enrollment = Enrollment.objects.filter(\n course_instance=self.instance,\n personal_code=self.cleaned_data[key].upper()\n ).first()\n if not enrollment:\n self.add_error(key, _('ERROR_CODE_NOT_RECOGNIZED'))\n elif enrollment.user_profile in self.member_profiles:\n self.add_error(key, _('ERROR_USER_ALREADY_IN_GROUP'))\n else:\n self.member_profiles.append(enrollment.user_profile)\n\n if not self.errors and len(self.member_profiles) > 1:\n if StudentGroup.get_exact(self.instance, self.member_profiles):\n self.add_error(None, _('ERROR_GROUP_ALREADY_EXISTS'))\n\n return self.cleaned_data\n\n def save(self):\n group = StudentGroup(course_instance=self.instance)\n group.save()\n group.members.add(*self.member_profiles)\n return group\n\n\nclass GroupSelectForm(forms.Form):\n group = forms.IntegerField(required=True)\n\n def __init__(self, *args, **kwargs):\n self.profile = kwargs.pop('profile')\n self.instance = kwargs.pop('instance')\n super().__init__(*args, **kwargs)\n\n def clean(self):\n super().clean()\n self.selected_group = None\n if 'group' in self.cleaned_data:\n gid = self.cleaned_data['group']\n if gid != 0:\n group = self.profile.groups.filter(id=gid, course_instance=self.instance).first()\n if group:\n self.selected_group = group\n else:\n self.add_error('group', 'Invalid group id')\n return self.cleaned_data\n\n def save(self) -> Enrollment:\n enrollment = self.instance.get_enrollment_for(self.profile.user)\n enrollment.selected_group = self.selected_group\n enrollment.save()\n # Deactivate all drafts when changing groups.\n SubmissionDraft.objects.filter(\n exercise__course_module__course_instance=self.instance,\n submitter=self.profile,\n active=True,\n ).update(active=False)\n return enrollment\n\n\nclass GroupEditForm(forms.ModelForm):\n\n members = UsersSearchSelectField(queryset=UserProfile.objects.none(),\n initial_queryset=UserProfile.objects.none(),\n label=_('LABEL_MEMBERS'),\n )\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n course_instance = 
kwargs.get('instance').course_instance\n super().__init__(*args, **kwargs)\n self.fields['members'].widget.search_api_url = api_reverse(\n \"course-students-list\",\n kwargs={'course_id': course_instance.id},\n )\n self.fields[\"members\"].queryset = course_instance.get_student_profiles()\n # Course staff may use this form for modifying and creating student groups.\n # If an existing group is being modified, its current members must be\n # set to the initial queryset.\n if self.instance.id:\n self.fields[\"members\"].initial_queryset = self.instance.members.all()\n\n class Meta:\n model = StudentGroup\n fields = ['members']\n\n\nclass EnrollStudentsForm(forms.Form):\n\n user_profiles = UsersSearchSelectField(queryset=UserProfile.objects.all(),\n initial_queryset=UserProfile.objects.none(),\n label=_('LABEL_USERS'),\n required=False,\n )\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n self.instance = kwargs.pop('instance')\n super().__init__(*args, **kwargs)\n self.fields['user_profiles'].widget.search_api_url = api_reverse(\"user-list\")\n if self.instance.sis_id:\n self.fields['sis'] = forms.BooleanField(\n required=False,\n label=_('LABEL_ENROLL_FROM_SIS'),\n )\n", "path": "course/forms.py"}], "after_files": [{"content": "from typing import Any\n\nfrom django import forms\nfrom django.contrib.humanize.templatetags.humanize import ordinal\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import gettext_lazy as _\nfrom django.db.models import Count\n\nfrom aplus.api import api_reverse\nfrom exercise.models import SubmissionDraft\nfrom lib.fields import UsersSearchSelectField\nfrom .models import Enrollment, StudentGroup\nfrom userprofile.models import UserProfile\n\n\nclass GroupsForm(forms.Form):\n\n def __init__(self, *args, **kwargs):\n self.profile = kwargs.pop('profile')\n self.instance = kwargs.pop('instance')\n self.content = kwargs.pop('content')\n super().__init__(*args, **kwargs)\n total = self.content.total()\n min_size = max(total.min_group_size, 2)\n max_size = total.max_group_size\n\n for n in range(2, max_size + 1):\n widget = forms.TextInput(attrs={'class':'form-control'})\n field = forms.CharField(widget=widget, required=(n <= min_size))\n field.label = mark_safe(format_lazy(_('GROUP_MEMBER_LABEL -- {num}'), num=ordinal(n)))\n self.fields['member{:d}'.format(n)] = field\n\n def clean(self):\n super().clean()\n\n self.member_profiles = [self.profile]\n for key in self.fields.keys():\n if key in self.cleaned_data and self.cleaned_data[key]:\n enrollment = Enrollment.objects.filter(\n course_instance=self.instance,\n personal_code=self.cleaned_data[key].upper()\n ).first()\n if not enrollment:\n self.add_error(key, _('ERROR_CODE_NOT_RECOGNIZED'))\n elif enrollment.user_profile in self.member_profiles:\n self.add_error(key, _('ERROR_USER_ALREADY_IN_GROUP'))\n else:\n self.member_profiles.append(enrollment.user_profile)\n\n if not self.errors and len(self.member_profiles) > 1:\n if StudentGroup.get_exact(self.instance, self.member_profiles):\n self.add_error(None, _('ERROR_GROUP_ALREADY_EXISTS'))\n\n return self.cleaned_data\n\n def save(self):\n group = StudentGroup(course_instance=self.instance)\n group.save()\n group.members.add(*self.member_profiles)\n return group\n\n\nclass GroupSelectForm(forms.Form):\n group = forms.IntegerField(required=True)\n\n def __init__(self, *args, **kwargs):\n self.profile = kwargs.pop('profile')\n self.instance = kwargs.pop('instance')\n super().__init__(*args, 
**kwargs)\n\n def clean(self):\n super().clean()\n self.selected_group = None\n if 'group' in self.cleaned_data:\n gid = self.cleaned_data['group']\n if gid != 0:\n group = self.profile.groups.filter(id=gid, course_instance=self.instance).first()\n if group:\n self.selected_group = group\n else:\n self.add_error('group', 'Invalid group id')\n return self.cleaned_data\n\n def save(self) -> Enrollment:\n enrollment = self.instance.get_enrollment_for(self.profile.user)\n enrollment.selected_group = self.selected_group\n enrollment.save()\n # Deactivate all drafts when changing groups.\n SubmissionDraft.objects.filter(\n exercise__course_module__course_instance=self.instance,\n submitter=self.profile,\n active=True,\n ).update(active=False)\n return enrollment\n\n\nclass GroupEditForm(forms.ModelForm):\n\n members = UsersSearchSelectField(queryset=UserProfile.objects.none(),\n initial_queryset=UserProfile.objects.none(),\n label=_('LABEL_MEMBERS'),\n )\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n course_instance = kwargs.get('instance').course_instance\n super().__init__(*args, **kwargs)\n self.fields['members'].widget.search_api_url = api_reverse(\n \"course-students-list\",\n kwargs={'course_id': course_instance.id},\n )\n self.fields[\"members\"].queryset = course_instance.get_student_profiles()\n # Course staff may use this form for modifying and creating student groups.\n # If an existing group is being modified, its current members must be\n # set to the initial queryset.\n if self.instance.id:\n self.fields[\"members\"].initial_queryset = self.instance.members.all()\n\n def clean(self):\n super().clean()\n members = self.cleaned_data.get('members')\n if members:\n if len(members) == 1:\n self.add_error('members', _('MUST_HAVE_TWO_MEMBERS'))\n course_instance = self.instance.course_instance\n # Filter all groups with course instance and that have one or more similar members as in the members list\n filtered_groups = StudentGroup.objects.filter(course_instance=course_instance, members__in=members)\n # Count number of members in each group\n groups_with_member_count = filtered_groups.annotate(member_count=Count('members'))\n # Filter only those groups that have same number of members\n groups_with_exact_member_count = groups_with_member_count.filter(member_count=len(members))\n # Loop through the returned groups and check if any group with exact same members exist\n group_exists = False\n for group in groups_with_exact_member_count:\n group_members = group.members.all()\n if list(group_members) == list(members):\n group_exists = True\n if group_exists:\n self.add_error('members', _('ERROR_GROUP_ALREADY_EXISTS'))\n return self.cleaned_data\n\n\n class Meta:\n model = StudentGroup\n fields = ['members']\n\n\nclass EnrollStudentsForm(forms.Form):\n\n user_profiles = UsersSearchSelectField(queryset=UserProfile.objects.all(),\n initial_queryset=UserProfile.objects.none(),\n label=_('LABEL_USERS'),\n required=False,\n )\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n self.instance = kwargs.pop('instance')\n super().__init__(*args, **kwargs)\n self.fields['user_profiles'].widget.search_api_url = api_reverse(\"user-list\")\n if self.instance.sis_id:\n self.fields['sis'] = forms.BooleanField(\n required=False,\n label=_('LABEL_ENROLL_FROM_SIS'),\n )\n", "path": "course/forms.py"}]} | 1,768 | 399 |
gh_patches_debug_24936 | rasdani/github-patches | git_diff | DDMAL__CantusDB-733 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Links to unpublished sources should not appear on Provenance detail pages
Example: visit http://206.12.93.196/provenance/3665 (while logged out), then click on the first link. We get a 403 Forbidden error, since the source is unpublished.
Unpublished sources should not be listed on the Provenance Detail page.
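One way to fix this is to mirror the pattern `CenturyDetailView` already uses (see the file below): collect the sources in `get_context_data` and drop unpublished ones for logged-out visitors. A sketch, assuming `Source` has a `provenance` foreign key and a `published` flag, as the analogous century view suggests:

```python
from django.views.generic import DetailView
from main_app.models import Provenance, Source


class ProvenanceDetailView(DetailView):
    model = Provenance
    context_object_name = "provenance"
    template_name = "provenance_detail.html"

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        sources = Source.objects.filter(provenance=self.get_object())
        # Logged-out visitors must only ever see published sources.
        if not self.request.user.is_authenticated:
            sources = sources.filter(published=True)
        context["sources"] = sources.only("title", "id")
        return context
```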
Credit to @zhannaklimanova and her link checker script for catching this bug!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/views/provenance.py`
Content:
```
1 from django.views.generic import DetailView
2 from main_app.models import Provenance
3
4
5 class ProvenanceDetailView(DetailView):
6 model = Provenance
7 context_object_name = "provenance"
8 template_name = "provenance_detail.html"
9
```
Path: `django/cantusdb_project/main_app/views/century.py`
Content:
```
1 from django.views.generic import DetailView
2 from main_app.models import Century, Source
3 from typing import Any
4
5
6 class CenturyDetailView(DetailView):
7 model = Century
8 context_object_name = "century"
9 template_name = "century_detail.html"
10
11 def get_context_data(self, **kwargs: Any) -> dict[str, Any]:
12 context = super().get_context_data(**kwargs)
13 century = self.get_object()
14 user = self.request.user
15 display_unpublished = user.is_authenticated
16 sources = Source.objects.filter(century=century)
17 if not display_unpublished:
18 sources = sources.filter(published=True)
19 sources = sources.only("title", "id")
20 context["sources"] = sources
21 return context
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/views/century.py b/django/cantusdb_project/main_app/views/century.py
--- a/django/cantusdb_project/main_app/views/century.py
+++ b/django/cantusdb_project/main_app/views/century.py
@@ -16,6 +16,6 @@
sources = Source.objects.filter(century=century)
if not display_unpublished:
sources = sources.filter(published=True)
- sources = sources.only("title", "id")
+ sources = sources.only("title", "id", "siglum")
context["sources"] = sources
return context
diff --git a/django/cantusdb_project/main_app/views/provenance.py b/django/cantusdb_project/main_app/views/provenance.py
--- a/django/cantusdb_project/main_app/views/provenance.py
+++ b/django/cantusdb_project/main_app/views/provenance.py
@@ -1,8 +1,21 @@
from django.views.generic import DetailView
-from main_app.models import Provenance
+from main_app.models import Provenance, Source
+from typing import Any
class ProvenanceDetailView(DetailView):
model = Provenance
context_object_name = "provenance"
template_name = "provenance_detail.html"
+
+ def get_context_data(self, **kwargs: Any) -> dict[str, Any]:
+ context = super().get_context_data(**kwargs)
+ provenance = self.get_object()
+ user = self.request.user
+ display_unpublished = user.is_authenticated
+ sources = Source.objects.filter(provenance=provenance)
+ if not display_unpublished:
+ sources = sources.filter(published=True)
+ sources = sources.only("title", "id", "siglum")
+ context["sources"] = sources
+ return context
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/century.py b/django/cantusdb_project/main_app/views/century.py\n--- a/django/cantusdb_project/main_app/views/century.py\n+++ b/django/cantusdb_project/main_app/views/century.py\n@@ -16,6 +16,6 @@\n sources = Source.objects.filter(century=century)\n if not display_unpublished:\n sources = sources.filter(published=True)\n- sources = sources.only(\"title\", \"id\")\n+ sources = sources.only(\"title\", \"id\", \"siglum\")\n context[\"sources\"] = sources\n return context\ndiff --git a/django/cantusdb_project/main_app/views/provenance.py b/django/cantusdb_project/main_app/views/provenance.py\n--- a/django/cantusdb_project/main_app/views/provenance.py\n+++ b/django/cantusdb_project/main_app/views/provenance.py\n@@ -1,8 +1,21 @@\n from django.views.generic import DetailView\n-from main_app.models import Provenance\n+from main_app.models import Provenance, Source\n+from typing import Any\n \n \n class ProvenanceDetailView(DetailView):\n model = Provenance\n context_object_name = \"provenance\"\n template_name = \"provenance_detail.html\"\n+\n+ def get_context_data(self, **kwargs: Any) -> dict[str, Any]:\n+ context = super().get_context_data(**kwargs)\n+ provenance = self.get_object()\n+ user = self.request.user\n+ display_unpublished = user.is_authenticated\n+ sources = Source.objects.filter(provenance=provenance)\n+ if not display_unpublished:\n+ sources = sources.filter(published=True)\n+ sources = sources.only(\"title\", \"id\", \"siglum\")\n+ context[\"sources\"] = sources\n+ return context\n", "issue": "Links to unpublished sources should not appear on Provenance detail pages\nExample: visit http://206.12.93.196/provenance/3665 (while logged out), click on first link. We get a 403 Forbidden error, since the source is unpublished.\r\n\r\nUnpublished sources should not be listed on the Provenance Detail page.\r\n\r\nCredit to @zhannaklimanova and her link checker script for catching this bug!\n", "before_files": [{"content": "from django.views.generic import DetailView\nfrom main_app.models import Provenance\n\n\nclass ProvenanceDetailView(DetailView):\n model = Provenance\n context_object_name = \"provenance\"\n template_name = \"provenance_detail.html\"\n", "path": "django/cantusdb_project/main_app/views/provenance.py"}, {"content": "from django.views.generic import DetailView\nfrom main_app.models import Century, Source\nfrom typing import Any\n\n\nclass CenturyDetailView(DetailView):\n model = Century\n context_object_name = \"century\"\n template_name = \"century_detail.html\"\n\n def get_context_data(self, **kwargs: Any) -> dict[str, Any]:\n context = super().get_context_data(**kwargs)\n century = self.get_object()\n user = self.request.user\n display_unpublished = user.is_authenticated\n sources = Source.objects.filter(century=century)\n if not display_unpublished:\n sources = sources.filter(published=True)\n sources = sources.only(\"title\", \"id\")\n context[\"sources\"] = sources\n return context\n", "path": "django/cantusdb_project/main_app/views/century.py"}], "after_files": [{"content": "from django.views.generic import DetailView\nfrom main_app.models import Provenance, Source\nfrom typing import Any\n\n\nclass ProvenanceDetailView(DetailView):\n model = Provenance\n context_object_name = \"provenance\"\n template_name = \"provenance_detail.html\"\n\n def get_context_data(self, **kwargs: Any) -> dict[str, Any]:\n context = super().get_context_data(**kwargs)\n provenance = self.get_object()\n user = 
self.request.user\n display_unpublished = user.is_authenticated\n sources = Source.objects.filter(provenance=provenance)\n if not display_unpublished:\n sources = sources.filter(published=True)\n sources = sources.only(\"title\", \"id\", \"siglum\")\n context[\"sources\"] = sources\n return context\n", "path": "django/cantusdb_project/main_app/views/provenance.py"}, {"content": "from django.views.generic import DetailView\nfrom main_app.models import Century, Source\nfrom typing import Any\n\n\nclass CenturyDetailView(DetailView):\n model = Century\n context_object_name = \"century\"\n template_name = \"century_detail.html\"\n\n def get_context_data(self, **kwargs: Any) -> dict[str, Any]:\n context = super().get_context_data(**kwargs)\n century = self.get_object()\n user = self.request.user\n display_unpublished = user.is_authenticated\n sources = Source.objects.filter(century=century)\n if not display_unpublished:\n sources = sources.filter(published=True)\n sources = sources.only(\"title\", \"id\", \"siglum\")\n context[\"sources\"] = sources\n return context\n", "path": "django/cantusdb_project/main_app/views/century.py"}]} | 650 | 425 |
gh_patches_debug_39481 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3317 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider lowes is broken
During the global build at 2021-06-02-14-42-40, spider **lowes** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/lowes.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/lowes.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/lowes.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/lowes.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import re
4 import json
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8
9 day_mapping = {'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th', 'Friday': 'Fr', 'Saturday': 'Sa',
10 'Sunday': 'Su'}
11
12
13 class LowesSpider(scrapy.Spider):
14 """"This spider scrapes Lowes retail store locations"""
15 name = "lowes"
16 item_attributes = { 'brand': "Lowe's", 'brand_wikidata': "Q1373493" }
17 allowed_domains = ["lowes.com"]
18 start_urls = ('https://www.lowes.com/Lowes-Stores',)
19 download_delay = 0.5
20
21 custom_settings = {
22 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
23 }
24
25 def parse_hours(self, store_hours):
26 opening_hours = OpeningHours()
27
28 for weekday in store_hours:
29 day = weekday.get('day').get('day')
30 open_time = weekday.get('day').get('open')
31 hour, minute, sec = open_time.split('.')
32 open_time_formatted = hour + ':' + minute
33
34 close = weekday.get('day').get('close')
35 hour, minute, sec = close.split('.')
36 close_time_formatted = hour + ':' + minute
37
38 if close_time_formatted in {'00:00', '24:00'}:
39 close_time_formatted = "23:59"
40
41 opening_hours.add_range(day=day_mapping[day],
42 open_time=open_time_formatted,
43 close_time=close_time_formatted)
44
45 return opening_hours.as_opening_hours()
46
47 def parse_store(self, response):
48 ref = re.search(r'.+/(.+)', response.url).group(1)
49
50 script_content = response.xpath('//script[contains(text(),"storeHours")]/text()').extract_first()
51 if not script_content:
52 return
53
54 # effectively strip off leading "window.__PRELOADED_STATE__ = " where
55 # the rest is a json blob
56 script_data = script_content.split(" = ", 1)[-1]
57 json_data = json.loads(script_data)
58 store_hours = json_data.get('storeHours')
59
60 state_texts = response.xpath('//span[@itemprop="addressRegion"]/text()').extract()
61 properties = {
62 'lat': float(json_data['storeDetails']['lat']),
63 'lon': float(json_data['storeDetails']['long']),
64 'ref': ref,
65 'addr_full': response.xpath('normalize-space(//span[@itemprop="streetAddress"]/text())').extract_first(),
66 'city': response.xpath('normalize-space(//span[@itemprop="addressLocality"]/text())').extract_first(),
67 'state': " ".join(text.strip() for text in state_texts if text.strip()),
68 'postcode': response.xpath('normalize-space(//span[@itemprop="postalCode"]/text())').extract_first(),
69 'phone': response.xpath('normalize-space(//meta[@itemprop="telephone"]/@content)').extract_first(),
70 'website': response.request.url,
71 'opening_hours': self.parse_hours(store_hours),
72 'extras': {
73 'amenity:toilets': True,
74 },
75 }
76
77 yield GeojsonPointItem(**properties)
78
79 def parse_state(self, response):
80 city_urls = response.xpath('//div[@class="v-spacing-small"]/a/@href').extract()
81 for path in city_urls:
82 yield scrapy.Request(response.urljoin(path), callback=self.parse_store)
83
84 def parse(self, response):
85 urls = response.xpath('//div[@id="mainContent"]//li[@role="listitem"]/a/@href').extract()
86 for path in urls:
87 yield scrapy.Request(response.urljoin(path), callback=self.parse_state)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/lowes.py b/locations/spiders/lowes.py
--- a/locations/spiders/lowes.py
+++ b/locations/spiders/lowes.py
@@ -6,16 +6,23 @@
from locations.hours import OpeningHours
-day_mapping = {'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th', 'Friday': 'Fr', 'Saturday': 'Sa',
- 'Sunday': 'Su'}
+day_mapping = {
+ 'Monday': 'Mo',
+ 'Tuesday': 'Tu',
+ 'Wednesday': 'We',
+ 'Thursday': 'Th',
+ 'Friday': 'Fr',
+ 'Saturday': 'Sa',
+ 'Sunday': 'Su',
+}
class LowesSpider(scrapy.Spider):
""""This spider scrapes Lowes retail store locations"""
name = "lowes"
- item_attributes = { 'brand': "Lowe's", 'brand_wikidata': "Q1373493" }
+ item_attributes = {'brand': "Lowe's", 'brand_wikidata': "Q1373493"}
allowed_domains = ["lowes.com"]
- start_urls = ('https://www.lowes.com/Lowes-Stores',)
+ start_urls = ('https://www.lowes.com/sitemap/store0.xml',)
download_delay = 0.5
custom_settings = {
@@ -59,14 +66,14 @@
state_texts = response.xpath('//span[@itemprop="addressRegion"]/text()').extract()
properties = {
- 'lat': float(json_data['storeDetails']['lat']),
- 'lon': float(json_data['storeDetails']['long']),
- 'ref': ref,
- 'addr_full': response.xpath('normalize-space(//span[@itemprop="streetAddress"]/text())').extract_first(),
- 'city': response.xpath('normalize-space(//span[@itemprop="addressLocality"]/text())').extract_first(),
- 'state': " ".join(text.strip() for text in state_texts if text.strip()),
- 'postcode': response.xpath('normalize-space(//span[@itemprop="postalCode"]/text())').extract_first(),
- 'phone': response.xpath('normalize-space(//meta[@itemprop="telephone"]/@content)').extract_first(),
+ 'lat': json_data['storeDetails']['lat'],
+ 'lon': json_data['storeDetails']['long'],
+ 'ref': json_data['storeDetails']['id'],
+ 'addr_full': json_data['storeDetails']['address'],
+ 'city': json_data['storeDetails']['city'],
+ 'state': json_data['storeDetails']['state'],
+ 'postcode': json_data['storeDetails']['zip'],
+ 'phone': json_data['storeDetails']['phone'],
'website': response.request.url,
'opening_hours': self.parse_hours(store_hours),
'extras': {
@@ -76,12 +83,9 @@
yield GeojsonPointItem(**properties)
- def parse_state(self, response):
- city_urls = response.xpath('//div[@class="v-spacing-small"]/a/@href').extract()
- for path in city_urls:
- yield scrapy.Request(response.urljoin(path), callback=self.parse_store)
-
def parse(self, response):
- urls = response.xpath('//div[@id="mainContent"]//li[@role="listitem"]/a/@href').extract()
- for path in urls:
- yield scrapy.Request(response.urljoin(path), callback=self.parse_state)
+ response.selector.remove_namespaces()
+ urls = response.xpath('//url/loc/text()').extract()
+
+ for url in urls:
+ yield scrapy.Request(url, callback=self.parse_store)
| {"golden_diff": "diff --git a/locations/spiders/lowes.py b/locations/spiders/lowes.py\n--- a/locations/spiders/lowes.py\n+++ b/locations/spiders/lowes.py\n@@ -6,16 +6,23 @@\n from locations.hours import OpeningHours\n \n \n-day_mapping = {'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th', 'Friday': 'Fr', 'Saturday': 'Sa',\n- 'Sunday': 'Su'}\n+day_mapping = {\n+ 'Monday': 'Mo',\n+ 'Tuesday': 'Tu',\n+ 'Wednesday': 'We',\n+ 'Thursday': 'Th',\n+ 'Friday': 'Fr',\n+ 'Saturday': 'Sa',\n+ 'Sunday': 'Su',\n+}\n \n \n class LowesSpider(scrapy.Spider):\n \"\"\"\"This spider scrapes Lowes retail store locations\"\"\"\n name = \"lowes\"\n- item_attributes = { 'brand': \"Lowe's\", 'brand_wikidata': \"Q1373493\" }\n+ item_attributes = {'brand': \"Lowe's\", 'brand_wikidata': \"Q1373493\"}\n allowed_domains = [\"lowes.com\"]\n- start_urls = ('https://www.lowes.com/Lowes-Stores',)\n+ start_urls = ('https://www.lowes.com/sitemap/store0.xml',)\n download_delay = 0.5\n \n custom_settings = {\n@@ -59,14 +66,14 @@\n \n state_texts = response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract()\n properties = {\n- 'lat': float(json_data['storeDetails']['lat']),\n- 'lon': float(json_data['storeDetails']['long']),\n- 'ref': ref,\n- 'addr_full': response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/text())').extract_first(),\n- 'city': response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]/text())').extract_first(),\n- 'state': \" \".join(text.strip() for text in state_texts if text.strip()),\n- 'postcode': response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]/text())').extract_first(),\n- 'phone': response.xpath('normalize-space(//meta[@itemprop=\"telephone\"]/@content)').extract_first(),\n+ 'lat': json_data['storeDetails']['lat'],\n+ 'lon': json_data['storeDetails']['long'],\n+ 'ref': json_data['storeDetails']['id'],\n+ 'addr_full': json_data['storeDetails']['address'],\n+ 'city': json_data['storeDetails']['city'],\n+ 'state': json_data['storeDetails']['state'],\n+ 'postcode': json_data['storeDetails']['zip'],\n+ 'phone': json_data['storeDetails']['phone'],\n 'website': response.request.url,\n 'opening_hours': self.parse_hours(store_hours),\n 'extras': {\n@@ -76,12 +83,9 @@\n \n yield GeojsonPointItem(**properties)\n \n- def parse_state(self, response):\n- city_urls = response.xpath('//div[@class=\"v-spacing-small\"]/a/@href').extract()\n- for path in city_urls:\n- yield scrapy.Request(response.urljoin(path), callback=self.parse_store)\n-\n def parse(self, response):\n- urls = response.xpath('//div[@id=\"mainContent\"]//li[@role=\"listitem\"]/a/@href').extract()\n- for path in urls:\n- yield scrapy.Request(response.urljoin(path), callback=self.parse_state)\n+ response.selector.remove_namespaces()\n+ urls = response.xpath('//url/loc/text()').extract()\n+\n+ for url in urls:\n+ yield scrapy.Request(url, callback=self.parse_store)\n", "issue": "Spider lowes is broken\nDuring the global build at 2021-06-02-14-42-40, spider **lowes** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/lowes.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/lowes.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/lowes.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\nimport json\nfrom locations.items import GeojsonPointItem\nfrom 
locations.hours import OpeningHours\n\n\nday_mapping = {'Monday': 'Mo', 'Tuesday': 'Tu', 'Wednesday': 'We', 'Thursday': 'Th', 'Friday': 'Fr', 'Saturday': 'Sa',\n 'Sunday': 'Su'}\n\n\nclass LowesSpider(scrapy.Spider):\n \"\"\"\"This spider scrapes Lowes retail store locations\"\"\"\n name = \"lowes\"\n item_attributes = { 'brand': \"Lowe's\", 'brand_wikidata': \"Q1373493\" }\n allowed_domains = [\"lowes.com\"]\n start_urls = ('https://www.lowes.com/Lowes-Stores',)\n download_delay = 0.5\n\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n\n def parse_hours(self, store_hours):\n opening_hours = OpeningHours()\n\n for weekday in store_hours:\n day = weekday.get('day').get('day')\n open_time = weekday.get('day').get('open')\n hour, minute, sec = open_time.split('.')\n open_time_formatted = hour + ':' + minute\n\n close = weekday.get('day').get('close')\n hour, minute, sec = close.split('.')\n close_time_formatted = hour + ':' + minute\n\n if close_time_formatted in {'00:00', '24:00'}:\n close_time_formatted = \"23:59\"\n\n opening_hours.add_range(day=day_mapping[day],\n open_time=open_time_formatted,\n close_time=close_time_formatted)\n\n return opening_hours.as_opening_hours()\n\n def parse_store(self, response):\n ref = re.search(r'.+/(.+)', response.url).group(1)\n\n script_content = response.xpath('//script[contains(text(),\"storeHours\")]/text()').extract_first()\n if not script_content:\n return\n\n # effectively strip off leading \"window.__PRELOADED_STATE__ = \" where\n # the rest is a json blob\n script_data = script_content.split(\" = \", 1)[-1]\n json_data = json.loads(script_data)\n store_hours = json_data.get('storeHours')\n\n state_texts = response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract()\n properties = {\n 'lat': float(json_data['storeDetails']['lat']),\n 'lon': float(json_data['storeDetails']['long']),\n 'ref': ref,\n 'addr_full': response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/text())').extract_first(),\n 'city': response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]/text())').extract_first(),\n 'state': \" \".join(text.strip() for text in state_texts if text.strip()),\n 'postcode': response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]/text())').extract_first(),\n 'phone': response.xpath('normalize-space(//meta[@itemprop=\"telephone\"]/@content)').extract_first(),\n 'website': response.request.url,\n 'opening_hours': self.parse_hours(store_hours),\n 'extras': {\n 'amenity:toilets': True,\n },\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse_state(self, response):\n city_urls = response.xpath('//div[@class=\"v-spacing-small\"]/a/@href').extract()\n for path in city_urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_store)\n\n def parse(self, response):\n urls = response.xpath('//div[@id=\"mainContent\"]//li[@role=\"listitem\"]/a/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_state)\n", "path": "locations/spiders/lowes.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\nimport json\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nday_mapping = {\n 'Monday': 'Mo',\n 'Tuesday': 'Tu',\n 'Wednesday': 'We',\n 'Thursday': 'Th',\n 'Friday': 'Fr',\n 'Saturday': 'Sa',\n 'Sunday': 'Su',\n}\n\n\nclass LowesSpider(scrapy.Spider):\n \"\"\"\"This spider scrapes 
Lowes retail store locations\"\"\"\n name = \"lowes\"\n item_attributes = {'brand': \"Lowe's\", 'brand_wikidata': \"Q1373493\"}\n allowed_domains = [\"lowes.com\"]\n start_urls = ('https://www.lowes.com/sitemap/store0.xml',)\n download_delay = 0.5\n\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n\n def parse_hours(self, store_hours):\n opening_hours = OpeningHours()\n\n for weekday in store_hours:\n day = weekday.get('day').get('day')\n open_time = weekday.get('day').get('open')\n hour, minute, sec = open_time.split('.')\n open_time_formatted = hour + ':' + minute\n\n close = weekday.get('day').get('close')\n hour, minute, sec = close.split('.')\n close_time_formatted = hour + ':' + minute\n\n if close_time_formatted in {'00:00', '24:00'}:\n close_time_formatted = \"23:59\"\n\n opening_hours.add_range(day=day_mapping[day],\n open_time=open_time_formatted,\n close_time=close_time_formatted)\n\n return opening_hours.as_opening_hours()\n\n def parse_store(self, response):\n ref = re.search(r'.+/(.+)', response.url).group(1)\n\n script_content = response.xpath('//script[contains(text(),\"storeHours\")]/text()').extract_first()\n if not script_content:\n return\n\n # effectively strip off leading \"window.__PRELOADED_STATE__ = \" where\n # the rest is a json blob\n script_data = script_content.split(\" = \", 1)[-1]\n json_data = json.loads(script_data)\n store_hours = json_data.get('storeHours')\n\n state_texts = response.xpath('//span[@itemprop=\"addressRegion\"]/text()').extract()\n properties = {\n 'lat': json_data['storeDetails']['lat'],\n 'lon': json_data['storeDetails']['long'],\n 'ref': json_data['storeDetails']['id'],\n 'addr_full': json_data['storeDetails']['address'],\n 'city': json_data['storeDetails']['city'],\n 'state': json_data['storeDetails']['state'],\n 'postcode': json_data['storeDetails']['zip'],\n 'phone': json_data['storeDetails']['phone'],\n 'website': response.request.url,\n 'opening_hours': self.parse_hours(store_hours),\n 'extras': {\n 'amenity:toilets': True,\n },\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n response.selector.remove_namespaces()\n urls = response.xpath('//url/loc/text()').extract()\n\n for url in urls:\n yield scrapy.Request(url, callback=self.parse_store)\n", "path": "locations/spiders/lowes.py"}]} | 1,487 | 830 |
gh_patches_debug_28882 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-1391 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WYSIWYG for static pages
Client-side WYSIWYG:
- http://sofish.github.io/pen/
- https://github.com/mduvall/grande.js
- http://imperavi.com/redactor/
- https://github.com/tholman/zenpen
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/flatpages/views.py`
Content:
```
1 from rest_framework import viewsets
2
3 from geotrek.flatpages.serializers import FlatPageSerializer
4 from geotrek.flatpages import models as flatpages_models
5
6
7 class FlatPageViewSet(viewsets.ModelViewSet):
8 """
9 A viewset for viewing and editing flat pages instances.
10 """
11 serializer_class = FlatPageSerializer
12 queryset = flatpages_models.FlatPage.objects.all()
13
```
Path: `geotrek/flatpages/admin.py`
Content:
```
1 from django.contrib import admin
2 from django.conf import settings
3
4 from modeltranslation.admin import TranslationAdmin
5
6 from geotrek.flatpages import models as flatpages_models
7
8
9 class FlatPagesAdmin(TranslationAdmin):
10 list_display = ('title', 'published', 'publication_date', 'target')
11 search_fields = ('title', 'content')
12
13
14 if settings.FLATPAGES_ENABLED:
15 admin.site.register(flatpages_models.FlatPage, FlatPagesAdmin)
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geotrek/flatpages/admin.py b/geotrek/flatpages/admin.py
--- a/geotrek/flatpages/admin.py
+++ b/geotrek/flatpages/admin.py
@@ -2,6 +2,7 @@
from django.conf import settings
from modeltranslation.admin import TranslationAdmin
+from tinymce.widgets import TinyMCE
from geotrek.flatpages import models as flatpages_models
@@ -10,6 +11,11 @@
list_display = ('title', 'published', 'publication_date', 'target')
search_fields = ('title', 'content')
+ def formfield_for_dbfield(self, db_field, **kwargs):
+ if db_field.name[:7] == 'content':
+ return db_field.formfield(widget=TinyMCE)
+ return super(FlatPagesAdmin, self).formfield_for_dbfield(db_field, **kwargs)
+
if settings.FLATPAGES_ENABLED:
admin.site.register(flatpages_models.FlatPage, FlatPagesAdmin)
diff --git a/geotrek/flatpages/views.py b/geotrek/flatpages/views.py
--- a/geotrek/flatpages/views.py
+++ b/geotrek/flatpages/views.py
@@ -1,3 +1,4 @@
+from rest_framework import permissions as rest_permissions
from rest_framework import viewsets
from geotrek.flatpages.serializers import FlatPageSerializer
@@ -8,5 +9,9 @@
"""
A viewset for viewing and editing flat pages instances.
"""
+ model = flatpages_models.FlatPage
serializer_class = FlatPageSerializer
- queryset = flatpages_models.FlatPage.objects.all()
+ permission_classes = [rest_permissions.DjangoModelPermissionsOrAnonReadOnly]
+
+ def get_queryset(self):
+ return flatpages_models.FlatPage.objects.filter(published=True)
| {"golden_diff": "diff --git a/geotrek/flatpages/admin.py b/geotrek/flatpages/admin.py\n--- a/geotrek/flatpages/admin.py\n+++ b/geotrek/flatpages/admin.py\n@@ -2,6 +2,7 @@\n from django.conf import settings\n \n from modeltranslation.admin import TranslationAdmin\n+from tinymce.widgets import TinyMCE\n \n from geotrek.flatpages import models as flatpages_models\n \n@@ -10,6 +11,11 @@\n list_display = ('title', 'published', 'publication_date', 'target')\n search_fields = ('title', 'content')\n \n+ def formfield_for_dbfield(self, db_field, **kwargs):\n+ if db_field.name[:7] == 'content':\n+ return db_field.formfield(widget=TinyMCE)\n+ return super(FlatPagesAdmin, self).formfield_for_dbfield(db_field, **kwargs)\n+\n \n if settings.FLATPAGES_ENABLED:\n admin.site.register(flatpages_models.FlatPage, FlatPagesAdmin)\ndiff --git a/geotrek/flatpages/views.py b/geotrek/flatpages/views.py\n--- a/geotrek/flatpages/views.py\n+++ b/geotrek/flatpages/views.py\n@@ -1,3 +1,4 @@\n+from rest_framework import permissions as rest_permissions\n from rest_framework import viewsets\n \n from geotrek.flatpages.serializers import FlatPageSerializer\n@@ -8,5 +9,9 @@\n \"\"\"\n A viewset for viewing and editing flat pages instances.\n \"\"\"\n+ model = flatpages_models.FlatPage\n serializer_class = FlatPageSerializer\n- queryset = flatpages_models.FlatPage.objects.all()\n+ permission_classes = [rest_permissions.DjangoModelPermissionsOrAnonReadOnly]\n+\n+ def get_queryset(self):\n+ return flatpages_models.FlatPage.objects.filter(published=True)\n", "issue": "WYSIWYG for static pages\nClient-side WYSIWYG : \n- http://sofish.github.io/pen/\n- https://github.com/mduvall/grande.js\n- http://imperavi.com/redactor/\n- https://github.com/tholman/zenpen\n\n", "before_files": [{"content": "from rest_framework import viewsets\n\nfrom geotrek.flatpages.serializers import FlatPageSerializer\nfrom geotrek.flatpages import models as flatpages_models\n\n\nclass FlatPageViewSet(viewsets.ModelViewSet):\n \"\"\"\n A viewset for viewing and editing flat pages instances.\n \"\"\"\n serializer_class = FlatPageSerializer\n queryset = flatpages_models.FlatPage.objects.all()\n", "path": "geotrek/flatpages/views.py"}, {"content": "from django.contrib import admin\nfrom django.conf import settings\n\nfrom modeltranslation.admin import TranslationAdmin\n\nfrom geotrek.flatpages import models as flatpages_models\n\n\nclass FlatPagesAdmin(TranslationAdmin):\n list_display = ('title', 'published', 'publication_date', 'target')\n search_fields = ('title', 'content')\n\n\nif settings.FLATPAGES_ENABLED:\n admin.site.register(flatpages_models.FlatPage, FlatPagesAdmin)\n", "path": "geotrek/flatpages/admin.py"}], "after_files": [{"content": "from rest_framework import permissions as rest_permissions\nfrom rest_framework import viewsets\n\nfrom geotrek.flatpages.serializers import FlatPageSerializer\nfrom geotrek.flatpages import models as flatpages_models\n\n\nclass FlatPageViewSet(viewsets.ModelViewSet):\n \"\"\"\n A viewset for viewing and editing flat pages instances.\n \"\"\"\n model = flatpages_models.FlatPage\n serializer_class = FlatPageSerializer\n permission_classes = [rest_permissions.DjangoModelPermissionsOrAnonReadOnly]\n\n def get_queryset(self):\n return flatpages_models.FlatPage.objects.filter(published=True)\n", "path": "geotrek/flatpages/views.py"}, {"content": "from django.contrib import admin\nfrom django.conf import settings\n\nfrom modeltranslation.admin import TranslationAdmin\nfrom tinymce.widgets import TinyMCE\n\nfrom 
geotrek.flatpages import models as flatpages_models\n\n\nclass FlatPagesAdmin(TranslationAdmin):\n list_display = ('title', 'published', 'publication_date', 'target')\n search_fields = ('title', 'content')\n\n def formfield_for_dbfield(self, db_field, **kwargs):\n if db_field.name[:7] == 'content':\n return db_field.formfield(widget=TinyMCE)\n return super(FlatPagesAdmin, self).formfield_for_dbfield(db_field, **kwargs)\n\n\nif settings.FLATPAGES_ENABLED:\n admin.site.register(flatpages_models.FlatPage, FlatPagesAdmin)\n", "path": "geotrek/flatpages/admin.py"}]} | 559 | 400 |
gh_patches_debug_7795 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-4076 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
azure - unpin EventGrid SDK version
We need AdvancedFilters to be added to the stable version.
https://pypi.org/project/azure-mgmt-eventgrid/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/c7n_azure/setup.py`
Content:
```
1 # Copyright 2018 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from io import open
16 from os import path
17 from setuptools import setup, find_packages
18
19 # read the contents of your README file
20 this_directory = path.abspath(path.dirname(__file__))
21 readme = path.join(this_directory, 'readme.md')
22 long_description = ''
23 if path.exists(readme):
24 with open(readme, encoding='utf-8') as f:
25 long_description = f.read()
26
27 setup(
28 name="c7n_azure",
29 version='0.5.3',
30 description="Cloud Custodian - Azure Support",
31 long_description=long_description,
32 long_description_content_type='text/markdown',
33 classifiers=[
34 "Topic :: System :: Systems Administration",
35 "Topic :: System :: Distributed Computing"
36 ],
37 url="https://github.com/cloud-custodian/cloud-custodian",
38 license="Apache-2.0",
39 packages=find_packages(),
40 entry_points={
41 "custodian.resources": [
42 'azure = c7n_azure.entry:initialize_azure']
43 },
44 install_requires=["azure-mgmt-authorization",
45 "azure-mgmt-applicationinsights==0.1.1",
46 "azure-mgmt-batch",
47 "azure-mgmt-cognitiveservices",
48 "azure-mgmt-cosmosdb",
49 "azure-mgmt-compute",
50 "azure-mgmt-cdn",
51 "azure-mgmt-containerregistry",
52 "azure-mgmt-containerservice",
53 "azure-mgmt-datalake-store",
54 "azure-mgmt-datafactory",
55 "azure-mgmt-iothub",
56 "azure-mgmt-keyvault",
57 "azure-mgmt-managementgroups",
58 "azure-mgmt-network",
59 "azure-mgmt-redis",
60 "azure-mgmt-resource==2.1.0",
61 "azure-mgmt-sql",
62 "azure-mgmt-storage",
63 "azure-mgmt-web",
64 "azure-mgmt-monitor",
65 "azure-mgmt-policyinsights",
66 "azure-mgmt-eventgrid==2.0.0rc2", # RC2 supports AdvancedFilters
67 "azure-graphrbac",
68 "azure-keyvault",
69 "azure-storage-blob",
70 "azure-storage-queue",
71 "distlib",
72 "requests",
73 "PyJWT",
74 "c7n",
75 "requests",
76 "azure-cli-core",
77 "adal",
78 "backports.functools_lru_cache",
79 "futures>=3.1.1",
80 "netaddr"],
81 package_data={str(''): [str('function_binding_resources/bin/*.dll'),
82 str('function_binding_resources/*.csproj'),
83 str('function_binding_resources/bin/*.json')]}
84 )
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/c7n_azure/setup.py b/tools/c7n_azure/setup.py
--- a/tools/c7n_azure/setup.py
+++ b/tools/c7n_azure/setup.py
@@ -63,7 +63,7 @@
"azure-mgmt-web",
"azure-mgmt-monitor",
"azure-mgmt-policyinsights",
- "azure-mgmt-eventgrid==2.0.0rc2", # RC2 supports AdvancedFilters
+ "azure-mgmt-eventgrid",
"azure-graphrbac",
"azure-keyvault",
"azure-storage-blob",
| {"golden_diff": "diff --git a/tools/c7n_azure/setup.py b/tools/c7n_azure/setup.py\n--- a/tools/c7n_azure/setup.py\n+++ b/tools/c7n_azure/setup.py\n@@ -63,7 +63,7 @@\n \"azure-mgmt-web\",\n \"azure-mgmt-monitor\",\n \"azure-mgmt-policyinsights\",\n- \"azure-mgmt-eventgrid==2.0.0rc2\", # RC2 supports AdvancedFilters\n+ \"azure-mgmt-eventgrid\",\n \"azure-graphrbac\",\n \"azure-keyvault\",\n \"azure-storage-blob\",\n", "issue": "azure - unpinn EventGrid SDK version\nWe need AdvancedFilters to be added to the stable version.\r\n\r\nhttps://pypi.org/project/azure-mgmt-eventgrid/\n", "before_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom io import open\nfrom os import path\nfrom setuptools import setup, find_packages\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nreadme = path.join(this_directory, 'readme.md')\nlong_description = ''\nif path.exists(readme):\n with open(readme, encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name=\"c7n_azure\",\n version='0.5.3',\n description=\"Cloud Custodian - Azure Support\",\n long_description=long_description,\n long_description_content_type='text/markdown',\n classifiers=[\n \"Topic :: System :: Systems Administration\",\n \"Topic :: System :: Distributed Computing\"\n ],\n url=\"https://github.com/cloud-custodian/cloud-custodian\",\n license=\"Apache-2.0\",\n packages=find_packages(),\n entry_points={\n \"custodian.resources\": [\n 'azure = c7n_azure.entry:initialize_azure']\n },\n install_requires=[\"azure-mgmt-authorization\",\n \"azure-mgmt-applicationinsights==0.1.1\",\n \"azure-mgmt-batch\",\n \"azure-mgmt-cognitiveservices\",\n \"azure-mgmt-cosmosdb\",\n \"azure-mgmt-compute\",\n \"azure-mgmt-cdn\",\n \"azure-mgmt-containerregistry\",\n \"azure-mgmt-containerservice\",\n \"azure-mgmt-datalake-store\",\n \"azure-mgmt-datafactory\",\n \"azure-mgmt-iothub\",\n \"azure-mgmt-keyvault\",\n \"azure-mgmt-managementgroups\",\n \"azure-mgmt-network\",\n \"azure-mgmt-redis\",\n \"azure-mgmt-resource==2.1.0\",\n \"azure-mgmt-sql\",\n \"azure-mgmt-storage\",\n \"azure-mgmt-web\",\n \"azure-mgmt-monitor\",\n \"azure-mgmt-policyinsights\",\n \"azure-mgmt-eventgrid==2.0.0rc2\", # RC2 supports AdvancedFilters\n \"azure-graphrbac\",\n \"azure-keyvault\",\n \"azure-storage-blob\",\n \"azure-storage-queue\",\n \"distlib\",\n \"requests\",\n \"PyJWT\",\n \"c7n\",\n \"requests\",\n \"azure-cli-core\",\n \"adal\",\n \"backports.functools_lru_cache\",\n \"futures>=3.1.1\",\n \"netaddr\"],\n package_data={str(''): [str('function_binding_resources/bin/*.dll'),\n str('function_binding_resources/*.csproj'),\n str('function_binding_resources/bin/*.json')]}\n)\n", "path": "tools/c7n_azure/setup.py"}], "after_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy 
of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom io import open\nfrom os import path\nfrom setuptools import setup, find_packages\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nreadme = path.join(this_directory, 'readme.md')\nlong_description = ''\nif path.exists(readme):\n with open(readme, encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name=\"c7n_azure\",\n version='0.5.3',\n description=\"Cloud Custodian - Azure Support\",\n long_description=long_description,\n long_description_content_type='text/markdown',\n classifiers=[\n \"Topic :: System :: Systems Administration\",\n \"Topic :: System :: Distributed Computing\"\n ],\n url=\"https://github.com/cloud-custodian/cloud-custodian\",\n license=\"Apache-2.0\",\n packages=find_packages(),\n entry_points={\n \"custodian.resources\": [\n 'azure = c7n_azure.entry:initialize_azure']\n },\n install_requires=[\"azure-mgmt-authorization\",\n \"azure-mgmt-applicationinsights==0.1.1\",\n \"azure-mgmt-batch\",\n \"azure-mgmt-cognitiveservices\",\n \"azure-mgmt-cosmosdb\",\n \"azure-mgmt-compute\",\n \"azure-mgmt-cdn\",\n \"azure-mgmt-containerregistry\",\n \"azure-mgmt-containerservice\",\n \"azure-mgmt-datalake-store\",\n \"azure-mgmt-datafactory\",\n \"azure-mgmt-iothub\",\n \"azure-mgmt-keyvault\",\n \"azure-mgmt-managementgroups\",\n \"azure-mgmt-network\",\n \"azure-mgmt-redis\",\n \"azure-mgmt-resource==2.1.0\",\n \"azure-mgmt-sql\",\n \"azure-mgmt-storage\",\n \"azure-mgmt-web\",\n \"azure-mgmt-monitor\",\n \"azure-mgmt-policyinsights\",\n \"azure-mgmt-eventgrid\",\n \"azure-graphrbac\",\n \"azure-keyvault\",\n \"azure-storage-blob\",\n \"azure-storage-queue\",\n \"distlib\",\n \"requests\",\n \"PyJWT\",\n \"c7n\",\n \"requests\",\n \"azure-cli-core\",\n \"adal\",\n \"backports.functools_lru_cache\",\n \"futures>=3.1.1\",\n \"netaddr\"],\n package_data={str(''): [str('function_binding_resources/bin/*.dll'),\n str('function_binding_resources/*.csproj'),\n str('function_binding_resources/bin/*.json')]}\n)\n", "path": "tools/c7n_azure/setup.py"}]} | 1,147 | 133 |
gh_patches_debug_41534 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-8855 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CT-3107] [Bug] nested dependencies not installed when package is a tarball
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
when running `dbt deps` to install a package specified as a tarball, dbt doesn't install nested dependencies (i.e. packages specified in the imported package's `packages.yml` file) as it does when installing a package from local, git or the dbt hub.
### Expected Behavior
consistent behaviour across import methods regarding nested dependencies. dbt should install any dependencies specified in the tarball project's packages.yml file.
### Steps To Reproduce
This can be reproduced by importing the tarball of a package with nested dependencies. In this case, importing dbt_expectations should cause dbt_date to be installed, as it's included in the package's dependencies here: https://github.com/calogica/dbt-expectations/blob/0.9.0/packages.yml
Steps:
1. create a `packages.yml` file in a project with the following structure:
``` yaml
packages:
- tarball: "https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz"
name: "dbt_expectations"
```
2. run `dbt deps`
running dbt deps will only install dbt_expectations:
```
20:08:55 Running with dbt=1.5.6
20:08:55 Installing dbt_expectations
20:08:56 Installed from tarball (url: https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz)
```
compare this to installing the same package from dbt hub, with the following `packages.yml`:
``` yaml
packages:
- package: calogica/dbt_expectations
version: "0.9.0"
```
```
20:14:24 Running with dbt=1.5.6
20:14:24 Installing calogica/dbt_expectations
20:14:25 Installed from version 0.9.0
20:14:25 Up to date!
20:14:25 Installing calogica/dbt_date
20:14:25 Installed from version 0.8.1
20:14:25 Updated version available: 0.9.1
20:14:25
20:14:25 Updates available for packages: ['calogica/dbt_date']
Update your versions in packages.yml, then run dbt deps
```
### Relevant log output
_No response_
### Environment
```markdown
- OS: Mac OS 13.5.2 (22G91)
- Python: 3.9
- dbt: 1.5.6
```
### Which database adapter are you using with dbt?
snowflake
### Additional Context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dbt/deps/tarball.py`
Content:
```
1 from typing import Dict
2
3 from dbt.contracts.project import RegistryPackageMetadata, TarballPackage
4 from dbt.deps.base import PinnedPackage, UnpinnedPackage
5
6
7 class TarballPackageMixin:
8 def __init__(self, tarball: str) -> None:
9 super().__init__()
10 self.tarball = tarball
11
12 @property
13 def name(self):
14 return self.tarball
15
16 def source_type(self) -> str:
17 return "tarball"
18
19
20 class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):
21 def __init__(self, tarball: str, package: str) -> None:
22 super().__init__(tarball)
23 # setup to recycle RegistryPinnedPackage fns
24 self.package = package
25 self.version = "tarball"
26
27 @property
28 def name(self):
29 return self.package
30
31 def to_dict(self) -> Dict[str, str]:
32 return {
33 "tarball": self.tarball,
34 "version": self.version,
35 "package": self.package,
36 }
37
38 def get_version(self):
39 return self.version
40
41 def nice_version_name(self):
42 return f"tarball (url: {self.tarball})"
43
44 def _fetch_metadata(self, project, renderer):
45 """
46 recycle RegistryPackageMetadata so that we can use the install and
47 download_and_untar from RegistryPinnedPackage next.
48 build RegistryPackageMetadata from info passed via packages.yml since no
49 'metadata' service exists in this case.
50 """
51
52 dct = {
53 "name": self.package,
54 "packages": [], # note: required by RegistryPackageMetadata
55 "downloads": {"tarball": self.tarball},
56 }
57
58 return RegistryPackageMetadata.from_dict(dct)
59
60 def install(self, project, renderer):
61 self._install(project, renderer)
62
63
64 class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):
65 def __init__(
66 self,
67 tarball: str,
68 package: str,
69 ) -> None:
70 super().__init__(tarball)
71 # setup to recycle RegistryPinnedPackage fns
72 self.package = package
73 self.version = "tarball"
74
75 @classmethod
76 def from_contract(cls, contract: TarballPackage) -> "TarballUnpinnedPackage":
77 return cls(tarball=contract.tarball, package=contract.name)
78
79 def incorporate(self, other: "TarballUnpinnedPackage") -> "TarballUnpinnedPackage":
80 return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)
81
82 def resolved(self) -> TarballPinnedPackage:
83 return TarballPinnedPackage(tarball=self.tarball, package=self.package)
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/dbt/deps/tarball.py b/core/dbt/deps/tarball.py
--- a/core/dbt/deps/tarball.py
+++ b/core/dbt/deps/tarball.py
@@ -1,7 +1,14 @@
+import functools
+import os
+from pathlib import Path
from typing import Dict
-from dbt.contracts.project import RegistryPackageMetadata, TarballPackage
-from dbt.deps.base import PinnedPackage, UnpinnedPackage
+from dbt.clients import system
+from dbt.config.project import PartialProject
+from dbt.contracts.project import TarballPackage
+from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path
+from dbt.exceptions import DependencyError
+from dbt.utils import _connection_exception_retry as connection_exception_retry
class TarballPackageMixin:
@@ -20,9 +27,10 @@
class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):
def __init__(self, tarball: str, package: str) -> None:
super().__init__(tarball)
- # setup to recycle RegistryPinnedPackage fns
self.package = package
self.version = "tarball"
+ self.tar_path = os.path.join(Path(get_downloads_path()), self.package)
+ self.untarred_path = f"{self.tar_path}_untarred"
@property
def name(self):
@@ -31,8 +39,7 @@
def to_dict(self) -> Dict[str, str]:
return {
"tarball": self.tarball,
- "version": self.version,
- "package": self.package,
+ "name": self.package,
}
def get_version(self):
@@ -42,23 +49,38 @@
return f"tarball (url: {self.tarball})"
def _fetch_metadata(self, project, renderer):
- """
- recycle RegistryPackageMetadata so that we can use the install and
- download_and_untar from RegistryPinnedPackage next.
- build RegistryPackageMetadata from info passed via packages.yml since no
- 'metadata' service exists in this case.
- """
-
- dct = {
- "name": self.package,
- "packages": [], # note: required by RegistryPackageMetadata
- "downloads": {"tarball": self.tarball},
- }
-
- return RegistryPackageMetadata.from_dict(dct)
+ """Download and untar the project and parse metadata from the project folder."""
+ download_untar_fn = functools.partial(
+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name
+ )
+ connection_exception_retry(download_untar_fn, 5)
+
+ tar_contents = os.listdir(self.untarred_path)
+ if len(tar_contents) != 1:
+ raise DependencyError(
+ f"Incorrect structure for package extracted from {self.tarball}."
+ f"The extracted package needs to follow the structure {self.name}/<package_contents>."
+ )
+ child_folder = os.listdir(self.untarred_path)[0]
+
+ self.untarred_path = os.path.join(self.untarred_path, child_folder)
+ partial = PartialProject.from_project_root(self.untarred_path)
+ metadata = partial.render_package_metadata(renderer)
+ metadata.name = self.package if self.package else metadata.name
+ return metadata
def install(self, project, renderer):
- self._install(project, renderer)
+ download_untar_fn = functools.partial(
+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name
+ )
+ connection_exception_retry(download_untar_fn, 5)
+ dest_path = self.get_installation_path(project, renderer)
+ if os.path.exists(dest_path):
+ if system.path_is_symlink(dest_path):
+ system.remove_file(dest_path)
+ else:
+ system.rmdir(dest_path)
+ system.move(self.untarred_path, dest_path)
class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):
| {"golden_diff": "diff --git a/core/dbt/deps/tarball.py b/core/dbt/deps/tarball.py\n--- a/core/dbt/deps/tarball.py\n+++ b/core/dbt/deps/tarball.py\n@@ -1,7 +1,14 @@\n+import functools\n+import os\n+from pathlib import Path\n from typing import Dict\n \n-from dbt.contracts.project import RegistryPackageMetadata, TarballPackage\n-from dbt.deps.base import PinnedPackage, UnpinnedPackage\n+from dbt.clients import system\n+from dbt.config.project import PartialProject\n+from dbt.contracts.project import TarballPackage\n+from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path\n+from dbt.exceptions import DependencyError\n+from dbt.utils import _connection_exception_retry as connection_exception_retry\n \n \n class TarballPackageMixin:\n@@ -20,9 +27,10 @@\n class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):\n def __init__(self, tarball: str, package: str) -> None:\n super().__init__(tarball)\n- # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n+ self.tar_path = os.path.join(Path(get_downloads_path()), self.package)\n+ self.untarred_path = f\"{self.tar_path}_untarred\"\n \n @property\n def name(self):\n@@ -31,8 +39,7 @@\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n- \"version\": self.version,\n- \"package\": self.package,\n+ \"name\": self.package,\n }\n \n def get_version(self):\n@@ -42,23 +49,38 @@\n return f\"tarball (url: {self.tarball})\"\n \n def _fetch_metadata(self, project, renderer):\n- \"\"\"\n- recycle RegistryPackageMetadata so that we can use the install and\n- download_and_untar from RegistryPinnedPackage next.\n- build RegistryPackageMetadata from info passed via packages.yml since no\n- 'metadata' service exists in this case.\n- \"\"\"\n-\n- dct = {\n- \"name\": self.package,\n- \"packages\": [], # note: required by RegistryPackageMetadata\n- \"downloads\": {\"tarball\": self.tarball},\n- }\n-\n- return RegistryPackageMetadata.from_dict(dct)\n+ \"\"\"Download and untar the project and parse metadata from the project folder.\"\"\"\n+ download_untar_fn = functools.partial(\n+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n+ )\n+ connection_exception_retry(download_untar_fn, 5)\n+\n+ tar_contents = os.listdir(self.untarred_path)\n+ if len(tar_contents) != 1:\n+ raise DependencyError(\n+ f\"Incorrect structure for package extracted from {self.tarball}.\"\n+ f\"The extracted package needs to follow the structure {self.name}/<package_contents>.\"\n+ )\n+ child_folder = os.listdir(self.untarred_path)[0]\n+\n+ self.untarred_path = os.path.join(self.untarred_path, child_folder)\n+ partial = PartialProject.from_project_root(self.untarred_path)\n+ metadata = partial.render_package_metadata(renderer)\n+ metadata.name = self.package if self.package else metadata.name\n+ return metadata\n \n def install(self, project, renderer):\n- self._install(project, renderer)\n+ download_untar_fn = functools.partial(\n+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n+ )\n+ connection_exception_retry(download_untar_fn, 5)\n+ dest_path = self.get_installation_path(project, renderer)\n+ if os.path.exists(dest_path):\n+ if system.path_is_symlink(dest_path):\n+ system.remove_file(dest_path)\n+ else:\n+ system.rmdir(dest_path)\n+ system.move(self.untarred_path, dest_path)\n \n \n class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):\n", "issue": "[CT-3107] [Bug] nested dependencies 
not installed when package is a tarball\n### Is this a new bug in dbt-core?\r\n\r\n- [X] I believe this is a new bug in dbt-core\r\n- [X] I have searched the existing issues, and I could not find an existing issue for this bug\r\n\r\n### Current Behavior\r\n\r\nwhen running `dbt deps` to install a package specified as a tarball, dbt doesn't install nested dependencies (i.e. packages specified in the imported package's `packages.yml` file) as it does when installing a package from local, git or the dbt hub.\r\n\r\n### Expected Behavior\r\n\r\nconsistent behaviour across import methods regarding nested dependencies. dbt should install any dependencies specified in the tarball project's packages.yml file.\r\n\r\n\r\n### Steps To Reproduce\r\n\r\nthis can be reproduced by importing the tarball of a package with nested dependencies. In this case, importing dbt_expectations should cause dbt_date to be installed, as its included in the package's dependencies here: https://github.com/calogica/dbt-expectations/blob/0.9.0/packages.yml\r\n\r\nSteps:\r\n1. create a `packages.yml` file in a project with the following structure:\r\n``` yaml\r\npackages:\r\n - tarball: \"https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz\"\r\n name: \"dbt_expectations\"\r\n```\r\n2. run `dbt deps`\r\n\r\n\r\nrunning dbt deps will only install dbt_expectations:\r\n```\r\n20:08:55 Running with dbt=1.5.6\r\n20:08:55 Installing dbt_expectations\r\n20:08:56 Installed from tarball (url: https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz)\r\n```\r\ncompare this to installing the same package from dbt hub, with the following `packages.yml`:\r\n``` yaml\r\npackages:\r\n - package: calogica/dbt_expectations\r\n version: \"0.9.0\"\r\n```\r\n```\r\n20:14:24 Running with dbt=1.5.6\r\n20:14:24 Installing calogica/dbt_expectations\r\n20:14:25 Installed from version 0.9.0\r\n20:14:25 Up to date!\r\n20:14:25 Installing calogica/dbt_date\r\n20:14:25 Installed from version 0.8.1\r\n20:14:25 Updated version available: 0.9.1\r\n20:14:25 \r\n20:14:25 Updates available for packages: ['calogica/dbt_date'] \r\nUpdate your versions in packages.yml, then run dbt deps\r\n```\r\n\r\n### Relevant log output\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: Mac OS 13.5.2 (22G91)\r\n- Python: 3.9\r\n- dbt: 1.5.6\r\n```\r\n\r\n\r\n### Which database adapter are you using with dbt?\r\n\r\nsnowflake\r\n\r\n### Additional Context\r\n\r\n_No response_\n", "before_files": [{"content": "from typing import Dict\n\nfrom dbt.contracts.project import RegistryPackageMetadata, TarballPackage\nfrom dbt.deps.base import PinnedPackage, UnpinnedPackage\n\n\nclass TarballPackageMixin:\n def __init__(self, tarball: str) -> None:\n super().__init__()\n self.tarball = tarball\n\n @property\n def name(self):\n return self.tarball\n\n def source_type(self) -> str:\n return \"tarball\"\n\n\nclass TarballPinnedPackage(TarballPackageMixin, PinnedPackage):\n def __init__(self, tarball: str, package: str) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @property\n def name(self):\n return self.package\n\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n \"version\": self.version,\n \"package\": self.package,\n }\n\n def get_version(self):\n return self.version\n\n def nice_version_name(self):\n return f\"tarball (url: {self.tarball})\"\n\n def _fetch_metadata(self, project, renderer):\n 
\"\"\"\n recycle RegistryPackageMetadata so that we can use the install and\n download_and_untar from RegistryPinnedPackage next.\n build RegistryPackageMetadata from info passed via packages.yml since no\n 'metadata' service exists in this case.\n \"\"\"\n\n dct = {\n \"name\": self.package,\n \"packages\": [], # note: required by RegistryPackageMetadata\n \"downloads\": {\"tarball\": self.tarball},\n }\n\n return RegistryPackageMetadata.from_dict(dct)\n\n def install(self, project, renderer):\n self._install(project, renderer)\n\n\nclass TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):\n def __init__(\n self,\n tarball: str,\n package: str,\n ) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @classmethod\n def from_contract(cls, contract: TarballPackage) -> \"TarballUnpinnedPackage\":\n return cls(tarball=contract.tarball, package=contract.name)\n\n def incorporate(self, other: \"TarballUnpinnedPackage\") -> \"TarballUnpinnedPackage\":\n return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)\n\n def resolved(self) -> TarballPinnedPackage:\n return TarballPinnedPackage(tarball=self.tarball, package=self.package)\n", "path": "core/dbt/deps/tarball.py"}], "after_files": [{"content": "import functools\nimport os\nfrom pathlib import Path\nfrom typing import Dict\n\nfrom dbt.clients import system\nfrom dbt.config.project import PartialProject\nfrom dbt.contracts.project import TarballPackage\nfrom dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path\nfrom dbt.exceptions import DependencyError\nfrom dbt.utils import _connection_exception_retry as connection_exception_retry\n\n\nclass TarballPackageMixin:\n def __init__(self, tarball: str) -> None:\n super().__init__()\n self.tarball = tarball\n\n @property\n def name(self):\n return self.tarball\n\n def source_type(self) -> str:\n return \"tarball\"\n\n\nclass TarballPinnedPackage(TarballPackageMixin, PinnedPackage):\n def __init__(self, tarball: str, package: str) -> None:\n super().__init__(tarball)\n self.package = package\n self.version = \"tarball\"\n self.tar_path = os.path.join(Path(get_downloads_path()), self.package)\n self.untarred_path = f\"{self.tar_path}_untarred\"\n\n @property\n def name(self):\n return self.package\n\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n \"name\": self.package,\n }\n\n def get_version(self):\n return self.version\n\n def nice_version_name(self):\n return f\"tarball (url: {self.tarball})\"\n\n def _fetch_metadata(self, project, renderer):\n \"\"\"Download and untar the project and parse metadata from the project folder.\"\"\"\n download_untar_fn = functools.partial(\n self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n )\n connection_exception_retry(download_untar_fn, 5)\n\n tar_contents = os.listdir(self.untarred_path)\n if len(tar_contents) != 1:\n raise DependencyError(\n f\"Incorrect structure for package extracted from {self.tarball}.\"\n f\"The extracted package needs to follow the structure {self.name}/<package_contents>.\"\n )\n child_folder = os.listdir(self.untarred_path)[0]\n\n self.untarred_path = os.path.join(self.untarred_path, child_folder)\n partial = PartialProject.from_project_root(self.untarred_path)\n metadata = partial.render_package_metadata(renderer)\n metadata.name = self.package if self.package else metadata.name\n return metadata\n\n def 
install(self, project, renderer):\n download_untar_fn = functools.partial(\n self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n )\n connection_exception_retry(download_untar_fn, 5)\n dest_path = self.get_installation_path(project, renderer)\n if os.path.exists(dest_path):\n if system.path_is_symlink(dest_path):\n system.remove_file(dest_path)\n else:\n system.rmdir(dest_path)\n system.move(self.untarred_path, dest_path)\n\n\nclass TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):\n def __init__(\n self,\n tarball: str,\n package: str,\n ) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @classmethod\n def from_contract(cls, contract: TarballPackage) -> \"TarballUnpinnedPackage\":\n return cls(tarball=contract.tarball, package=contract.name)\n\n def incorporate(self, other: \"TarballUnpinnedPackage\") -> \"TarballUnpinnedPackage\":\n return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)\n\n def resolved(self) -> TarballPinnedPackage:\n return TarballPinnedPackage(tarball=self.tarball, package=self.package)\n", "path": "core/dbt/deps/tarball.py"}]} | 1,732 | 944 |
gh_patches_debug_12566 | rasdani/github-patches | git_diff | netbox-community__netbox-4303 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IP Prefix Family returned doesn't match swagger definition
### Environment
* Python version: 3.7.6
* NetBox version: v2.7.7
The Prefix.Family value returned by the API does not match the swagger definition.
### Steps to Reproduce
1. Get a prefix object `wget http://netbox/api/ipam/prefixes/210/`
2. Notice object is like
```
"family": {
"value": 4,
"label": "IPv4"
},
```
3. Notice definition is
```
"family": {
"label": "string",
"value": "string"
},
```
<!-- What did you expect to happen? -->
### Expected Behavior
Object returned matches definition. I'm not sure if the definition needs to be fixed or the returned value type needs to be changed.
### Observed Behavior
Object doesn't match definition
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/utilities/custom_inspectors.py`
Content:
```
1 from drf_yasg import openapi
2 from drf_yasg.inspectors import FieldInspector, NotHandled, PaginatorInspector, FilterInspector, SwaggerAutoSchema
3 from drf_yasg.utils import get_serializer_ref_name
4 from rest_framework.fields import ChoiceField
5 from rest_framework.relations import ManyRelatedField
6 from taggit_serializer.serializers import TagListSerializerField
7
8 from dcim.api.serializers import InterfaceSerializer as DeviceInterfaceSerializer
9 from extras.api.customfields import CustomFieldsSerializer
10 from utilities.api import ChoiceField, SerializedPKRelatedField, WritableNestedSerializer
11 from virtualization.api.serializers import InterfaceSerializer as VirtualMachineInterfaceSerializer
12
13 # this might be ugly, but it limits drf_yasg-specific code to this file
14 DeviceInterfaceSerializer.Meta.ref_name = 'DeviceInterface'
15 VirtualMachineInterfaceSerializer.Meta.ref_name = 'VirtualMachineInterface'
16
17
18 class NetBoxSwaggerAutoSchema(SwaggerAutoSchema):
19 writable_serializers = {}
20
21 def get_request_serializer(self):
22 serializer = super().get_request_serializer()
23
24 if serializer is not None and self.method in self.implicit_body_methods:
25 properties = {}
26 for child_name, child in serializer.fields.items():
27 if isinstance(child, (ChoiceField, WritableNestedSerializer)):
28 properties[child_name] = None
29 elif isinstance(child, ManyRelatedField) and isinstance(child.child_relation, SerializedPKRelatedField):
30 properties[child_name] = None
31
32 if properties:
33 if type(serializer) not in self.writable_serializers:
34 writable_name = 'Writable' + type(serializer).__name__
35 meta_class = getattr(type(serializer), 'Meta', None)
36 if meta_class:
37 ref_name = 'Writable' + get_serializer_ref_name(serializer)
38 writable_meta = type('Meta', (meta_class,), {'ref_name': ref_name})
39 properties['Meta'] = writable_meta
40
41 self.writable_serializers[type(serializer)] = type(writable_name, (type(serializer),), properties)
42
43 writable_class = self.writable_serializers[type(serializer)]
44 serializer = writable_class()
45
46 return serializer
47
48
49 class SerializedPKRelatedFieldInspector(FieldInspector):
50 def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):
51 SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)
52 if isinstance(field, SerializedPKRelatedField):
53 return self.probe_field_inspectors(field.serializer(), ChildSwaggerType, use_references)
54
55 return NotHandled
56
57
58 class TagListFieldInspector(FieldInspector):
59 def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):
60 SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)
61 if isinstance(field, TagListSerializerField):
62 child_schema = self.probe_field_inspectors(field.child, ChildSwaggerType, use_references)
63 return SwaggerType(
64 type=openapi.TYPE_ARRAY,
65 items=child_schema,
66 )
67
68 return NotHandled
69
70
71 class CustomChoiceFieldInspector(FieldInspector):
72 def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):
73 # this returns a callable which extracts title, description and other stuff
74 # https://drf-yasg.readthedocs.io/en/stable/_modules/drf_yasg/inspectors/base.html#FieldInspector._get_partial_types
75 SwaggerType, _ = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)
76
77 if isinstance(field, ChoiceField):
78 value_schema = openapi.Schema(type=openapi.TYPE_STRING)
79
80 choices = list(field._choices.keys())
81 if set([None] + choices) == {None, True, False}:
82 # DeviceType.subdevice_role, Device.face and InterfaceConnection.connection_status all need to be
83 # differentiated since they each have subtly different values in their choice keys.
84 # - subdevice_role and connection_status are booleans, although subdevice_role includes None
85 # - face is an integer set {0, 1} which is easily confused with {False, True}
86 schema_type = openapi.TYPE_STRING
87 if all(type(x) == bool for x in [c for c in choices if c is not None]):
88 schema_type = openapi.TYPE_BOOLEAN
89 value_schema = openapi.Schema(type=schema_type)
90 value_schema['x-nullable'] = True
91
92 schema = SwaggerType(type=openapi.TYPE_OBJECT, required=["label", "value"], properties={
93 "label": openapi.Schema(type=openapi.TYPE_STRING),
94 "value": value_schema
95 })
96
97 return schema
98
99 elif isinstance(field, CustomFieldsSerializer):
100 schema = SwaggerType(type=openapi.TYPE_OBJECT)
101 return schema
102
103 return NotHandled
104
105
106 class NullableBooleanFieldInspector(FieldInspector):
107 def process_result(self, result, method_name, obj, **kwargs):
108
109 if isinstance(result, openapi.Schema) and isinstance(obj, ChoiceField) and result.type == 'boolean':
110 keys = obj.choices.keys()
111 if set(keys) == {None, True, False}:
112 result['x-nullable'] = True
113 result.type = 'boolean'
114
115 return result
116
117
118 class IdInFilterInspector(FilterInspector):
119 def process_result(self, result, method_name, obj, **kwargs):
120 if isinstance(result, list):
121 params = [p for p in result if isinstance(p, openapi.Parameter) and p.name == 'id__in']
122 for p in params:
123 p.type = 'string'
124
125 return result
126
127
128 class NullablePaginatorInspector(PaginatorInspector):
129 def process_result(self, result, method_name, obj, **kwargs):
130 if method_name == 'get_paginated_response' and isinstance(result, openapi.Schema):
131 next = result.properties['next']
132 if isinstance(next, openapi.Schema):
133 next['x-nullable'] = True
134 previous = result.properties['previous']
135 if isinstance(previous, openapi.Schema):
136 previous['x-nullable'] = True
137
138 return result
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/utilities/custom_inspectors.py b/netbox/utilities/custom_inspectors.py
--- a/netbox/utilities/custom_inspectors.py
+++ b/netbox/utilities/custom_inspectors.py
@@ -89,6 +89,10 @@
value_schema = openapi.Schema(type=schema_type)
value_schema['x-nullable'] = True
+ if isinstance(choices[0], int):
+ # Change value_schema for IPAddressFamilyChoices, RackWidthChoices
+ value_schema = openapi.Schema(type=openapi.TYPE_INTEGER)
+
schema = SwaggerType(type=openapi.TYPE_OBJECT, required=["label", "value"], properties={
"label": openapi.Schema(type=openapi.TYPE_STRING),
"value": value_schema
| {"golden_diff": "diff --git a/netbox/utilities/custom_inspectors.py b/netbox/utilities/custom_inspectors.py\n--- a/netbox/utilities/custom_inspectors.py\n+++ b/netbox/utilities/custom_inspectors.py\n@@ -89,6 +89,10 @@\n value_schema = openapi.Schema(type=schema_type)\n value_schema['x-nullable'] = True\n \n+ if isinstance(choices[0], int):\n+ # Change value_schema for IPAddressFamilyChoices, RackWidthChoices\n+ value_schema = openapi.Schema(type=openapi.TYPE_INTEGER)\n+\n schema = SwaggerType(type=openapi.TYPE_OBJECT, required=[\"label\", \"value\"], properties={\n \"label\": openapi.Schema(type=openapi.TYPE_STRING),\n \"value\": value_schema\n", "issue": "IP Prefix Family returned doesn't match swagger definition\n### Environment\r\n* Python version: 3.7.6\r\n* NetBox version: v2.7.7\r\n\r\nSwagger definition for Prefix.Family does not match the swagger definition.\r\n\r\n### Steps to Reproduce\r\n1. Get a prefix object `wget http://netbox/api/ipam/prefixes/210/`\r\n2. Notice object is like\r\n```\r\n \"family\": {\r\n \"value\": 4,\r\n \"label\": \"IPv4\"\r\n },\r\n```\r\n3. Notice definition is\r\n```\r\n \"family\": {\r\n \"label\": \"string\",\r\n \"value\": \"string\"\r\n },\r\n```\r\n\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nObject returned matches definition. I'm not sure if the definition needs to be fixed or the returned value type needs to be changed.\r\n\r\n<!-- What happened instead? -->\r\n### Observed Behavior\r\nObject doesn't match definition\n", "before_files": [{"content": "from drf_yasg import openapi\nfrom drf_yasg.inspectors import FieldInspector, NotHandled, PaginatorInspector, FilterInspector, SwaggerAutoSchema\nfrom drf_yasg.utils import get_serializer_ref_name\nfrom rest_framework.fields import ChoiceField\nfrom rest_framework.relations import ManyRelatedField\nfrom taggit_serializer.serializers import TagListSerializerField\n\nfrom dcim.api.serializers import InterfaceSerializer as DeviceInterfaceSerializer\nfrom extras.api.customfields import CustomFieldsSerializer\nfrom utilities.api import ChoiceField, SerializedPKRelatedField, WritableNestedSerializer\nfrom virtualization.api.serializers import InterfaceSerializer as VirtualMachineInterfaceSerializer\n\n# this might be ugly, but it limits drf_yasg-specific code to this file\nDeviceInterfaceSerializer.Meta.ref_name = 'DeviceInterface'\nVirtualMachineInterfaceSerializer.Meta.ref_name = 'VirtualMachineInterface'\n\n\nclass NetBoxSwaggerAutoSchema(SwaggerAutoSchema):\n writable_serializers = {}\n\n def get_request_serializer(self):\n serializer = super().get_request_serializer()\n\n if serializer is not None and self.method in self.implicit_body_methods:\n properties = {}\n for child_name, child in serializer.fields.items():\n if isinstance(child, (ChoiceField, WritableNestedSerializer)):\n properties[child_name] = None\n elif isinstance(child, ManyRelatedField) and isinstance(child.child_relation, SerializedPKRelatedField):\n properties[child_name] = None\n\n if properties:\n if type(serializer) not in self.writable_serializers:\n writable_name = 'Writable' + type(serializer).__name__\n meta_class = getattr(type(serializer), 'Meta', None)\n if meta_class:\n ref_name = 'Writable' + get_serializer_ref_name(serializer)\n writable_meta = type('Meta', (meta_class,), {'ref_name': ref_name})\n properties['Meta'] = writable_meta\n\n self.writable_serializers[type(serializer)] = type(writable_name, (type(serializer),), properties)\n\n writable_class = 
self.writable_serializers[type(serializer)]\n serializer = writable_class()\n\n return serializer\n\n\nclass SerializedPKRelatedFieldInspector(FieldInspector):\n def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):\n SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)\n if isinstance(field, SerializedPKRelatedField):\n return self.probe_field_inspectors(field.serializer(), ChildSwaggerType, use_references)\n\n return NotHandled\n\n\nclass TagListFieldInspector(FieldInspector):\n def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):\n SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)\n if isinstance(field, TagListSerializerField):\n child_schema = self.probe_field_inspectors(field.child, ChildSwaggerType, use_references)\n return SwaggerType(\n type=openapi.TYPE_ARRAY,\n items=child_schema,\n )\n\n return NotHandled\n\n\nclass CustomChoiceFieldInspector(FieldInspector):\n def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):\n # this returns a callable which extracts title, description and other stuff\n # https://drf-yasg.readthedocs.io/en/stable/_modules/drf_yasg/inspectors/base.html#FieldInspector._get_partial_types\n SwaggerType, _ = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)\n\n if isinstance(field, ChoiceField):\n value_schema = openapi.Schema(type=openapi.TYPE_STRING)\n\n choices = list(field._choices.keys())\n if set([None] + choices) == {None, True, False}:\n # DeviceType.subdevice_role, Device.face and InterfaceConnection.connection_status all need to be\n # differentiated since they each have subtly different values in their choice keys.\n # - subdevice_role and connection_status are booleans, although subdevice_role includes None\n # - face is an integer set {0, 1} which is easily confused with {False, True}\n schema_type = openapi.TYPE_STRING\n if all(type(x) == bool for x in [c for c in choices if c is not None]):\n schema_type = openapi.TYPE_BOOLEAN\n value_schema = openapi.Schema(type=schema_type)\n value_schema['x-nullable'] = True\n\n schema = SwaggerType(type=openapi.TYPE_OBJECT, required=[\"label\", \"value\"], properties={\n \"label\": openapi.Schema(type=openapi.TYPE_STRING),\n \"value\": value_schema\n })\n\n return schema\n\n elif isinstance(field, CustomFieldsSerializer):\n schema = SwaggerType(type=openapi.TYPE_OBJECT)\n return schema\n\n return NotHandled\n\n\nclass NullableBooleanFieldInspector(FieldInspector):\n def process_result(self, result, method_name, obj, **kwargs):\n\n if isinstance(result, openapi.Schema) and isinstance(obj, ChoiceField) and result.type == 'boolean':\n keys = obj.choices.keys()\n if set(keys) == {None, True, False}:\n result['x-nullable'] = True\n result.type = 'boolean'\n\n return result\n\n\nclass IdInFilterInspector(FilterInspector):\n def process_result(self, result, method_name, obj, **kwargs):\n if isinstance(result, list):\n params = [p for p in result if isinstance(p, openapi.Parameter) and p.name == 'id__in']\n for p in params:\n p.type = 'string'\n\n return result\n\n\nclass NullablePaginatorInspector(PaginatorInspector):\n def process_result(self, result, method_name, obj, **kwargs):\n if method_name == 'get_paginated_response' and isinstance(result, openapi.Schema):\n next = result.properties['next']\n if isinstance(next, openapi.Schema):\n next['x-nullable'] = True\n 
previous = result.properties['previous']\n if isinstance(previous, openapi.Schema):\n previous['x-nullable'] = True\n\n return result\n", "path": "netbox/utilities/custom_inspectors.py"}], "after_files": [{"content": "from drf_yasg import openapi\nfrom drf_yasg.inspectors import FieldInspector, NotHandled, PaginatorInspector, FilterInspector, SwaggerAutoSchema\nfrom drf_yasg.utils import get_serializer_ref_name\nfrom rest_framework.fields import ChoiceField\nfrom rest_framework.relations import ManyRelatedField\nfrom taggit_serializer.serializers import TagListSerializerField\n\nfrom dcim.api.serializers import InterfaceSerializer as DeviceInterfaceSerializer\nfrom extras.api.customfields import CustomFieldsSerializer\nfrom utilities.api import ChoiceField, SerializedPKRelatedField, WritableNestedSerializer\nfrom virtualization.api.serializers import InterfaceSerializer as VirtualMachineInterfaceSerializer\n\n# this might be ugly, but it limits drf_yasg-specific code to this file\nDeviceInterfaceSerializer.Meta.ref_name = 'DeviceInterface'\nVirtualMachineInterfaceSerializer.Meta.ref_name = 'VirtualMachineInterface'\n\n\nclass NetBoxSwaggerAutoSchema(SwaggerAutoSchema):\n writable_serializers = {}\n\n def get_request_serializer(self):\n serializer = super().get_request_serializer()\n\n if serializer is not None and self.method in self.implicit_body_methods:\n properties = {}\n for child_name, child in serializer.fields.items():\n if isinstance(child, (ChoiceField, WritableNestedSerializer)):\n properties[child_name] = None\n elif isinstance(child, ManyRelatedField) and isinstance(child.child_relation, SerializedPKRelatedField):\n properties[child_name] = None\n\n if properties:\n if type(serializer) not in self.writable_serializers:\n writable_name = 'Writable' + type(serializer).__name__\n meta_class = getattr(type(serializer), 'Meta', None)\n if meta_class:\n ref_name = 'Writable' + get_serializer_ref_name(serializer)\n writable_meta = type('Meta', (meta_class,), {'ref_name': ref_name})\n properties['Meta'] = writable_meta\n\n self.writable_serializers[type(serializer)] = type(writable_name, (type(serializer),), properties)\n\n writable_class = self.writable_serializers[type(serializer)]\n serializer = writable_class()\n\n return serializer\n\n\nclass SerializedPKRelatedFieldInspector(FieldInspector):\n def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):\n SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)\n if isinstance(field, SerializedPKRelatedField):\n return self.probe_field_inspectors(field.serializer(), ChildSwaggerType, use_references)\n\n return NotHandled\n\n\nclass TagListFieldInspector(FieldInspector):\n def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):\n SwaggerType, ChildSwaggerType = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)\n if isinstance(field, TagListSerializerField):\n child_schema = self.probe_field_inspectors(field.child, ChildSwaggerType, use_references)\n return SwaggerType(\n type=openapi.TYPE_ARRAY,\n items=child_schema,\n )\n\n return NotHandled\n\n\nclass CustomChoiceFieldInspector(FieldInspector):\n def field_to_swagger_object(self, field, swagger_object_type, use_references, **kwargs):\n # this returns a callable which extracts title, description and other stuff\n # https://drf-yasg.readthedocs.io/en/stable/_modules/drf_yasg/inspectors/base.html#FieldInspector._get_partial_types\n 
SwaggerType, _ = self._get_partial_types(field, swagger_object_type, use_references, **kwargs)\n\n if isinstance(field, ChoiceField):\n value_schema = openapi.Schema(type=openapi.TYPE_STRING)\n\n choices = list(field._choices.keys())\n if set([None] + choices) == {None, True, False}:\n # DeviceType.subdevice_role, Device.face and InterfaceConnection.connection_status all need to be\n # differentiated since they each have subtly different values in their choice keys.\n # - subdevice_role and connection_status are booleans, although subdevice_role includes None\n # - face is an integer set {0, 1} which is easily confused with {False, True}\n schema_type = openapi.TYPE_STRING\n if all(type(x) == bool for x in [c for c in choices if c is not None]):\n schema_type = openapi.TYPE_BOOLEAN\n value_schema = openapi.Schema(type=schema_type)\n value_schema['x-nullable'] = True\n\n if isinstance(choices[0], int):\n # Change value_schema for IPAddressFamilyChoices, RackWidthChoices\n value_schema = openapi.Schema(type=openapi.TYPE_INTEGER)\n\n schema = SwaggerType(type=openapi.TYPE_OBJECT, required=[\"label\", \"value\"], properties={\n \"label\": openapi.Schema(type=openapi.TYPE_STRING),\n \"value\": value_schema\n })\n\n return schema\n\n elif isinstance(field, CustomFieldsSerializer):\n schema = SwaggerType(type=openapi.TYPE_OBJECT)\n return schema\n\n return NotHandled\n\n\nclass NullableBooleanFieldInspector(FieldInspector):\n def process_result(self, result, method_name, obj, **kwargs):\n\n if isinstance(result, openapi.Schema) and isinstance(obj, ChoiceField) and result.type == 'boolean':\n keys = obj.choices.keys()\n if set(keys) == {None, True, False}:\n result['x-nullable'] = True\n result.type = 'boolean'\n\n return result\n\n\nclass IdInFilterInspector(FilterInspector):\n def process_result(self, result, method_name, obj, **kwargs):\n if isinstance(result, list):\n params = [p for p in result if isinstance(p, openapi.Parameter) and p.name == 'id__in']\n for p in params:\n p.type = 'string'\n\n return result\n\n\nclass NullablePaginatorInspector(PaginatorInspector):\n def process_result(self, result, method_name, obj, **kwargs):\n if method_name == 'get_paginated_response' and isinstance(result, openapi.Schema):\n next = result.properties['next']\n if isinstance(next, openapi.Schema):\n next['x-nullable'] = True\n previous = result.properties['previous']\n if isinstance(previous, openapi.Schema):\n previous['x-nullable'] = True\n\n return result\n", "path": "netbox/utilities/custom_inspectors.py"}]} | 2,066 | 164 |
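
For readers skimming the record above: the golden diff keys the documented type of a ChoiceField's `value` off the type of its choice keys, so integer-keyed choices such as the IP prefix family are rendered as integers in the schema. A condensed sketch of that idea (the helper name `value_schema_for` and the sample choice lists are invented for illustration; this is not the project's code):

```python
from drf_yasg import openapi

def value_schema_for(choices):
    # Default, as in the unpatched inspector: document the value as a string.
    value_schema = openapi.Schema(type=openapi.TYPE_STRING)
    if choices and isinstance(choices[0], int):
        # Integer-keyed choices (e.g. IPAddressFamilyChoices, RackWidthChoices) are
        # documented as integers, matching responses like {"value": 4, "label": "IPv4"}.
        value_schema = openapi.Schema(type=openapi.TYPE_INTEGER)
    return value_schema

print(value_schema_for([4, 6]).type)      # integer
print(value_schema_for(["active"]).type)  # string
```

The boolean/nullable handling present in the original inspector is omitted here to keep the sketch short.
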
gh_patches_debug_25038 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-715 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Better handle newly added ElasticSearch functions
When using the agent with an older version of ElasticSearch, the following warning is logged:
```
Failed to instrument elasticsearch.Elasticsearch.search_mvt: AttributeError("type object 'Elasticsearch' has no attribute 'search_mvt'")
```
When a client method doesn't exist, the agent should either ignore it or more quietly log that information.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/instruments/elasticsearch.py`
Content:
```
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import logging
5 from collections import namedtuple
6
7 import wrapt
8
9 from scout_apm.compat import get_pos_args, unwrap_decorators
10 from scout_apm.core.tracked_request import TrackedRequest
11
12 try:
13 from elasticsearch import Elasticsearch, Transport
14 except ImportError: # pragma: no cover
15 Elasticsearch = None
16 Transport = None
17
18 logger = logging.getLogger(__name__)
19
20
21 def ensure_installed():
22 logger.debug("Instrumenting elasticsearch.")
23
24 if Elasticsearch is None:
25 logger.debug(
26 "Couldn't import elasticsearch.Elasticsearch - probably not installed."
27 )
28 else:
29 ensure_client_instrumented()
30 ensure_transport_instrumented()
31
32
33 ClientMethod = namedtuple("ClientMethod", ["name", "takes_index_argument"])
34
35 CLIENT_METHODS = [
36 ClientMethod("bulk", True),
37 ClientMethod("clear_scroll", False),
38 ClientMethod("close", False),
39 ClientMethod("close_point_in_time", False),
40 ClientMethod("count", True),
41 ClientMethod("create", True),
42 ClientMethod("delete", True),
43 ClientMethod("delete_by_query", True),
44 ClientMethod("delete_by_query_rethrottle", False),
45 ClientMethod("delete_script", False),
46 ClientMethod("exists", True),
47 ClientMethod("exists_source", True),
48 ClientMethod("explain", True),
49 ClientMethod("field_caps", True),
50 ClientMethod("get", True),
51 ClientMethod("get_script", False),
52 ClientMethod("get_script_context", False),
53 ClientMethod("get_script_languages", False),
54 ClientMethod("get_source", True),
55 ClientMethod("index", True),
56 ClientMethod("info", False),
57 ClientMethod("mget", True),
58 ClientMethod("msearch", True),
59 ClientMethod("msearch_template", True),
60 ClientMethod("mtermvectors", True),
61 ClientMethod("open_point_in_time", True),
62 ClientMethod("ping", False),
63 ClientMethod("put_script", False),
64 ClientMethod("rank_eval", True),
65 ClientMethod("reindex", False),
66 ClientMethod("reindex_rethrottle", False),
67 ClientMethod("render_search_template", False),
68 ClientMethod("scripts_painless_execute", False),
69 ClientMethod("scroll", False),
70 ClientMethod("search", True),
71 ClientMethod("search_mvt", True),
72 ClientMethod("search_shards", True),
73 ClientMethod("search_template", True),
74 ClientMethod("termvectors", True),
75 ClientMethod("terms_enum", True),
76 ClientMethod("update", True),
77 ClientMethod("update_by_query", True),
78 ClientMethod("update_by_query_rethrottle", False),
79 ]
80
81
82 have_patched_client = False
83
84
85 def ensure_client_instrumented():
86 global have_patched_client
87
88 if not have_patched_client:
89 for name, takes_index_argument in CLIENT_METHODS:
90 try:
91 method = getattr(Elasticsearch, name)
92 if takes_index_argument:
93 wrapped = wrap_client_index_method(method)
94 else:
95 wrapped = wrap_client_method(method)
96 setattr(Elasticsearch, name, wrapped)
97 except Exception as exc:
98 logger.warning(
99 "Failed to instrument elasticsearch.Elasticsearch.%s: %r",
100 name,
101 exc,
102 exc_info=exc,
103 )
104
105 have_patched_client = True
106
107
108 @wrapt.decorator
109 def wrap_client_index_method(wrapped, instance, args, kwargs):
110 # elasticsearch-py 7.5.1 changed the order of arguments for client methods,
111 # so to be safe we need to inspect the wrapped method's positional
112 # arguments to see if we should pull it from there
113 if "index" in kwargs:
114 index = kwargs["index"]
115 else:
116 unwrapped = unwrap_decorators(wrapped)
117 pos_args = get_pos_args(unwrapped)
118 try:
119 index_index = pos_args.index("index")
120 except ValueError: # pragma: no cover
121 # This guards against the method not accepting an 'index' argument
122 # but they all do - for now
123 index = ""
124 else:
125 try:
126 index = args[index_index - 1] # subtract 'self'
127 except IndexError:
128 index = ""
129
130 if isinstance(index, (list, tuple)):
131 index = ",".join(index)
132 if index == "":
133 index = "Unknown"
134 index = index.title()
135
136 camel_name = "".join(c.title() for c in wrapped.__name__.split("_"))
137 operation = "Elasticsearch/{}/{}".format(index, camel_name)
138 tracked_request = TrackedRequest.instance()
139 with tracked_request.span(operation=operation, ignore_children=True):
140 return wrapped(*args, **kwargs)
141
142
143 @wrapt.decorator
144 def wrap_client_method(wrapped, instance, args, kwargs):
145 camel_name = "".join(c.title() for c in wrapped.__name__.split("_"))
146 operation = "Elasticsearch/{}".format(camel_name)
147 tracked_request = TrackedRequest.instance()
148 with tracked_request.span(operation=operation, ignore_children=True):
149 return wrapped(*args, **kwargs)
150
151
152 have_patched_transport = False
153
154
155 def ensure_transport_instrumented():
156 global have_patched_transport
157
158 if not have_patched_transport:
159 try:
160 Transport.perform_request = wrapped_perform_request(
161 Transport.perform_request
162 )
163 except Exception as exc:
164 logger.warning(
165 "Failed to instrument elasticsearch.Transport.perform_request: %r",
166 exc,
167 exc_info=exc,
168 )
169
170 have_patched_transport = True
171
172
173 def _sanitize_name(name):
174 try:
175 op = name.split("/")[-1]
176 op = op[1:] # chop leading '_' from op
177 known_names = (
178 "bench",
179 "bulk",
180 "count",
181 "exists",
182 "explain",
183 "field_stats",
184 "health",
185 "mget",
186 "mlt",
187 "mpercolate",
188 "msearch",
189 "mtermvectors",
190 "percolate",
191 "query",
192 "scroll",
193 "search_shards",
194 "source",
195 "suggest",
196 "template",
197 "termvectors",
198 "update",
199 "search",
200 )
201 if op in known_names:
202 return op.title()
203 return "Unknown"
204 except Exception:
205 return "Unknown"
206
207
208 @wrapt.decorator
209 def wrapped_perform_request(wrapped, instance, args, kwargs):
210 try:
211 op = _sanitize_name(args[1])
212 except IndexError:
213 op = "Unknown"
214
215 tracked_request = TrackedRequest.instance()
216 with tracked_request.span(
217 operation="Elasticsearch/{}".format(op),
218 ignore_children=True,
219 ):
220 return wrapped(*args, **kwargs)
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/scout_apm/instruments/elasticsearch.py b/src/scout_apm/instruments/elasticsearch.py
--- a/src/scout_apm/instruments/elasticsearch.py
+++ b/src/scout_apm/instruments/elasticsearch.py
@@ -86,6 +86,7 @@
global have_patched_client
if not have_patched_client:
+ instrumented_count = 0
for name, takes_index_argument in CLIENT_METHODS:
try:
method = getattr(Elasticsearch, name)
@@ -94,13 +95,19 @@
else:
wrapped = wrap_client_method(method)
setattr(Elasticsearch, name, wrapped)
+ instrumented_count += 1
except Exception as exc:
- logger.warning(
+ logger.debug(
"Failed to instrument elasticsearch.Elasticsearch.%s: %r",
name,
exc,
exc_info=exc,
)
+ if instrumented_count == 0:
+ logger.warning(
+ "Failed to instrument any elasticsearch.Elasticsearch methods."
+ " Enable debug logs to view root causes."
+ )
have_patched_client = True
| {"golden_diff": "diff --git a/src/scout_apm/instruments/elasticsearch.py b/src/scout_apm/instruments/elasticsearch.py\n--- a/src/scout_apm/instruments/elasticsearch.py\n+++ b/src/scout_apm/instruments/elasticsearch.py\n@@ -86,6 +86,7 @@\n global have_patched_client\n \n if not have_patched_client:\n+ instrumented_count = 0\n for name, takes_index_argument in CLIENT_METHODS:\n try:\n method = getattr(Elasticsearch, name)\n@@ -94,13 +95,19 @@\n else:\n wrapped = wrap_client_method(method)\n setattr(Elasticsearch, name, wrapped)\n+ instrumented_count += 1\n except Exception as exc:\n- logger.warning(\n+ logger.debug(\n \"Failed to instrument elasticsearch.Elasticsearch.%s: %r\",\n name,\n exc,\n exc_info=exc,\n )\n+ if instrumented_count == 0:\n+ logger.warning(\n+ \"Failed to instrument any elasticsearch.Elasticsearch methods.\"\n+ \" Enable debug logs to view root causes.\"\n+ )\n \n have_patched_client = True\n", "issue": "Better handle newly added ElasticSearch functions\nWhen using the agent with an older version of ElasticSearch, the following warning is logged:\r\n\r\n```\r\nFailed to instrument elasticsearch.Elasticsearch.search_mvt: AttributeError(\"type object 'Elasticsearch' has no attribute 'search_mvt'\")\r\n```\r\n\r\nWhen a client method doesn't exist, the agent should either ignore it or more quietly log that information.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nfrom collections import namedtuple\n\nimport wrapt\n\nfrom scout_apm.compat import get_pos_args, unwrap_decorators\nfrom scout_apm.core.tracked_request import TrackedRequest\n\ntry:\n from elasticsearch import Elasticsearch, Transport\nexcept ImportError: # pragma: no cover\n Elasticsearch = None\n Transport = None\n\nlogger = logging.getLogger(__name__)\n\n\ndef ensure_installed():\n logger.debug(\"Instrumenting elasticsearch.\")\n\n if Elasticsearch is None:\n logger.debug(\n \"Couldn't import elasticsearch.Elasticsearch - probably not installed.\"\n )\n else:\n ensure_client_instrumented()\n ensure_transport_instrumented()\n\n\nClientMethod = namedtuple(\"ClientMethod\", [\"name\", \"takes_index_argument\"])\n\nCLIENT_METHODS = [\n ClientMethod(\"bulk\", True),\n ClientMethod(\"clear_scroll\", False),\n ClientMethod(\"close\", False),\n ClientMethod(\"close_point_in_time\", False),\n ClientMethod(\"count\", True),\n ClientMethod(\"create\", True),\n ClientMethod(\"delete\", True),\n ClientMethod(\"delete_by_query\", True),\n ClientMethod(\"delete_by_query_rethrottle\", False),\n ClientMethod(\"delete_script\", False),\n ClientMethod(\"exists\", True),\n ClientMethod(\"exists_source\", True),\n ClientMethod(\"explain\", True),\n ClientMethod(\"field_caps\", True),\n ClientMethod(\"get\", True),\n ClientMethod(\"get_script\", False),\n ClientMethod(\"get_script_context\", False),\n ClientMethod(\"get_script_languages\", False),\n ClientMethod(\"get_source\", True),\n ClientMethod(\"index\", True),\n ClientMethod(\"info\", False),\n ClientMethod(\"mget\", True),\n ClientMethod(\"msearch\", True),\n ClientMethod(\"msearch_template\", True),\n ClientMethod(\"mtermvectors\", True),\n ClientMethod(\"open_point_in_time\", True),\n ClientMethod(\"ping\", False),\n ClientMethod(\"put_script\", False),\n ClientMethod(\"rank_eval\", True),\n ClientMethod(\"reindex\", False),\n ClientMethod(\"reindex_rethrottle\", False),\n ClientMethod(\"render_search_template\", False),\n ClientMethod(\"scripts_painless_execute\", 
False),\n ClientMethod(\"scroll\", False),\n ClientMethod(\"search\", True),\n ClientMethod(\"search_mvt\", True),\n ClientMethod(\"search_shards\", True),\n ClientMethod(\"search_template\", True),\n ClientMethod(\"termvectors\", True),\n ClientMethod(\"terms_enum\", True),\n ClientMethod(\"update\", True),\n ClientMethod(\"update_by_query\", True),\n ClientMethod(\"update_by_query_rethrottle\", False),\n]\n\n\nhave_patched_client = False\n\n\ndef ensure_client_instrumented():\n global have_patched_client\n\n if not have_patched_client:\n for name, takes_index_argument in CLIENT_METHODS:\n try:\n method = getattr(Elasticsearch, name)\n if takes_index_argument:\n wrapped = wrap_client_index_method(method)\n else:\n wrapped = wrap_client_method(method)\n setattr(Elasticsearch, name, wrapped)\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Elasticsearch.%s: %r\",\n name,\n exc,\n exc_info=exc,\n )\n\n have_patched_client = True\n\n\[email protected]\ndef wrap_client_index_method(wrapped, instance, args, kwargs):\n # elasticsearch-py 7.5.1 changed the order of arguments for client methods,\n # so to be safe we need to inspect the wrapped method's positional\n # arguments to see if we should pull it from there\n if \"index\" in kwargs:\n index = kwargs[\"index\"]\n else:\n unwrapped = unwrap_decorators(wrapped)\n pos_args = get_pos_args(unwrapped)\n try:\n index_index = pos_args.index(\"index\")\n except ValueError: # pragma: no cover\n # This guards against the method not accepting an 'index' argument\n # but they all do - for now\n index = \"\"\n else:\n try:\n index = args[index_index - 1] # subtract 'self'\n except IndexError:\n index = \"\"\n\n if isinstance(index, (list, tuple)):\n index = \",\".join(index)\n if index == \"\":\n index = \"Unknown\"\n index = index.title()\n\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}/{}\".format(index, camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\[email protected]\ndef wrap_client_method(wrapped, instance, args, kwargs):\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}\".format(camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\nhave_patched_transport = False\n\n\ndef ensure_transport_instrumented():\n global have_patched_transport\n\n if not have_patched_transport:\n try:\n Transport.perform_request = wrapped_perform_request(\n Transport.perform_request\n )\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Transport.perform_request: %r\",\n exc,\n exc_info=exc,\n )\n\n have_patched_transport = True\n\n\ndef _sanitize_name(name):\n try:\n op = name.split(\"/\")[-1]\n op = op[1:] # chop leading '_' from op\n known_names = (\n \"bench\",\n \"bulk\",\n \"count\",\n \"exists\",\n \"explain\",\n \"field_stats\",\n \"health\",\n \"mget\",\n \"mlt\",\n \"mpercolate\",\n \"msearch\",\n \"mtermvectors\",\n \"percolate\",\n \"query\",\n \"scroll\",\n \"search_shards\",\n \"source\",\n \"suggest\",\n \"template\",\n \"termvectors\",\n \"update\",\n \"search\",\n )\n if op in known_names:\n return op.title()\n return \"Unknown\"\n except Exception:\n return \"Unknown\"\n\n\[email protected]\ndef wrapped_perform_request(wrapped, instance, 
args, kwargs):\n try:\n op = _sanitize_name(args[1])\n except IndexError:\n op = \"Unknown\"\n\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(\n operation=\"Elasticsearch/{}\".format(op),\n ignore_children=True,\n ):\n return wrapped(*args, **kwargs)\n", "path": "src/scout_apm/instruments/elasticsearch.py"}], "after_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport logging\nfrom collections import namedtuple\n\nimport wrapt\n\nfrom scout_apm.compat import get_pos_args, unwrap_decorators\nfrom scout_apm.core.tracked_request import TrackedRequest\n\ntry:\n from elasticsearch import Elasticsearch, Transport\nexcept ImportError: # pragma: no cover\n Elasticsearch = None\n Transport = None\n\nlogger = logging.getLogger(__name__)\n\n\ndef ensure_installed():\n logger.debug(\"Instrumenting elasticsearch.\")\n\n if Elasticsearch is None:\n logger.debug(\n \"Couldn't import elasticsearch.Elasticsearch - probably not installed.\"\n )\n else:\n ensure_client_instrumented()\n ensure_transport_instrumented()\n\n\nClientMethod = namedtuple(\"ClientMethod\", [\"name\", \"takes_index_argument\"])\n\nCLIENT_METHODS = [\n ClientMethod(\"bulk\", True),\n ClientMethod(\"clear_scroll\", False),\n ClientMethod(\"close\", False),\n ClientMethod(\"close_point_in_time\", False),\n ClientMethod(\"count\", True),\n ClientMethod(\"create\", True),\n ClientMethod(\"delete\", True),\n ClientMethod(\"delete_by_query\", True),\n ClientMethod(\"delete_by_query_rethrottle\", False),\n ClientMethod(\"delete_script\", False),\n ClientMethod(\"exists\", True),\n ClientMethod(\"exists_source\", True),\n ClientMethod(\"explain\", True),\n ClientMethod(\"field_caps\", True),\n ClientMethod(\"get\", True),\n ClientMethod(\"get_script\", False),\n ClientMethod(\"get_script_context\", False),\n ClientMethod(\"get_script_languages\", False),\n ClientMethod(\"get_source\", True),\n ClientMethod(\"index\", True),\n ClientMethod(\"info\", False),\n ClientMethod(\"mget\", True),\n ClientMethod(\"msearch\", True),\n ClientMethod(\"msearch_template\", True),\n ClientMethod(\"mtermvectors\", True),\n ClientMethod(\"open_point_in_time\", True),\n ClientMethod(\"ping\", False),\n ClientMethod(\"put_script\", False),\n ClientMethod(\"rank_eval\", True),\n ClientMethod(\"reindex\", False),\n ClientMethod(\"reindex_rethrottle\", False),\n ClientMethod(\"render_search_template\", False),\n ClientMethod(\"scripts_painless_execute\", False),\n ClientMethod(\"scroll\", False),\n ClientMethod(\"search\", True),\n ClientMethod(\"search_mvt\", True),\n ClientMethod(\"search_shards\", True),\n ClientMethod(\"search_template\", True),\n ClientMethod(\"termvectors\", True),\n ClientMethod(\"terms_enum\", True),\n ClientMethod(\"update\", True),\n ClientMethod(\"update_by_query\", True),\n ClientMethod(\"update_by_query_rethrottle\", False),\n]\n\n\nhave_patched_client = False\n\n\ndef ensure_client_instrumented():\n global have_patched_client\n\n if not have_patched_client:\n instrumented_count = 0\n for name, takes_index_argument in CLIENT_METHODS:\n try:\n method = getattr(Elasticsearch, name)\n if takes_index_argument:\n wrapped = wrap_client_index_method(method)\n else:\n wrapped = wrap_client_method(method)\n setattr(Elasticsearch, name, wrapped)\n instrumented_count += 1\n except Exception as exc:\n logger.debug(\n \"Failed to instrument elasticsearch.Elasticsearch.%s: %r\",\n name,\n exc,\n exc_info=exc,\n )\n if instrumented_count == 0:\n 
logger.warning(\n \"Failed to instrument any elasticsearch.Elasticsearch methods.\"\n \" Enable debug logs to view root causes.\"\n )\n\n have_patched_client = True\n\n\[email protected]\ndef wrap_client_index_method(wrapped, instance, args, kwargs):\n # elasticsearch-py 7.5.1 changed the order of arguments for client methods,\n # so to be safe we need to inspect the wrapped method's positional\n # arguments to see if we should pull it from there\n if \"index\" in kwargs:\n index = kwargs[\"index\"]\n else:\n unwrapped = unwrap_decorators(wrapped)\n pos_args = get_pos_args(unwrapped)\n try:\n index_index = pos_args.index(\"index\")\n except ValueError: # pragma: no cover\n # This guards against the method not accepting an 'index' argument\n # but they all do - for now\n index = \"\"\n else:\n try:\n index = args[index_index - 1] # subtract 'self'\n except IndexError:\n index = \"\"\n\n if isinstance(index, (list, tuple)):\n index = \",\".join(index)\n if index == \"\":\n index = \"Unknown\"\n index = index.title()\n\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}/{}\".format(index, camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\[email protected]\ndef wrap_client_method(wrapped, instance, args, kwargs):\n camel_name = \"\".join(c.title() for c in wrapped.__name__.split(\"_\"))\n operation = \"Elasticsearch/{}\".format(camel_name)\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(operation=operation, ignore_children=True):\n return wrapped(*args, **kwargs)\n\n\nhave_patched_transport = False\n\n\ndef ensure_transport_instrumented():\n global have_patched_transport\n\n if not have_patched_transport:\n try:\n Transport.perform_request = wrapped_perform_request(\n Transport.perform_request\n )\n except Exception as exc:\n logger.warning(\n \"Failed to instrument elasticsearch.Transport.perform_request: %r\",\n exc,\n exc_info=exc,\n )\n\n have_patched_transport = True\n\n\ndef _sanitize_name(name):\n try:\n op = name.split(\"/\")[-1]\n op = op[1:] # chop leading '_' from op\n known_names = (\n \"bench\",\n \"bulk\",\n \"count\",\n \"exists\",\n \"explain\",\n \"field_stats\",\n \"health\",\n \"mget\",\n \"mlt\",\n \"mpercolate\",\n \"msearch\",\n \"mtermvectors\",\n \"percolate\",\n \"query\",\n \"scroll\",\n \"search_shards\",\n \"source\",\n \"suggest\",\n \"template\",\n \"termvectors\",\n \"update\",\n \"search\",\n )\n if op in known_names:\n return op.title()\n return \"Unknown\"\n except Exception:\n return \"Unknown\"\n\n\[email protected]\ndef wrapped_perform_request(wrapped, instance, args, kwargs):\n try:\n op = _sanitize_name(args[1])\n except IndexError:\n op = \"Unknown\"\n\n tracked_request = TrackedRequest.instance()\n with tracked_request.span(\n operation=\"Elasticsearch/{}\".format(op),\n ignore_children=True,\n ):\n return wrapped(*args, **kwargs)\n", "path": "src/scout_apm/instruments/elasticsearch.py"}]} | 2,372 | 255 |
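
To illustrate the shape of the fix above in isolation: failures to patch an individual client method (for example `search_mvt` on an older elasticsearch-py) drop to DEBUG, and a single WARNING is emitted only when nothing could be instrumented at all. This is a self-contained sketch, not the plugin's code; `instrument_methods`, `FakeClient`, and the identity wrapper are made up for the example:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def instrument_methods(target, method_names, wrap):
    instrumented = 0
    for name in method_names:
        try:
            setattr(target, name, wrap(getattr(target, name)))
            instrumented += 1
        except Exception as exc:
            # Missing methods on older client versions are expected; keep it quiet.
            logger.debug("Failed to instrument %s.%s: %r", target.__name__, name, exc)
    if instrumented == 0:
        logger.warning(
            "Failed to instrument any %s methods."
            " Enable debug logs to view root causes.", target.__name__
        )

class FakeClient:
    def search(self, *args, **kwargs):
        return {}

# 'search' is patched; the absent 'search_mvt' only produces a DEBUG entry.
instrument_methods(FakeClient, ["search", "search_mvt"], lambda method: method)
```
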
gh_patches_debug_27699 | rasdani/github-patches | git_diff | cowrie__cowrie-638 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
csirtg plugin no longer working
I'm not sure exactly when this happened, but I just happened to check the logs today and noticed the csirtg plugin has some errors.
```
2017-11-02T17:05:41-0400 [cowrie.telnet.transport.HoneyPotTelnetFactory] New connection: 45.32.221.61:59776 (x.x.x.x:23) [session: TT0]
2017-11-02T17:05:41-0400 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.csirtg.Output object at 0x7f3a5ce9bb50>>) due to exception: [Failure instance: Traceback: <type 'exceptions.TypeError'>: string indices must be integers
/home/cowrie/cowrie/cowrie/telnet/transport.py:218:connectionMade
/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py:53:sync
/usr/local/lib/python2.7/dist-packages/twisted/python/log.py:286:msg
/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:154:publishToNewObserver
--- <exception caught here> ---
/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py:131:__call__
/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:93:__call__
/home/cowrie/cowrie/cowrie/core/output.py:190:emit
/home/cowrie/cowrie/cowrie/output/csirtg.py:82:write
]
Traceback (most recent call last):
File "/home/cowrie/cowrie/cowrie/telnet/transport.py", line 218, in connectionMade
session=self.transportId, sessionno='T'+str(sessionno))
File "/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py", line 53, in sync
return function(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/twisted/python/log.py", line 286, in msg
_publishNew(self._publishPublisher, actualEventDict, textFromEventDict)
File "/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py", line 154, in publishToNewObserver
observer(eventDict)
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py", line 131, in __call__
observer(event)
File "/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py", line 93, in __call__
self.legacyObserver(event)
File "/home/cowrie/cowrie/cowrie/core/output.py", line 190, in emit
self.write(ev)
File "/home/cowrie/cowrie/cowrie/output/csirtg.py", line 82, in write
logger.info('logged to csirtg %s ' % ret['indicator']['location'])
exceptions.TypeError: string indices must be integers
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cowrie/output/csirtg.py`
Content:
```
1 from __future__ import division, absolute_import
2
3 import cowrie.core.output
4
5 from csirtgsdk.indicator import Indicator
6 from csirtgsdk.client import Client
7 from datetime import datetime
8 import logging
9 import os
10
11 logger = logging.getLogger(__name__)
12
13 USERNAME = os.environ.get('CSIRTG_USER')
14 FEED = os.environ.get('CSIRTG_FEED')
15 TOKEN = os.environ.get('CSIRG_TOKEN')
16 DESCRIPTION = os.environ.get('CSIRTG_DESCRIPTION', 'random scanning activity')
17
18
19 class Output(cowrie.core.output.Output):
20 def __init__(self, cfg):
21 cowrie.core.output.Output.__init__(self, cfg)
22 self.user = cfg.get('output_csirtg', 'username') or USERNAME
23 self.feed = cfg.get('output_csirtg', 'feed') or FEED
24 self.token = cfg.get('output_csirtg', 'token') or TOKEN
25 try:
26 self.description = cfg.get('output_csirtg', 'description')
27 except Exception:
28 self.description = DESCRIPTION
29 self.context = {}
30 self.client = Client(token=self.token)
31
32 def start(self,):
33 pass
34
35 def stop(self):
36 pass
37
38 def write(self, e):
39 sid = e['session']
40 peerIP = e['src_ip']
41 ts = e['timestamp']
42 system = e['system']
43
44 if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:
45 logger.debug('skipping {}'.format(system))
46 return
47
48 today = str(datetime.now().date())
49
50 if not self.context.get(today):
51 logger.debug('resetting context for %s' % today)
52 self.context = {}
53 self.context[today] = set()
54
55 key = ','.join([peerIP, system])
56
57 if key in self.context[today]:
58 logger.debug('skipping {}'.format(key))
59 return
60
61 self.context[today].add(key)
62
63 tags = 'scanner,ssh'
64 port = 22
65 if e['system'] == 'cowrie.telnet.transport.HoneyPotTelnetFactory':
66 tags = 'scanner,telnet'
67 port = 23
68
69 i = {
70 'user': self.user,
71 'feed': self.feed,
72 'indicator': peerIP,
73 'portlist': port,
74 'protocol': 'tcp',
75 'tags': tags,
76 'firsttime': ts,
77 'lasttime': ts,
78 'description': self.description
79 }
80
81 ret = Indicator(self.client, i).submit()
82 logger.info('logged to csirtg %s ' % ret['indicator']['location'])
83
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cowrie/output/csirtg.py b/cowrie/output/csirtg.py
--- a/cowrie/output/csirtg.py
+++ b/cowrie/output/csirtg.py
@@ -7,8 +7,7 @@
from datetime import datetime
import logging
import os
-
-logger = logging.getLogger(__name__)
+from twisted.python import log
USERNAME = os.environ.get('CSIRTG_USER')
FEED = os.environ.get('CSIRTG_FEED')
@@ -42,20 +41,17 @@
system = e['system']
if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:
- logger.debug('skipping {}'.format(system))
return
today = str(datetime.now().date())
if not self.context.get(today):
- logger.debug('resetting context for %s' % today)
self.context = {}
self.context[today] = set()
key = ','.join([peerIP, system])
if key in self.context[today]:
- logger.debug('skipping {}'.format(key))
return
self.context[today].add(key)
@@ -79,5 +75,5 @@
}
ret = Indicator(self.client, i).submit()
- logger.info('logged to csirtg %s ' % ret['indicator']['location'])
+ log.msg('logged to csirtg %s ' % ret['location'])
| {"golden_diff": "diff --git a/cowrie/output/csirtg.py b/cowrie/output/csirtg.py\n--- a/cowrie/output/csirtg.py\n+++ b/cowrie/output/csirtg.py\n@@ -7,8 +7,7 @@\n from datetime import datetime\n import logging\n import os\n-\n-logger = logging.getLogger(__name__)\n+from twisted.python import log\n \n USERNAME = os.environ.get('CSIRTG_USER')\n FEED = os.environ.get('CSIRTG_FEED')\n@@ -42,20 +41,17 @@\n system = e['system']\n \n if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:\n- logger.debug('skipping {}'.format(system))\n return\n \n today = str(datetime.now().date())\n \n if not self.context.get(today):\n- logger.debug('resetting context for %s' % today)\n self.context = {}\n self.context[today] = set()\n \n key = ','.join([peerIP, system])\n \n if key in self.context[today]:\n- logger.debug('skipping {}'.format(key))\n return\n \n self.context[today].add(key)\n@@ -79,5 +75,5 @@\n }\n \n ret = Indicator(self.client, i).submit()\n- logger.info('logged to csirtg %s ' % ret['indicator']['location'])\n+ log.msg('logged to csirtg %s ' % ret['location'])\n", "issue": "csirtg plugin no longer working\nI'm not sure exactly when this happened, but just happend to check the logs today, and noticed the csirtg plugin has some errors.\r\n\r\n```\r\n2017-11-02T17:05:41-0400 [cowrie.telnet.transport.HoneyPotTelnetFactory] New connection: 45.32.221.61:59776 (x.x.x.x:23) [session: TT0]\r\n2017-11-02T17:05:41-0400 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.csirtg.Output object at 0x7f3a5ce9bb50>>) due to exception: [Failure instance: Traceback: <type 'exceptions.TypeError'>: string indices must be integers\r\n\t/home/cowrie/cowrie/cowrie/telnet/transport.py:218:connectionMade\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py:53:sync\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/python/log.py:286:msg\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:154:publishToNewObserver\r\n\t--- <exception caught here> ---\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py:131:__call__\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:93:__call__\r\n\t/home/cowrie/cowrie/cowrie/core/output.py:190:emit\r\n\t/home/cowrie/cowrie/cowrie/output/csirtg.py:82:write\r\n\t]\r\n\tTraceback (most recent call last):\r\n\t File \"/home/cowrie/cowrie/cowrie/telnet/transport.py\", line 218, in connectionMade\r\n\t session=self.transportId, sessionno='T'+str(sessionno))\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py\", line 53, in sync\r\n\t return function(self, *args, **kwargs)\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/python/log.py\", line 286, in msg\r\n\t _publishNew(self._publishPublisher, actualEventDict, textFromEventDict)\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py\", line 154, in publishToNewObserver\r\n\t observer(eventDict)\r\n\t--- <exception caught here> ---\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py\", line 131, in __call__\r\n\t observer(event)\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py\", line 93, in __call__\r\n\t self.legacyObserver(event)\r\n\t File \"/home/cowrie/cowrie/cowrie/core/output.py\", line 190, in emit\r\n\t self.write(ev)\r\n\t File \"/home/cowrie/cowrie/cowrie/output/csirtg.py\", line 82, in write\r\n\t 
logger.info('logged to csirtg %s ' % ret['indicator']['location'])\r\n\texceptions.TypeError: string indices must be integers\r\n```\n", "before_files": [{"content": "from __future__ import division, absolute_import\n\nimport cowrie.core.output\n\nfrom csirtgsdk.indicator import Indicator\nfrom csirtgsdk.client import Client\nfrom datetime import datetime\nimport logging\nimport os\n\nlogger = logging.getLogger(__name__)\n\nUSERNAME = os.environ.get('CSIRTG_USER')\nFEED = os.environ.get('CSIRTG_FEED')\nTOKEN = os.environ.get('CSIRG_TOKEN')\nDESCRIPTION = os.environ.get('CSIRTG_DESCRIPTION', 'random scanning activity')\n\n\nclass Output(cowrie.core.output.Output):\n def __init__(self, cfg):\n cowrie.core.output.Output.__init__(self, cfg)\n self.user = cfg.get('output_csirtg', 'username') or USERNAME\n self.feed = cfg.get('output_csirtg', 'feed') or FEED\n self.token = cfg.get('output_csirtg', 'token') or TOKEN\n try:\n self.description = cfg.get('output_csirtg', 'description')\n except Exception:\n self.description = DESCRIPTION\n self.context = {}\n self.client = Client(token=self.token)\n\n def start(self,):\n pass\n\n def stop(self):\n pass\n\n def write(self, e):\n sid = e['session']\n peerIP = e['src_ip']\n ts = e['timestamp']\n system = e['system']\n\n if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:\n logger.debug('skipping {}'.format(system))\n return\n\n today = str(datetime.now().date())\n\n if not self.context.get(today):\n logger.debug('resetting context for %s' % today)\n self.context = {}\n self.context[today] = set()\n\n key = ','.join([peerIP, system])\n\n if key in self.context[today]:\n logger.debug('skipping {}'.format(key))\n return\n\n self.context[today].add(key)\n\n tags = 'scanner,ssh'\n port = 22\n if e['system'] == 'cowrie.telnet.transport.HoneyPotTelnetFactory':\n tags = 'scanner,telnet'\n port = 23\n\n i = {\n 'user': self.user,\n 'feed': self.feed,\n 'indicator': peerIP,\n 'portlist': port,\n 'protocol': 'tcp',\n 'tags': tags,\n 'firsttime': ts,\n 'lasttime': ts,\n 'description': self.description\n }\n\n ret = Indicator(self.client, i).submit()\n logger.info('logged to csirtg %s ' % ret['indicator']['location'])\n\n", "path": "cowrie/output/csirtg.py"}], "after_files": [{"content": "from __future__ import division, absolute_import\n\nimport cowrie.core.output\n\nfrom csirtgsdk.indicator import Indicator\nfrom csirtgsdk.client import Client\nfrom datetime import datetime\nimport logging\nimport os\nfrom twisted.python import log\n\nUSERNAME = os.environ.get('CSIRTG_USER')\nFEED = os.environ.get('CSIRTG_FEED')\nTOKEN = os.environ.get('CSIRG_TOKEN')\nDESCRIPTION = os.environ.get('CSIRTG_DESCRIPTION', 'random scanning activity')\n\n\nclass Output(cowrie.core.output.Output):\n def __init__(self, cfg):\n cowrie.core.output.Output.__init__(self, cfg)\n self.user = cfg.get('output_csirtg', 'username') or USERNAME\n self.feed = cfg.get('output_csirtg', 'feed') or FEED\n self.token = cfg.get('output_csirtg', 'token') or TOKEN\n try:\n self.description = cfg.get('output_csirtg', 'description')\n except Exception:\n self.description = DESCRIPTION\n self.context = {}\n self.client = Client(token=self.token)\n\n def start(self,):\n pass\n\n def stop(self):\n pass\n\n def write(self, e):\n sid = e['session']\n peerIP = e['src_ip']\n ts = e['timestamp']\n system = e['system']\n\n if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:\n return\n\n today = 
str(datetime.now().date())\n\n if not self.context.get(today):\n self.context = {}\n self.context[today] = set()\n\n key = ','.join([peerIP, system])\n\n if key in self.context[today]:\n return\n\n self.context[today].add(key)\n\n tags = 'scanner,ssh'\n port = 22\n if e['system'] == 'cowrie.telnet.transport.HoneyPotTelnetFactory':\n tags = 'scanner,telnet'\n port = 23\n\n i = {\n 'user': self.user,\n 'feed': self.feed,\n 'indicator': peerIP,\n 'portlist': port,\n 'protocol': 'tcp',\n 'tags': tags,\n 'firsttime': ts,\n 'lasttime': ts,\n 'description': self.description\n }\n\n ret = Indicator(self.client, i).submit()\n log.msg('logged to csirtg %s ' % ret['location'])\n\n", "path": "cowrie/output/csirtg.py"}]} | 1,759 | 326 |
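
As a footnote to the record above: the traceback shows `ret['indicator']` coming back as a plain string, which is why the patched line reads `ret['location']` directly and logs through `twisted.python.log`. The snippet below is purely illustrative (the helper name and the sample response dict are invented, and the nested fallback is a defensive choice, not part of the actual patch):

```python
from twisted.python import log

def log_submission(ret):
    # The patch reads ret['location'] directly; the issue's TypeError shows that
    # ret['indicator'] can be a plain string, so only fall back to a nested dict
    # shape defensively (illustrative choice, not taken from the project).
    indicator = ret.get('indicator')
    location = indicator.get('location') if isinstance(indicator, dict) else ret.get('location')
    log.msg('logged to csirtg %s ' % location)

log_submission({'indicator': '45.32.221.61', 'location': 'https://example.test/indicator/123'})
```
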
gh_patches_debug_1409 | rasdani/github-patches | git_diff | beetbox__beets-806 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mpdstats: single or last song isn't rated and counted
The `mpdstats` plugin won't update `play_count`+`rating` for the last (or only) song in the playlist. (paging @pscn and @kljohann)
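Looking at the handler flow quoted below, a plausible cause is that `handle_song_change()` only runs from `on_play()` when the track changes, so whatever is in `self.now_playing` when playback stops is dropped without ever being counted. A minimal sketch of one possible fix, flushing the pending song in `on_stop()` (this is a fragment of the plugin class, consistent with the patch recorded at the end of this entry):

```python
def on_stop(self, status):
    log.info(u'mpdstats: stop')

    # Flush the still-pending song so the last (or only) track in the
    # playlist gets its play/skip count and rating updated.
    if self.now_playing:
        self.handle_song_change(self.now_playing)

    self.now_playing = None
```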
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/mpdstats.py`
Content:
```
1 # coding=utf-8
2 # This file is part of beets.
3 # Copyright 2013, Peter Schnebel and Johann Klähn.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 import logging
17 import mpd
18 import socket
19 import select
20 import time
21 import os
22
23 from beets import ui
24 from beets import config
25 from beets import plugins
26 from beets import library
27 from beets.util import displayable_path
28
29 log = logging.getLogger('beets')
30
31 # If we lose the connection, how many times do we want to retry and how
32 # much time should we wait between retries?
33 RETRIES = 10
34 RETRY_INTERVAL = 5
35
36
37 def is_url(path):
38 """Try to determine if the path is an URL.
39 """
40 return path.split('://', 1)[0] in ['http', 'https']
41
42
43 # Use the MPDClient internals to get unicode.
44 # see http://www.tarmack.eu/code/mpdunicode.py for the general idea
45 class MPDClient(mpd.MPDClient):
46 def _write_command(self, command, args=[]):
47 args = [unicode(arg).encode('utf-8') for arg in args]
48 super(MPDClient, self)._write_command(command, args)
49
50 def _read_line(self):
51 line = super(MPDClient, self)._read_line()
52 if line is not None:
53 return line.decode('utf-8')
54 return None
55
56
57 class MPDClientWrapper(object):
58 def __init__(self):
59 self.music_directory = (
60 config['mpdstats']['music_directory'].get(unicode))
61
62 self.client = MPDClient()
63
64 def connect(self):
65 """Connect to the MPD.
66 """
67 host = config['mpd']['host'].get(unicode)
68 port = config['mpd']['port'].get(int)
69
70 if host[0] in ['/', '~']:
71 host = os.path.expanduser(host)
72
73 log.info(u'mpdstats: connecting to {0}:{1}'.format(host, port))
74 try:
75 self.client.connect(host, port)
76 except socket.error as e:
77 raise ui.UserError('could not connect to MPD: {0}'.format(e))
78
79 password = config['mpd']['password'].get(unicode)
80 if password:
81 try:
82 self.client.password(password)
83 except mpd.CommandError as e:
84 raise ui.UserError(
85 'could not authenticate to MPD: {0}'.format(e)
86 )
87
88 def disconnect(self):
89 """Disconnect from the MPD.
90 """
91 self.client.close()
92 self.client.disconnect()
93
94 def get(self, command, retries=RETRIES):
95 """Wrapper for requests to the MPD server. Tries to re-connect if the
96 connection was lost (f.ex. during MPD's library refresh).
97 """
98 try:
99 return getattr(self.client, command)()
100 except (select.error, mpd.ConnectionError) as err:
101 log.error(u'mpdstats: {0}'.format(err))
102
103 if retries <= 0:
104 # if we exited without breaking, we couldn't reconnect in time :(
105 raise ui.UserError(u'communication with MPD server failed')
106
107 time.sleep(RETRY_INTERVAL)
108
109 try:
110 self.disconnect()
111 except mpd.ConnectionError:
112 pass
113
114 self.connect()
115 return self.get(command, retries=retries - 1)
116
117 def playlist(self):
118 """Return the currently active playlist. Prefixes paths with the
119 music_directory, to get the absolute path.
120 """
121 result = {}
122 for entry in self.get('playlistinfo'):
123 if not is_url(entry['file']):
124 result[entry['id']] = os.path.join(
125 self.music_directory, entry['file'])
126 else:
127 result[entry['id']] = entry['file']
128 return result
129
130 def status(self):
131 """Return the current status of the MPD.
132 """
133 return self.get('status')
134
135 def events(self):
136 """Return list of events. This may block a long time while waiting for
137 an answer from MPD.
138 """
139 return self.get('idle')
140
141
142 class MPDStats(object):
143 def __init__(self, lib):
144 self.lib = lib
145
146 self.do_rating = config['mpdstats']['rating'].get(bool)
147 self.rating_mix = config['mpdstats']['rating_mix'].get(float)
148 self.time_threshold = 10.0 # TODO: maybe add config option?
149
150 self.now_playing = None
151 self.mpd = MPDClientWrapper()
152
153 def rating(self, play_count, skip_count, rating, skipped):
154 """Calculate a new rating for a song based on play count, skip count,
155 old rating and the fact if it was skipped or not.
156 """
157 if skipped:
158 rolling = (rating - rating / 2.0)
159 else:
160 rolling = (rating + (1.0 - rating) / 2.0)
161 stable = (play_count + 1.0) / (play_count + skip_count + 2.0)
162 return (self.rating_mix * stable
163 + (1.0 - self.rating_mix) * rolling)
164
165 def get_item(self, path):
166 """Return the beets item related to path.
167 """
168 query = library.PathQuery('path', path)
169 item = self.lib.items(query).get()
170 if item:
171 return item
172 else:
173 log.info(u'mpdstats: item not found: {0}'.format(
174 displayable_path(path)
175 ))
176
177 @staticmethod
178 def update_item(item, attribute, value=None, increment=None):
179 """Update the beets item. Set attribute to value or increment the value
180 of attribute. If the increment argument is used the value is cast to
181 the corresponding type.
182 """
183 if item is None:
184 return
185
186 if increment is not None:
187 item.load()
188 value = type(increment)(item.get(attribute, 0)) + increment
189
190 if value is not None:
191 item[attribute] = value
192 item.store()
193
194 log.debug(u'mpdstats: updated: {0} = {1} [{2}]'.format(
195 attribute,
196 item[attribute],
197 displayable_path(item.path),
198 ))
199
200 def update_rating(self, item, skipped):
201 """Update the rating for a beets item.
202 """
203 item.load()
204 rating = self.rating(
205 int(item.get('play_count', 0)),
206 int(item.get('skip_count', 0)),
207 float(item.get('rating', 0.5)),
208 skipped)
209
210 self.update_item(item, 'rating', rating)
211
212 def handle_song_change(self, song):
213 """Determine if a song was skipped or not and update its attributes.
214 To this end the difference between the song's supposed end time
215 and the current time is calculated. If it's greater than a threshold,
216 the song is considered skipped.
217 """
218 diff = abs(song['remaining'] - (time.time() - song['started']))
219
220 skipped = diff >= self.time_threshold
221
222 if skipped:
223 self.handle_skipped(song)
224 else:
225 self.handle_played(song)
226
227 if self.do_rating:
228 self.update_rating(song['beets_item'], skipped)
229
230 def handle_played(self, song):
231 """Updates the play count of a song.
232 """
233 self.update_item(song['beets_item'], 'play_count', increment=1)
234 log.info(u'mpdstats: played {0}'.format(
235 displayable_path(song['path'])
236 ))
237
238 def handle_skipped(self, song):
239 """Updates the skip count of a song.
240 """
241 self.update_item(song['beets_item'], 'skip_count', increment=1)
242 log.info(u'mpdstats: skipped {0}'.format(
243 displayable_path(song['path'])
244 ))
245
246 def on_stop(self, status):
247 log.info(u'mpdstats: stop')
248 self.now_playing = None
249
250 def on_pause(self, status):
251 log.info(u'mpdstats: pause')
252 self.now_playing = None
253
254 def on_play(self, status):
255 playlist = self.mpd.playlist()
256 path = playlist.get(status['songid'])
257
258 if not path:
259 return
260
261 if is_url(path):
262 log.info(u'mpdstats: playing stream {0}'.format(
263 displayable_path(path)
264 ))
265 return
266
267 played, duration = map(int, status['time'].split(':', 1))
268 remaining = duration - played
269
270 if self.now_playing and self.now_playing['path'] != path:
271 self.handle_song_change(self.now_playing)
272
273 log.info(u'mpdstats: playing {0}'.format(
274 displayable_path(path)
275 ))
276
277 self.now_playing = {
278 'started': time.time(),
279 'remaining': remaining,
280 'path': path,
281 'beets_item': self.get_item(path),
282 }
283
284 self.update_item(self.now_playing['beets_item'],
285 'last_played', value=int(time.time()))
286
287 def run(self):
288 self.mpd.connect()
289 events = ['player']
290
291 while True:
292 if 'player' in events:
293 status = self.mpd.status()
294
295 handler = getattr(self, 'on_' + status['state'], None)
296
297 if handler:
298 handler(status)
299 else:
300 log.debug(u'mpdstats: unhandled status "{0}"'.
301 format(status))
302
303 events = self.mpd.events()
304
305
306 class MPDStatsPlugin(plugins.BeetsPlugin):
307 def __init__(self):
308 super(MPDStatsPlugin, self).__init__()
309 self.config.add({
310 'music_directory': config['directory'].as_filename(),
311 'rating': True,
312 'rating_mix': 0.75,
313 })
314 config['mpd'].add({
315 'host': u'localhost',
316 'port': 6600,
317 'password': u'',
318 })
319
320 def commands(self):
321 cmd = ui.Subcommand(
322 'mpdstats',
323 help='run a MPD client to gather play statistics')
324 cmd.parser.add_option(
325 '--host', dest='host', type='string',
326 help='set the hostname of the server to connect to')
327 cmd.parser.add_option(
328 '--port', dest='port', type='int',
329 help='set the port of the MPD server to connect to')
330 cmd.parser.add_option(
331 '--password', dest='password', type='string',
332 help='set the password of the MPD server to connect to')
333
334 def func(lib, opts, args):
335 self.config.set_args(opts)
336
337 # Overrides for MPD settings.
338 if opts.host:
339 config['mpd']['host'] = opts.host.decode('utf8')
340 if opts.port:
341 config['mpd']['host'] = int(opts.port)
342 if opts.password:
343 config['mpd']['password'] = opts.password.decode('utf8')
344
345 try:
346 MPDStats(lib).run()
347 except KeyboardInterrupt:
348 pass
349
350 cmd.func = func
351 return [cmd]
352
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/beetsplug/mpdstats.py b/beetsplug/mpdstats.py
--- a/beetsplug/mpdstats.py
+++ b/beetsplug/mpdstats.py
@@ -245,6 +245,10 @@
def on_stop(self, status):
log.info(u'mpdstats: stop')
+
+ if self.now_playing:
+ self.handle_song_change(self.now_playing)
+
self.now_playing = None
def on_pause(self, status):
| {"golden_diff": "diff --git a/beetsplug/mpdstats.py b/beetsplug/mpdstats.py\n--- a/beetsplug/mpdstats.py\n+++ b/beetsplug/mpdstats.py\n@@ -245,6 +245,10 @@\n \n def on_stop(self, status):\n log.info(u'mpdstats: stop')\n+\n+ if self.now_playing:\n+ self.handle_song_change(self.now_playing)\n+\n self.now_playing = None\n \n def on_pause(self, status):\n", "issue": "mpdstats: single or last song isn't rated and counted\nThe `mpdstats` plugin won't update `play_count`+`rating` for the last (or only) song in the playlist. (paging @pscn and @kljohann)\n\nmpdstats: single or last song isn't rated and counted\nThe `mpdstats` plugin won't update `play_count`+`rating` for the last (or only) song in the playlist. (paging @pscn and @kljohann)\n\n", "before_files": [{"content": "# coding=utf-8\n# This file is part of beets.\n# Copyright 2013, Peter Schnebel and Johann Kl\u00e4hn.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\nimport logging\nimport mpd\nimport socket\nimport select\nimport time\nimport os\n\nfrom beets import ui\nfrom beets import config\nfrom beets import plugins\nfrom beets import library\nfrom beets.util import displayable_path\n\nlog = logging.getLogger('beets')\n\n# If we lose the connection, how many times do we want to retry and how\n# much time should we wait between retries?\nRETRIES = 10\nRETRY_INTERVAL = 5\n\n\ndef is_url(path):\n \"\"\"Try to determine if the path is an URL.\n \"\"\"\n return path.split('://', 1)[0] in ['http', 'https']\n\n\n# Use the MPDClient internals to get unicode.\n# see http://www.tarmack.eu/code/mpdunicode.py for the general idea\nclass MPDClient(mpd.MPDClient):\n def _write_command(self, command, args=[]):\n args = [unicode(arg).encode('utf-8') for arg in args]\n super(MPDClient, self)._write_command(command, args)\n\n def _read_line(self):\n line = super(MPDClient, self)._read_line()\n if line is not None:\n return line.decode('utf-8')\n return None\n\n\nclass MPDClientWrapper(object):\n def __init__(self):\n self.music_directory = (\n config['mpdstats']['music_directory'].get(unicode))\n\n self.client = MPDClient()\n\n def connect(self):\n \"\"\"Connect to the MPD.\n \"\"\"\n host = config['mpd']['host'].get(unicode)\n port = config['mpd']['port'].get(int)\n\n if host[0] in ['/', '~']:\n host = os.path.expanduser(host)\n\n log.info(u'mpdstats: connecting to {0}:{1}'.format(host, port))\n try:\n self.client.connect(host, port)\n except socket.error as e:\n raise ui.UserError('could not connect to MPD: {0}'.format(e))\n\n password = config['mpd']['password'].get(unicode)\n if password:\n try:\n self.client.password(password)\n except mpd.CommandError as e:\n raise ui.UserError(\n 'could not authenticate to MPD: {0}'.format(e)\n )\n\n def disconnect(self):\n \"\"\"Disconnect from the MPD.\n \"\"\"\n self.client.close()\n self.client.disconnect()\n\n def get(self, command, retries=RETRIES):\n \"\"\"Wrapper for requests to the MPD server. Tries to re-connect if the\n connection was lost (f.ex. 
during MPD's library refresh).\n \"\"\"\n try:\n return getattr(self.client, command)()\n except (select.error, mpd.ConnectionError) as err:\n log.error(u'mpdstats: {0}'.format(err))\n\n if retries <= 0:\n # if we exited without breaking, we couldn't reconnect in time :(\n raise ui.UserError(u'communication with MPD server failed')\n\n time.sleep(RETRY_INTERVAL)\n\n try:\n self.disconnect()\n except mpd.ConnectionError:\n pass\n\n self.connect()\n return self.get(command, retries=retries - 1)\n\n def playlist(self):\n \"\"\"Return the currently active playlist. Prefixes paths with the\n music_directory, to get the absolute path.\n \"\"\"\n result = {}\n for entry in self.get('playlistinfo'):\n if not is_url(entry['file']):\n result[entry['id']] = os.path.join(\n self.music_directory, entry['file'])\n else:\n result[entry['id']] = entry['file']\n return result\n\n def status(self):\n \"\"\"Return the current status of the MPD.\n \"\"\"\n return self.get('status')\n\n def events(self):\n \"\"\"Return list of events. This may block a long time while waiting for\n an answer from MPD.\n \"\"\"\n return self.get('idle')\n\n\nclass MPDStats(object):\n def __init__(self, lib):\n self.lib = lib\n\n self.do_rating = config['mpdstats']['rating'].get(bool)\n self.rating_mix = config['mpdstats']['rating_mix'].get(float)\n self.time_threshold = 10.0 # TODO: maybe add config option?\n\n self.now_playing = None\n self.mpd = MPDClientWrapper()\n\n def rating(self, play_count, skip_count, rating, skipped):\n \"\"\"Calculate a new rating for a song based on play count, skip count,\n old rating and the fact if it was skipped or not.\n \"\"\"\n if skipped:\n rolling = (rating - rating / 2.0)\n else:\n rolling = (rating + (1.0 - rating) / 2.0)\n stable = (play_count + 1.0) / (play_count + skip_count + 2.0)\n return (self.rating_mix * stable\n + (1.0 - self.rating_mix) * rolling)\n\n def get_item(self, path):\n \"\"\"Return the beets item related to path.\n \"\"\"\n query = library.PathQuery('path', path)\n item = self.lib.items(query).get()\n if item:\n return item\n else:\n log.info(u'mpdstats: item not found: {0}'.format(\n displayable_path(path)\n ))\n\n @staticmethod\n def update_item(item, attribute, value=None, increment=None):\n \"\"\"Update the beets item. Set attribute to value or increment the value\n of attribute. If the increment argument is used the value is cast to\n the corresponding type.\n \"\"\"\n if item is None:\n return\n\n if increment is not None:\n item.load()\n value = type(increment)(item.get(attribute, 0)) + increment\n\n if value is not None:\n item[attribute] = value\n item.store()\n\n log.debug(u'mpdstats: updated: {0} = {1} [{2}]'.format(\n attribute,\n item[attribute],\n displayable_path(item.path),\n ))\n\n def update_rating(self, item, skipped):\n \"\"\"Update the rating for a beets item.\n \"\"\"\n item.load()\n rating = self.rating(\n int(item.get('play_count', 0)),\n int(item.get('skip_count', 0)),\n float(item.get('rating', 0.5)),\n skipped)\n\n self.update_item(item, 'rating', rating)\n\n def handle_song_change(self, song):\n \"\"\"Determine if a song was skipped or not and update its attributes.\n To this end the difference between the song's supposed end time\n and the current time is calculated. 
If it's greater than a threshold,\n the song is considered skipped.\n \"\"\"\n diff = abs(song['remaining'] - (time.time() - song['started']))\n\n skipped = diff >= self.time_threshold\n\n if skipped:\n self.handle_skipped(song)\n else:\n self.handle_played(song)\n\n if self.do_rating:\n self.update_rating(song['beets_item'], skipped)\n\n def handle_played(self, song):\n \"\"\"Updates the play count of a song.\n \"\"\"\n self.update_item(song['beets_item'], 'play_count', increment=1)\n log.info(u'mpdstats: played {0}'.format(\n displayable_path(song['path'])\n ))\n\n def handle_skipped(self, song):\n \"\"\"Updates the skip count of a song.\n \"\"\"\n self.update_item(song['beets_item'], 'skip_count', increment=1)\n log.info(u'mpdstats: skipped {0}'.format(\n displayable_path(song['path'])\n ))\n\n def on_stop(self, status):\n log.info(u'mpdstats: stop')\n self.now_playing = None\n\n def on_pause(self, status):\n log.info(u'mpdstats: pause')\n self.now_playing = None\n\n def on_play(self, status):\n playlist = self.mpd.playlist()\n path = playlist.get(status['songid'])\n\n if not path:\n return\n\n if is_url(path):\n log.info(u'mpdstats: playing stream {0}'.format(\n displayable_path(path)\n ))\n return\n\n played, duration = map(int, status['time'].split(':', 1))\n remaining = duration - played\n\n if self.now_playing and self.now_playing['path'] != path:\n self.handle_song_change(self.now_playing)\n\n log.info(u'mpdstats: playing {0}'.format(\n displayable_path(path)\n ))\n\n self.now_playing = {\n 'started': time.time(),\n 'remaining': remaining,\n 'path': path,\n 'beets_item': self.get_item(path),\n }\n\n self.update_item(self.now_playing['beets_item'],\n 'last_played', value=int(time.time()))\n\n def run(self):\n self.mpd.connect()\n events = ['player']\n\n while True:\n if 'player' in events:\n status = self.mpd.status()\n\n handler = getattr(self, 'on_' + status['state'], None)\n\n if handler:\n handler(status)\n else:\n log.debug(u'mpdstats: unhandled status \"{0}\"'.\n format(status))\n\n events = self.mpd.events()\n\n\nclass MPDStatsPlugin(plugins.BeetsPlugin):\n def __init__(self):\n super(MPDStatsPlugin, self).__init__()\n self.config.add({\n 'music_directory': config['directory'].as_filename(),\n 'rating': True,\n 'rating_mix': 0.75,\n })\n config['mpd'].add({\n 'host': u'localhost',\n 'port': 6600,\n 'password': u'',\n })\n\n def commands(self):\n cmd = ui.Subcommand(\n 'mpdstats',\n help='run a MPD client to gather play statistics')\n cmd.parser.add_option(\n '--host', dest='host', type='string',\n help='set the hostname of the server to connect to')\n cmd.parser.add_option(\n '--port', dest='port', type='int',\n help='set the port of the MPD server to connect to')\n cmd.parser.add_option(\n '--password', dest='password', type='string',\n help='set the password of the MPD server to connect to')\n\n def func(lib, opts, args):\n self.config.set_args(opts)\n\n # Overrides for MPD settings.\n if opts.host:\n config['mpd']['host'] = opts.host.decode('utf8')\n if opts.port:\n config['mpd']['host'] = int(opts.port)\n if opts.password:\n config['mpd']['password'] = opts.password.decode('utf8')\n\n try:\n MPDStats(lib).run()\n except KeyboardInterrupt:\n pass\n\n cmd.func = func\n return [cmd]\n", "path": "beetsplug/mpdstats.py"}], "after_files": [{"content": "# coding=utf-8\n# This file is part of beets.\n# Copyright 2013, Peter Schnebel and Johann Kl\u00e4hn.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated 
documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\nimport logging\nimport mpd\nimport socket\nimport select\nimport time\nimport os\n\nfrom beets import ui\nfrom beets import config\nfrom beets import plugins\nfrom beets import library\nfrom beets.util import displayable_path\n\nlog = logging.getLogger('beets')\n\n# If we lose the connection, how many times do we want to retry and how\n# much time should we wait between retries?\nRETRIES = 10\nRETRY_INTERVAL = 5\n\n\ndef is_url(path):\n \"\"\"Try to determine if the path is an URL.\n \"\"\"\n return path.split('://', 1)[0] in ['http', 'https']\n\n\n# Use the MPDClient internals to get unicode.\n# see http://www.tarmack.eu/code/mpdunicode.py for the general idea\nclass MPDClient(mpd.MPDClient):\n def _write_command(self, command, args=[]):\n args = [unicode(arg).encode('utf-8') for arg in args]\n super(MPDClient, self)._write_command(command, args)\n\n def _read_line(self):\n line = super(MPDClient, self)._read_line()\n if line is not None:\n return line.decode('utf-8')\n return None\n\n\nclass MPDClientWrapper(object):\n def __init__(self):\n self.music_directory = (\n config['mpdstats']['music_directory'].get(unicode))\n\n self.client = MPDClient()\n\n def connect(self):\n \"\"\"Connect to the MPD.\n \"\"\"\n host = config['mpd']['host'].get(unicode)\n port = config['mpd']['port'].get(int)\n\n if host[0] in ['/', '~']:\n host = os.path.expanduser(host)\n\n log.info(u'mpdstats: connecting to {0}:{1}'.format(host, port))\n try:\n self.client.connect(host, port)\n except socket.error as e:\n raise ui.UserError('could not connect to MPD: {0}'.format(e))\n\n password = config['mpd']['password'].get(unicode)\n if password:\n try:\n self.client.password(password)\n except mpd.CommandError as e:\n raise ui.UserError(\n 'could not authenticate to MPD: {0}'.format(e)\n )\n\n def disconnect(self):\n \"\"\"Disconnect from the MPD.\n \"\"\"\n self.client.close()\n self.client.disconnect()\n\n def get(self, command, retries=RETRIES):\n \"\"\"Wrapper for requests to the MPD server. Tries to re-connect if the\n connection was lost (f.ex. during MPD's library refresh).\n \"\"\"\n try:\n return getattr(self.client, command)()\n except (select.error, mpd.ConnectionError) as err:\n log.error(u'mpdstats: {0}'.format(err))\n\n if retries <= 0:\n # if we exited without breaking, we couldn't reconnect in time :(\n raise ui.UserError(u'communication with MPD server failed')\n\n time.sleep(RETRY_INTERVAL)\n\n try:\n self.disconnect()\n except mpd.ConnectionError:\n pass\n\n self.connect()\n return self.get(command, retries=retries - 1)\n\n def playlist(self):\n \"\"\"Return the currently active playlist. 
Prefixes paths with the\n music_directory, to get the absolute path.\n \"\"\"\n result = {}\n for entry in self.get('playlistinfo'):\n if not is_url(entry['file']):\n result[entry['id']] = os.path.join(\n self.music_directory, entry['file'])\n else:\n result[entry['id']] = entry['file']\n return result\n\n def status(self):\n \"\"\"Return the current status of the MPD.\n \"\"\"\n return self.get('status')\n\n def events(self):\n \"\"\"Return list of events. This may block a long time while waiting for\n an answer from MPD.\n \"\"\"\n return self.get('idle')\n\n\nclass MPDStats(object):\n def __init__(self, lib):\n self.lib = lib\n\n self.do_rating = config['mpdstats']['rating'].get(bool)\n self.rating_mix = config['mpdstats']['rating_mix'].get(float)\n self.time_threshold = 10.0 # TODO: maybe add config option?\n\n self.now_playing = None\n self.mpd = MPDClientWrapper()\n\n def rating(self, play_count, skip_count, rating, skipped):\n \"\"\"Calculate a new rating for a song based on play count, skip count,\n old rating and the fact if it was skipped or not.\n \"\"\"\n if skipped:\n rolling = (rating - rating / 2.0)\n else:\n rolling = (rating + (1.0 - rating) / 2.0)\n stable = (play_count + 1.0) / (play_count + skip_count + 2.0)\n return (self.rating_mix * stable\n + (1.0 - self.rating_mix) * rolling)\n\n def get_item(self, path):\n \"\"\"Return the beets item related to path.\n \"\"\"\n query = library.PathQuery('path', path)\n item = self.lib.items(query).get()\n if item:\n return item\n else:\n log.info(u'mpdstats: item not found: {0}'.format(\n displayable_path(path)\n ))\n\n @staticmethod\n def update_item(item, attribute, value=None, increment=None):\n \"\"\"Update the beets item. Set attribute to value or increment the value\n of attribute. If the increment argument is used the value is cast to\n the corresponding type.\n \"\"\"\n if item is None:\n return\n\n if increment is not None:\n item.load()\n value = type(increment)(item.get(attribute, 0)) + increment\n\n if value is not None:\n item[attribute] = value\n item.store()\n\n log.debug(u'mpdstats: updated: {0} = {1} [{2}]'.format(\n attribute,\n item[attribute],\n displayable_path(item.path),\n ))\n\n def update_rating(self, item, skipped):\n \"\"\"Update the rating for a beets item.\n \"\"\"\n item.load()\n rating = self.rating(\n int(item.get('play_count', 0)),\n int(item.get('skip_count', 0)),\n float(item.get('rating', 0.5)),\n skipped)\n\n self.update_item(item, 'rating', rating)\n\n def handle_song_change(self, song):\n \"\"\"Determine if a song was skipped or not and update its attributes.\n To this end the difference between the song's supposed end time\n and the current time is calculated. 
If it's greater than a threshold,\n the song is considered skipped.\n \"\"\"\n diff = abs(song['remaining'] - (time.time() - song['started']))\n\n skipped = diff >= self.time_threshold\n\n if skipped:\n self.handle_skipped(song)\n else:\n self.handle_played(song)\n\n if self.do_rating:\n self.update_rating(song['beets_item'], skipped)\n\n def handle_played(self, song):\n \"\"\"Updates the play count of a song.\n \"\"\"\n self.update_item(song['beets_item'], 'play_count', increment=1)\n log.info(u'mpdstats: played {0}'.format(\n displayable_path(song['path'])\n ))\n\n def handle_skipped(self, song):\n \"\"\"Updates the skip count of a song.\n \"\"\"\n self.update_item(song['beets_item'], 'skip_count', increment=1)\n log.info(u'mpdstats: skipped {0}'.format(\n displayable_path(song['path'])\n ))\n\n def on_stop(self, status):\n log.info(u'mpdstats: stop')\n\n if self.now_playing:\n self.handle_song_change(self.now_playing)\n\n self.now_playing = None\n\n def on_pause(self, status):\n log.info(u'mpdstats: pause')\n self.now_playing = None\n\n def on_play(self, status):\n playlist = self.mpd.playlist()\n path = playlist.get(status['songid'])\n\n if not path:\n return\n\n if is_url(path):\n log.info(u'mpdstats: playing stream {0}'.format(\n displayable_path(path)\n ))\n return\n\n played, duration = map(int, status['time'].split(':', 1))\n remaining = duration - played\n\n if self.now_playing and self.now_playing['path'] != path:\n self.handle_song_change(self.now_playing)\n\n log.info(u'mpdstats: playing {0}'.format(\n displayable_path(path)\n ))\n\n self.now_playing = {\n 'started': time.time(),\n 'remaining': remaining,\n 'path': path,\n 'beets_item': self.get_item(path),\n }\n\n self.update_item(self.now_playing['beets_item'],\n 'last_played', value=int(time.time()))\n\n def run(self):\n self.mpd.connect()\n events = ['player']\n\n while True:\n if 'player' in events:\n status = self.mpd.status()\n\n handler = getattr(self, 'on_' + status['state'], None)\n\n if handler:\n handler(status)\n else:\n log.debug(u'mpdstats: unhandled status \"{0}\"'.\n format(status))\n\n events = self.mpd.events()\n\n\nclass MPDStatsPlugin(plugins.BeetsPlugin):\n def __init__(self):\n super(MPDStatsPlugin, self).__init__()\n self.config.add({\n 'music_directory': config['directory'].as_filename(),\n 'rating': True,\n 'rating_mix': 0.75,\n })\n config['mpd'].add({\n 'host': u'localhost',\n 'port': 6600,\n 'password': u'',\n })\n\n def commands(self):\n cmd = ui.Subcommand(\n 'mpdstats',\n help='run a MPD client to gather play statistics')\n cmd.parser.add_option(\n '--host', dest='host', type='string',\n help='set the hostname of the server to connect to')\n cmd.parser.add_option(\n '--port', dest='port', type='int',\n help='set the port of the MPD server to connect to')\n cmd.parser.add_option(\n '--password', dest='password', type='string',\n help='set the password of the MPD server to connect to')\n\n def func(lib, opts, args):\n self.config.set_args(opts)\n\n # Overrides for MPD settings.\n if opts.host:\n config['mpd']['host'] = opts.host.decode('utf8')\n if opts.port:\n config['mpd']['host'] = int(opts.port)\n if opts.password:\n config['mpd']['password'] = opts.password.decode('utf8')\n\n try:\n MPDStats(lib).run()\n except KeyboardInterrupt:\n pass\n\n cmd.func = func\n return [cmd]\n", "path": "beetsplug/mpdstats.py"}]} | 3,852 | 108 |
gh_patches_debug_26662 | rasdani/github-patches | git_diff | chainer__chainer-903 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stream object should have .ptr set to 0, not None.
The event object expects the stream.ptr to be an integer (size_t) here:
https://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L56
https://github.com/pfnet/chainer/blob/master/cupy/cuda/runtime.pyx#L309
In trunk at the moment, recording events with default stream fails via:
```
Traceback (most recent call last):
  File "train_imagenet.py", line 85, in <module>
    train_loop()
  File "train_imagenet.py", line 67, in train_loop
    start.record()
  File "/home/awesomebox/anaconda/lib/python2.7/site-packages/chainer-1.5.1-py2.7-linux-x86_64.egg/cupy/cuda/stream.py", line 56, in record
    runtime.eventRecord(self.ptr, stream.ptr)
  File "cupy/cuda/runtime.pyx", line 309, in cupy.cuda.runtime.eventRecord (cupy/cuda/runtime.cpp:6139)
TypeError: an integer is required
```
The fix seems simple:
https://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L103
`self.ptr = 0`
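A minimal sketch of the failure path (assumes a CUDA-capable machine with this cupy build installed; names follow the `stream` module quoted below):

```python
from cupy.cuda import stream

start = stream.Event()
# record() falls back to Stream(null=True), whose ptr is None, and then calls
# runtime.eventRecord(self.ptr, stream.ptr); the None second argument is what
# triggers "TypeError: an integer is required".
start.record()
```

With `Stream.ptr` defaulting to `0` instead of `None`, the null stream is passed to the runtime as the integer handle the CUDA API expects.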
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/cuda/stream.py`
Content:
```
1 from cupy.cuda import runtime
2
3
4 class Event(object):
5
6 """CUDA event, a synchronization point of CUDA streams.
7
8 This class handles the CUDA event handle in RAII way, i.e., when an Event
9 instance is destroyed by the GC, its handle is also destroyed.
10
11 Args:
12 block (bool): If True, the event blocks on the
13 :meth:`~cupy.cuda.Event.synchronize` method.
14 disable_timing (bool): If True, the event does not prepare the timing
15 data.
16 interprocess (bool): If True, the event can be passed to other
17 processes.
18
19 Attributes:
20 ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to
21 the CUDA Runtime API via ctypes.
22
23 """
24 def __init__(self, block=False, disable_timing=False, interprocess=False):
25 self.ptr = None
26
27 if interprocess and not disable_timing:
28 raise ValueError('Timing must be disabled for interprocess events')
29 flag = ((block and runtime.eventBlockingSync) |
30 (disable_timing and runtime.eventDisableTiming) |
31 (interprocess and runtime.eventInterprocess))
32 self.ptr = runtime.eventCreateWithFlags(flag)
33
34 def __del__(self):
35 if self.ptr:
36 runtime.eventDestroy(self.ptr)
37 self.ptr = None
38
39 @property
40 def done(self):
41 """True if the event is done."""
42 return bool(runtime.eventQuery(self.ptr))
43
44 def record(self, stream=None):
45 """Records the event to a stream.
46
47 Args:
48 stream (cupy.cuda.Stream): CUDA stream to record event. The null
49 stream is used by default.
50
51 .. seealso:: :meth:`cupy.cuda.Stream.record`
52
53 """
54 if stream is None:
55 stream = Stream(null=True)
56 runtime.eventRecord(self.ptr, stream.ptr)
57
58 def synchronize(self):
59 """Synchronizes all device work to the event.
60
61 If the event is created as a blocking event, it also blocks the CPU
62 thread until the event is done.
63
64 """
65 runtime.eventSynchronize(self.ptr)
66
67
68 def get_elapsed_time(start_event, end_event):
69 """Gets the elapsed time between two events.
70
71 Args:
72 start_event (Event): Earlier event.
73 end_event (Event): Later event.
74
75 Returns:
76 float: Elapsed time in milliseconds.
77
78 """
79 return runtime.eventElapsedTime(start_event.ptr, end_event.ptr)
80
81
82 class Stream(object):
83
84 """CUDA stream.
85
86 This class handles the CUDA stream handle in RAII way, i.e., when an Stream
87 instance is destroyed by the GC, its handle is also destroyed.
88
89 Args:
90 null (bool): If True, the stream is a null stream (i.e. the default
91 stream that synchronizes with all streams). Otherwise, a plain new
92 stream is created.
93 non_blocking (bool): If True, the stream does not synchronize with the
94 NULL stream.
95
96 Attributes:
97 ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to
98 the CUDA Runtime API via ctypes.
99
100 """
101 def __init__(self, null=False, non_blocking=False):
102 if null:
103 self.ptr = None
104 elif non_blocking:
105 self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)
106 else:
107 self.ptr = runtime.streamCreate()
108
109 def __del__(self):
110 if self.ptr:
111 runtime.streamDestroy(self.ptr)
112 self.ptr = None
113
114 @property
115 def done(self):
116 """True if all work on this stream has been done."""
117 return bool(runtime.streamQuery(self.ptr))
118
119 def synchronize(self):
120 """Waits for the stream completing all queued work."""
121 runtime.streamSynchronize(self.ptr)
122
123 def add_callback(self, callback, arg):
124 """Adds a callback that is called when all queued work is done.
125
126 Args:
127 callback (function): Callback function. It must take three
128 arguments (Stream object, int error status, and user data
129 object), and returns nothing.
130 arg (object): Argument to the callback.
131
132 """
133 runtime.streamAddCallback(self.ptr, callback, arg)
134
135 def record(self, event=None):
136 """Records an event on the stream.
137
138 Args:
139 event (None or cupy.cuda.Event): CUDA event. If None, then a new
140 plain event is created and used.
141
142 Returns:
143 cupy.cuda.Event: The recorded event.
144
145 .. seealso:: :meth:`cupy.cuda.Event.record`
146
147 """
148 if event is None:
149 event = Event()
150 runtime.eventRecord(event.ptr, self.ptr)
151 return event
152
153 def wait_event(self, event):
154 """Makes the stream wait for an event.
155
156 The future work on this stream will be done after the event.
157
158 Args:
159 event (cupy.cuda.Event): CUDA event.
160
161 """
162 runtime.streamWaitEvent(self.ptr, event)
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/cuda/stream.py b/cupy/cuda/stream.py
--- a/cupy/cuda/stream.py
+++ b/cupy/cuda/stream.py
@@ -22,7 +22,7 @@
"""
def __init__(self, block=False, disable_timing=False, interprocess=False):
- self.ptr = None
+ self.ptr = 0
if interprocess and not disable_timing:
raise ValueError('Timing must be disabled for interprocess events')
@@ -34,7 +34,6 @@
def __del__(self):
if self.ptr:
runtime.eventDestroy(self.ptr)
- self.ptr = None
@property
def done(self):
@@ -100,7 +99,7 @@
"""
def __init__(self, null=False, non_blocking=False):
if null:
- self.ptr = None
+ self.ptr = 0
elif non_blocking:
self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)
else:
@@ -109,7 +108,6 @@
def __del__(self):
if self.ptr:
runtime.streamDestroy(self.ptr)
- self.ptr = None
@property
def done(self):
| {"golden_diff": "diff --git a/cupy/cuda/stream.py b/cupy/cuda/stream.py\n--- a/cupy/cuda/stream.py\n+++ b/cupy/cuda/stream.py\n@@ -22,7 +22,7 @@\n \n \"\"\"\n def __init__(self, block=False, disable_timing=False, interprocess=False):\n- self.ptr = None\n+ self.ptr = 0\n \n if interprocess and not disable_timing:\n raise ValueError('Timing must be disabled for interprocess events')\n@@ -34,7 +34,6 @@\n def __del__(self):\n if self.ptr:\n runtime.eventDestroy(self.ptr)\n- self.ptr = None\n \n @property\n def done(self):\n@@ -100,7 +99,7 @@\n \"\"\"\n def __init__(self, null=False, non_blocking=False):\n if null:\n- self.ptr = None\n+ self.ptr = 0\n elif non_blocking:\n self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)\n else:\n@@ -109,7 +108,6 @@\n def __del__(self):\n if self.ptr:\n runtime.streamDestroy(self.ptr)\n- self.ptr = None\n \n @property\n def done(self):\n", "issue": "Stream object should have .ptr set to 0, not None.\nThe event object expects the stream.ptr to be an integer (size_t) here:\nhttps://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L56\nhttps://github.com/pfnet/chainer/blob/master/cupy/cuda/runtime.pyx#L309\n\nIn trunk at the moment, recording events with default stream fails via:\nTraceback (most recent call last):\n File \"train_imagenet.py\", line 85, in <module>\n train_loop()\n File \"train_imagenet.py\", line 67, in train_loop\n start.record()\n File \"/home/awesomebox/anaconda/lib/python2.7/site-packages/chainer-1.5.1-py2.7-linux-x86_64.egg/cupy/cuda/stream.py\", line 56, in record\n runtime.eventRecord(self.ptr, stream.ptr)\n File \"cupy/cuda/runtime.pyx\", line 309, in cupy.cuda.runtime.eventRecord (cupy/cuda/runtime.cpp:6139)\nTypeError: an integer is required\n\nThe fix seems simple:\n\nhttps://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L103\nself.ptr = 0\n\n", "before_files": [{"content": "from cupy.cuda import runtime\n\n\nclass Event(object):\n\n \"\"\"CUDA event, a synchronization point of CUDA streams.\n\n This class handles the CUDA event handle in RAII way, i.e., when an Event\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n block (bool): If True, the event blocks on the\n :meth:`~cupy.cuda.Event.synchronize` method.\n disable_timing (bool): If True, the event does not prepare the timing\n data.\n interprocess (bool): If True, the event can be passed to other\n processes.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, block=False, disable_timing=False, interprocess=False):\n self.ptr = None\n\n if interprocess and not disable_timing:\n raise ValueError('Timing must be disabled for interprocess events')\n flag = ((block and runtime.eventBlockingSync) |\n (disable_timing and runtime.eventDisableTiming) |\n (interprocess and runtime.eventInterprocess))\n self.ptr = runtime.eventCreateWithFlags(flag)\n\n def __del__(self):\n if self.ptr:\n runtime.eventDestroy(self.ptr)\n self.ptr = None\n\n @property\n def done(self):\n \"\"\"True if the event is done.\"\"\"\n return bool(runtime.eventQuery(self.ptr))\n\n def record(self, stream=None):\n \"\"\"Records the event to a stream.\n\n Args:\n stream (cupy.cuda.Stream): CUDA stream to record event. The null\n stream is used by default.\n\n .. 
seealso:: :meth:`cupy.cuda.Stream.record`\n\n \"\"\"\n if stream is None:\n stream = Stream(null=True)\n runtime.eventRecord(self.ptr, stream.ptr)\n\n def synchronize(self):\n \"\"\"Synchronizes all device work to the event.\n\n If the event is created as a blocking event, it also blocks the CPU\n thread until the event is done.\n\n \"\"\"\n runtime.eventSynchronize(self.ptr)\n\n\ndef get_elapsed_time(start_event, end_event):\n \"\"\"Gets the elapsed time between two events.\n\n Args:\n start_event (Event): Earlier event.\n end_event (Event): Later event.\n\n Returns:\n float: Elapsed time in milliseconds.\n\n \"\"\"\n return runtime.eventElapsedTime(start_event.ptr, end_event.ptr)\n\n\nclass Stream(object):\n\n \"\"\"CUDA stream.\n\n This class handles the CUDA stream handle in RAII way, i.e., when an Stream\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n null (bool): If True, the stream is a null stream (i.e. the default\n stream that synchronizes with all streams). Otherwise, a plain new\n stream is created.\n non_blocking (bool): If True, the stream does not synchronize with the\n NULL stream.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, null=False, non_blocking=False):\n if null:\n self.ptr = None\n elif non_blocking:\n self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)\n else:\n self.ptr = runtime.streamCreate()\n\n def __del__(self):\n if self.ptr:\n runtime.streamDestroy(self.ptr)\n self.ptr = None\n\n @property\n def done(self):\n \"\"\"True if all work on this stream has been done.\"\"\"\n return bool(runtime.streamQuery(self.ptr))\n\n def synchronize(self):\n \"\"\"Waits for the stream completing all queued work.\"\"\"\n runtime.streamSynchronize(self.ptr)\n\n def add_callback(self, callback, arg):\n \"\"\"Adds a callback that is called when all queued work is done.\n\n Args:\n callback (function): Callback function. It must take three\n arguments (Stream object, int error status, and user data\n object), and returns nothing.\n arg (object): Argument to the callback.\n\n \"\"\"\n runtime.streamAddCallback(self.ptr, callback, arg)\n\n def record(self, event=None):\n \"\"\"Records an event on the stream.\n\n Args:\n event (None or cupy.cuda.Event): CUDA event. If None, then a new\n plain event is created and used.\n\n Returns:\n cupy.cuda.Event: The recorded event.\n\n .. seealso:: :meth:`cupy.cuda.Event.record`\n\n \"\"\"\n if event is None:\n event = Event()\n runtime.eventRecord(event.ptr, self.ptr)\n return event\n\n def wait_event(self, event):\n \"\"\"Makes the stream wait for an event.\n\n The future work on this stream will be done after the event.\n\n Args:\n event (cupy.cuda.Event): CUDA event.\n\n \"\"\"\n runtime.streamWaitEvent(self.ptr, event)\n", "path": "cupy/cuda/stream.py"}], "after_files": [{"content": "from cupy.cuda import runtime\n\n\nclass Event(object):\n\n \"\"\"CUDA event, a synchronization point of CUDA streams.\n\n This class handles the CUDA event handle in RAII way, i.e., when an Event\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n block (bool): If True, the event blocks on the\n :meth:`~cupy.cuda.Event.synchronize` method.\n disable_timing (bool): If True, the event does not prepare the timing\n data.\n interprocess (bool): If True, the event can be passed to other\n processes.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. 
It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, block=False, disable_timing=False, interprocess=False):\n self.ptr = 0\n\n if interprocess and not disable_timing:\n raise ValueError('Timing must be disabled for interprocess events')\n flag = ((block and runtime.eventBlockingSync) |\n (disable_timing and runtime.eventDisableTiming) |\n (interprocess and runtime.eventInterprocess))\n self.ptr = runtime.eventCreateWithFlags(flag)\n\n def __del__(self):\n if self.ptr:\n runtime.eventDestroy(self.ptr)\n\n @property\n def done(self):\n \"\"\"True if the event is done.\"\"\"\n return bool(runtime.eventQuery(self.ptr))\n\n def record(self, stream=None):\n \"\"\"Records the event to a stream.\n\n Args:\n stream (cupy.cuda.Stream): CUDA stream to record event. The null\n stream is used by default.\n\n .. seealso:: :meth:`cupy.cuda.Stream.record`\n\n \"\"\"\n if stream is None:\n stream = Stream(null=True)\n runtime.eventRecord(self.ptr, stream.ptr)\n\n def synchronize(self):\n \"\"\"Synchronizes all device work to the event.\n\n If the event is created as a blocking event, it also blocks the CPU\n thread until the event is done.\n\n \"\"\"\n runtime.eventSynchronize(self.ptr)\n\n\ndef get_elapsed_time(start_event, end_event):\n \"\"\"Gets the elapsed time between two events.\n\n Args:\n start_event (Event): Earlier event.\n end_event (Event): Later event.\n\n Returns:\n float: Elapsed time in milliseconds.\n\n \"\"\"\n return runtime.eventElapsedTime(start_event.ptr, end_event.ptr)\n\n\nclass Stream(object):\n\n \"\"\"CUDA stream.\n\n This class handles the CUDA stream handle in RAII way, i.e., when an Stream\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n null (bool): If True, the stream is a null stream (i.e. the default\n stream that synchronizes with all streams). Otherwise, a plain new\n stream is created.\n non_blocking (bool): If True, the stream does not synchronize with the\n NULL stream.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, null=False, non_blocking=False):\n if null:\n self.ptr = 0\n elif non_blocking:\n self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)\n else:\n self.ptr = runtime.streamCreate()\n\n def __del__(self):\n if self.ptr:\n runtime.streamDestroy(self.ptr)\n\n @property\n def done(self):\n \"\"\"True if all work on this stream has been done.\"\"\"\n return bool(runtime.streamQuery(self.ptr))\n\n def synchronize(self):\n \"\"\"Waits for the stream completing all queued work.\"\"\"\n runtime.streamSynchronize(self.ptr)\n\n def add_callback(self, callback, arg):\n \"\"\"Adds a callback that is called when all queued work is done.\n\n Args:\n callback (function): Callback function. It must take three\n arguments (Stream object, int error status, and user data\n object), and returns nothing.\n arg (object): Argument to the callback.\n\n \"\"\"\n runtime.streamAddCallback(self.ptr, callback, arg)\n\n def record(self, event=None):\n \"\"\"Records an event on the stream.\n\n Args:\n event (None or cupy.cuda.Event): CUDA event. If None, then a new\n plain event is created and used.\n\n Returns:\n cupy.cuda.Event: The recorded event.\n\n .. 
seealso:: :meth:`cupy.cuda.Event.record`\n\n \"\"\"\n if event is None:\n event = Event()\n runtime.eventRecord(event.ptr, self.ptr)\n return event\n\n def wait_event(self, event):\n \"\"\"Makes the stream wait for an event.\n\n The future work on this stream will be done after the event.\n\n Args:\n event (cupy.cuda.Event): CUDA event.\n\n \"\"\"\n runtime.streamWaitEvent(self.ptr, event)\n", "path": "cupy/cuda/stream.py"}]} | 1,995 | 272 |
gh_patches_debug_25005 | rasdani/github-patches | git_diff | meltano__meltano-5980 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Meltano remove lock file bug
From #5977 I was removing an installed plugin and got this on the command line:
```
❯ meltano remove orchestrator airflow
2022-06-02T15:20:50.385299Z [info ] Environment 'dev' is active
Removing orchestrator 'airflow'...
Reset orchestrator 'airflow' plugin settings in the system database
Removed orchestrator 'airflow' from meltano.yml
Removed orchestrator 'airflow' from .meltano/orchestrators
Could not find orchestrator 'airflow' in /Users/taylormurphy/Documents/Projects/dev/meltano/addfromhub/plugins/orchestrators/airflow--original.lock to remove
```
In my `plugins/` folder I still have:
```
plugins
files/
airflow--meltano.lock
orchestrators/
airflow--apache.lock
```
@edgarrmondragon cc @aaronsteers
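A rough illustration of the mismatch (paths are hypothetical, mirroring the layout above): the remover derives one exact lock-file name from the variant it expects, `airflow--original.lock`, while the file actually written at install time is named after the variant that provided the plugin, `airflow--apache.lock`. Globbing on the plugin name, as the patch at the end of this entry does, avoids guessing the variant:

```python
from pathlib import Path

lockfile_dir = Path("plugins/orchestrators")        # hypothetical project-relative layout
expected = lockfile_dir / "airflow--original.lock"  # what the remover currently looks for

print(expected.exists())                            # -> False here, hence "Could not find ..."
print(list(lockfile_dir.glob("airflow*.lock")))     # -> [.../airflow--apache.lock] in this layout
```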
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/meltano/core/plugin_location_remove.py`
Content:
```
1 """Defines PluginLocationRemoveStatus, PluginLocationRemoveManager, DbRemoveManager, MeltanoYmlRemoveManager and InstallationRemoveManager."""
2
3 import shutil
4 from abc import ABC, abstractmethod
5 from enum import Enum
6
7 import sqlalchemy
8
9 from meltano.core.db import project_engine
10 from meltano.core.plugin.error import PluginNotFoundError
11 from meltano.core.plugin.project_plugin import ProjectPlugin
12 from meltano.core.plugin.settings_service import PluginSettingsService
13 from meltano.core.project_plugins_service import ProjectPluginsService
14
15 from .project import Project
16 from .settings_store import SettingValueStore
17
18
19 class PluginLocationRemoveStatus(Enum):
20 """Possible remove statuses."""
21
22 REMOVED = "removed"
23 ERROR = "error"
24 NOT_FOUND = "not found"
25
26
27 class PluginLocationRemoveManager(ABC):
28 """Handle removal of a plugin from a given location."""
29
30 def __init__(self, plugin: ProjectPlugin, location):
31 """Construct a PluginLocationRemoveManager instance.
32
33 Args:
34 plugin: The plugin to remove.
35 location: The location to remove the plugin from.
36 """
37 self.plugin = plugin
38 self.plugin_descriptor = f"{plugin.type.descriptor} '{plugin.name}'"
39 self.location = location
40 self.remove_status = None
41 self.remove_message = None
42
43 @abstractmethod
44 def remove(self):
45 """Abstract remove method."""
46 pass
47
48 @property
49 def plugin_removed(self) -> bool:
50 """Wether or not the plugin was successfully removed.
51
52 Returns:
53 True if the plugin was successfully removed, False otherwise.
54 """
55 return self.remove_status is PluginLocationRemoveStatus.REMOVED
56
57 @property
58 def plugin_not_found(self) -> bool:
59 """Wether or not the plugin was not found to remove.
60
61 Returns:
62 True if the plugin was not found, False otherwise.
63 """
64 return self.remove_status is PluginLocationRemoveStatus.NOT_FOUND
65
66 @property
67 def plugin_error(self) -> bool:
68 """Wether or not an error was encountered the plugin removal process.
69
70 Returns:
71 True if an error was encountered, False otherwise.
72 """
73 return self.remove_status is PluginLocationRemoveStatus.ERROR
74
75
76 class DbRemoveManager(PluginLocationRemoveManager):
77 """Handle removal of a plugin's settings from the system database `plugin_settings` table."""
78
79 def __init__(self, plugin, project):
80 """Construct a DbRemoveManager instance.
81
82 Args:
83 plugin: The plugin to remove.
84 project: The Meltano project.
85 """
86 super().__init__(plugin, "system database")
87 self.plugins_settings_service = PluginSettingsService(project, plugin)
88 self.session = project_engine(project)[1]
89
90 def remove(self):
91 """Remove the plugin's settings from the system database `plugin_settings` table.
92
93 Returns:
94 The remove status.
95 """
96 session = self.session()
97 try:
98 self.plugins_settings_service.reset(
99 store=SettingValueStore.DB, session=session
100 )
101 except sqlalchemy.exc.OperationalError as err:
102 self.remove_status = PluginLocationRemoveStatus.ERROR
103 self.message = err.orig
104 return
105
106 self.remove_status = PluginLocationRemoveStatus.REMOVED
107
108
109 class MeltanoYmlRemoveManager(PluginLocationRemoveManager):
110 """Handle removal of a plugin from `meltano.yml`."""
111
112 def __init__(self, plugin, project: Project):
113 """Construct a MeltanoYmlRemoveManager instance.
114
115 Args:
116 plugin: The plugin to remove.
117 project: The Meltano project.
118 """
119 super().__init__(plugin, str(project.meltanofile.relative_to(project.root)))
120 self.project_plugins_service = ProjectPluginsService(project)
121
122 def remove(self):
123 """Remove the plugin from `meltano.yml`."""
124 try:
125 self.project_plugins_service.remove_from_file(self.plugin)
126 except PluginNotFoundError:
127 self.remove_status = PluginLocationRemoveStatus.NOT_FOUND
128 return
129 except OSError as err:
130 self.remove_status = PluginLocationRemoveStatus.ERROR
131 self.message = err.strerror
132 return
133
134 self.remove_status = PluginLocationRemoveStatus.REMOVED
135
136
137 class LockedDefinitionRemoveManager(PluginLocationRemoveManager):
138 """Handle removal of a plugin locked definition from `plugins/`."""
139
140 def __init__(self, plugin, project: Project):
141 """Construct a LockedDefinitionRemoveManager instance.
142
143 Args:
144 plugin: The plugin to remove.
145 project: The Meltano project.
146 """
147 path = project.plugin_lock_path(plugin.type, plugin.name, plugin.variant)
148 super().__init__(plugin, str(path))
149 self.path = path
150
151 def remove(self):
152 """Remove the plugin from `plugins/`."""
153 try:
154 self.path.unlink()
155 except FileNotFoundError:
156 self.remove_status = PluginLocationRemoveStatus.NOT_FOUND
157 return
158 except OSError as err:
159 self.remove_status = PluginLocationRemoveStatus.ERROR
160 self.message = err.strerror
161 return
162
163 self.remove_status = PluginLocationRemoveStatus.REMOVED
164
165
166 class InstallationRemoveManager(PluginLocationRemoveManager):
167 """Handle removal of a plugin installation from `.meltano`."""
168
169 def __init__(self, plugin, project: Project):
170 """Construct a InstallationRemoveManager instance.
171
172 Args:
173 plugin: The plugin to remove.
174 project: The Meltano project.
175 """
176 path = project.plugin_dir(plugin, make_dirs=False)
177 super().__init__(plugin, str(path.parent.relative_to(project.root)))
178 self.path = path
179
180 def remove(self):
181 """Remove the plugin installation from `.meltano`."""
182 if not self.path.exists():
183 self.remove_status = PluginLocationRemoveStatus.NOT_FOUND
184 self.message = f"{self.plugin_descriptor} not found in {self.path.parent}"
185 return
186
187 try:
188 shutil.rmtree(self.path)
189 except OSError as err:
190 self.remove_status = PluginLocationRemoveStatus.ERROR
191 self.message = err.strerror
192 return
193
194 self.remove_status = PluginLocationRemoveStatus.REMOVED
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/meltano/core/plugin_location_remove.py b/src/meltano/core/plugin_location_remove.py
--- a/src/meltano/core/plugin_location_remove.py
+++ b/src/meltano/core/plugin_location_remove.py
@@ -144,21 +144,28 @@
plugin: The plugin to remove.
project: The Meltano project.
"""
- path = project.plugin_lock_path(plugin.type, plugin.name, plugin.variant)
- super().__init__(plugin, str(path))
- self.path = path
+ lockfile_dir = project.root_plugins_dir(plugin.type)
+ glob_expr = f"{plugin.name}*.lock"
+ super().__init__(
+ plugin,
+ str(lockfile_dir.relative_to(project.root).joinpath(glob_expr)),
+ )
+
+ self.paths = list(lockfile_dir.glob(glob_expr))
def remove(self):
"""Remove the plugin from `plugins/`."""
- try:
- self.path.unlink()
- except FileNotFoundError:
+ if not self.paths:
self.remove_status = PluginLocationRemoveStatus.NOT_FOUND
return
- except OSError as err:
- self.remove_status = PluginLocationRemoveStatus.ERROR
- self.message = err.strerror
- return
+
+ for path in self.paths:
+ try:
+ path.unlink()
+ except OSError as err:
+ self.remove_status = PluginLocationRemoveStatus.ERROR
+ self.message = err.strerror
+ return
self.remove_status = PluginLocationRemoveStatus.REMOVED
| {"golden_diff": "diff --git a/src/meltano/core/plugin_location_remove.py b/src/meltano/core/plugin_location_remove.py\n--- a/src/meltano/core/plugin_location_remove.py\n+++ b/src/meltano/core/plugin_location_remove.py\n@@ -144,21 +144,28 @@\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n- path = project.plugin_lock_path(plugin.type, plugin.name, plugin.variant)\n- super().__init__(plugin, str(path))\n- self.path = path\n+ lockfile_dir = project.root_plugins_dir(plugin.type)\n+ glob_expr = f\"{plugin.name}*.lock\"\n+ super().__init__(\n+ plugin,\n+ str(lockfile_dir.relative_to(project.root).joinpath(glob_expr)),\n+ )\n+\n+ self.paths = list(lockfile_dir.glob(glob_expr))\n \n def remove(self):\n \"\"\"Remove the plugin from `plugins/`.\"\"\"\n- try:\n- self.path.unlink()\n- except FileNotFoundError:\n+ if not self.paths:\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n return\n- except OSError as err:\n- self.remove_status = PluginLocationRemoveStatus.ERROR\n- self.message = err.strerror\n- return\n+\n+ for path in self.paths:\n+ try:\n+ path.unlink()\n+ except OSError as err:\n+ self.remove_status = PluginLocationRemoveStatus.ERROR\n+ self.message = err.strerror\n+ return\n \n self.remove_status = PluginLocationRemoveStatus.REMOVED\n", "issue": "Meltano remove lock file bug\nFrom #5977 I was removing an installed plugin and got this on the command line:\r\n\r\n```\r\n\u276f meltano remove orchestrator airflow\r\n2022-06-02T15:20:50.385299Z [info ] Environment 'dev' is active\r\n\r\nRemoving orchestrator 'airflow'...\r\n\r\nReset orchestrator 'airflow' plugin settings in the system database\r\nRemoved orchestrator 'airflow' from meltano.yml\r\nRemoved orchestrator 'airflow' from .meltano/orchestrators\r\nCould not find orchestrator 'airflow' in /Users/taylormurphy/Documents/Projects/dev/meltano/addfromhub/plugins/orchestrators/airflow--original.lock to remove\r\n```\r\n\r\nIn my `plugins/` folder I still have:\r\n\r\n```\r\nplugins\r\n files/\r\n airflow--meltano.lock\r\n orchestrators/\r\n airflow--apache.lock\r\n```\r\n\r\n@edgarrmondragon cc @aaronsteers \n", "before_files": [{"content": "\"\"\"Defines PluginLocationRemoveStatus, PluginLocationRemoveManager, DbRemoveManager, MeltanoYmlRemoveManager and InstallationRemoveManager.\"\"\"\n\nimport shutil\nfrom abc import ABC, abstractmethod\nfrom enum import Enum\n\nimport sqlalchemy\n\nfrom meltano.core.db import project_engine\nfrom meltano.core.plugin.error import PluginNotFoundError\nfrom meltano.core.plugin.project_plugin import ProjectPlugin\nfrom meltano.core.plugin.settings_service import PluginSettingsService\nfrom meltano.core.project_plugins_service import ProjectPluginsService\n\nfrom .project import Project\nfrom .settings_store import SettingValueStore\n\n\nclass PluginLocationRemoveStatus(Enum):\n \"\"\"Possible remove statuses.\"\"\"\n\n REMOVED = \"removed\"\n ERROR = \"error\"\n NOT_FOUND = \"not found\"\n\n\nclass PluginLocationRemoveManager(ABC):\n \"\"\"Handle removal of a plugin from a given location.\"\"\"\n\n def __init__(self, plugin: ProjectPlugin, location):\n \"\"\"Construct a PluginLocationRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n location: The location to remove the plugin from.\n \"\"\"\n self.plugin = plugin\n self.plugin_descriptor = f\"{plugin.type.descriptor} '{plugin.name}'\"\n self.location = location\n self.remove_status = None\n self.remove_message = None\n\n @abstractmethod\n def remove(self):\n \"\"\"Abstract remove method.\"\"\"\n 
pass\n\n @property\n def plugin_removed(self) -> bool:\n \"\"\"Wether or not the plugin was successfully removed.\n\n Returns:\n True if the plugin was successfully removed, False otherwise.\n \"\"\"\n return self.remove_status is PluginLocationRemoveStatus.REMOVED\n\n @property\n def plugin_not_found(self) -> bool:\n \"\"\"Wether or not the plugin was not found to remove.\n\n Returns:\n True if the plugin was not found, False otherwise.\n \"\"\"\n return self.remove_status is PluginLocationRemoveStatus.NOT_FOUND\n\n @property\n def plugin_error(self) -> bool:\n \"\"\"Wether or not an error was encountered the plugin removal process.\n\n Returns:\n True if an error was encountered, False otherwise.\n \"\"\"\n return self.remove_status is PluginLocationRemoveStatus.ERROR\n\n\nclass DbRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin's settings from the system database `plugin_settings` table.\"\"\"\n\n def __init__(self, plugin, project):\n \"\"\"Construct a DbRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n super().__init__(plugin, \"system database\")\n self.plugins_settings_service = PluginSettingsService(project, plugin)\n self.session = project_engine(project)[1]\n\n def remove(self):\n \"\"\"Remove the plugin's settings from the system database `plugin_settings` table.\n\n Returns:\n The remove status.\n \"\"\"\n session = self.session()\n try:\n self.plugins_settings_service.reset(\n store=SettingValueStore.DB, session=session\n )\n except sqlalchemy.exc.OperationalError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.orig\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n\n\nclass MeltanoYmlRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin from `meltano.yml`.\"\"\"\n\n def __init__(self, plugin, project: Project):\n \"\"\"Construct a MeltanoYmlRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n super().__init__(plugin, str(project.meltanofile.relative_to(project.root)))\n self.project_plugins_service = ProjectPluginsService(project)\n\n def remove(self):\n \"\"\"Remove the plugin from `meltano.yml`.\"\"\"\n try:\n self.project_plugins_service.remove_from_file(self.plugin)\n except PluginNotFoundError:\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n return\n except OSError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.strerror\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n\n\nclass LockedDefinitionRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin locked definition from `plugins/`.\"\"\"\n\n def __init__(self, plugin, project: Project):\n \"\"\"Construct a LockedDefinitionRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n path = project.plugin_lock_path(plugin.type, plugin.name, plugin.variant)\n super().__init__(plugin, str(path))\n self.path = path\n\n def remove(self):\n \"\"\"Remove the plugin from `plugins/`.\"\"\"\n try:\n self.path.unlink()\n except FileNotFoundError:\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n return\n except OSError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.strerror\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n\n\nclass InstallationRemoveManager(PluginLocationRemoveManager):\n 
\"\"\"Handle removal of a plugin installation from `.meltano`.\"\"\"\n\n def __init__(self, plugin, project: Project):\n \"\"\"Construct a InstallationRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n path = project.plugin_dir(plugin, make_dirs=False)\n super().__init__(plugin, str(path.parent.relative_to(project.root)))\n self.path = path\n\n def remove(self):\n \"\"\"Remove the plugin installation from `.meltano`.\"\"\"\n if not self.path.exists():\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n self.message = f\"{self.plugin_descriptor} not found in {self.path.parent}\"\n return\n\n try:\n shutil.rmtree(self.path)\n except OSError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.strerror\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n", "path": "src/meltano/core/plugin_location_remove.py"}], "after_files": [{"content": "\"\"\"Defines PluginLocationRemoveStatus, PluginLocationRemoveManager, DbRemoveManager, MeltanoYmlRemoveManager and InstallationRemoveManager.\"\"\"\n\nimport shutil\nfrom abc import ABC, abstractmethod\nfrom enum import Enum\n\nimport sqlalchemy\n\nfrom meltano.core.db import project_engine\nfrom meltano.core.plugin.error import PluginNotFoundError\nfrom meltano.core.plugin.project_plugin import ProjectPlugin\nfrom meltano.core.plugin.settings_service import PluginSettingsService\nfrom meltano.core.project_plugins_service import ProjectPluginsService\n\nfrom .project import Project\nfrom .settings_store import SettingValueStore\n\n\nclass PluginLocationRemoveStatus(Enum):\n \"\"\"Possible remove statuses.\"\"\"\n\n REMOVED = \"removed\"\n ERROR = \"error\"\n NOT_FOUND = \"not found\"\n\n\nclass PluginLocationRemoveManager(ABC):\n \"\"\"Handle removal of a plugin from a given location.\"\"\"\n\n def __init__(self, plugin: ProjectPlugin, location):\n \"\"\"Construct a PluginLocationRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n location: The location to remove the plugin from.\n \"\"\"\n self.plugin = plugin\n self.plugin_descriptor = f\"{plugin.type.descriptor} '{plugin.name}'\"\n self.location = location\n self.remove_status = None\n self.remove_message = None\n\n @abstractmethod\n def remove(self):\n \"\"\"Abstract remove method.\"\"\"\n pass\n\n @property\n def plugin_removed(self) -> bool:\n \"\"\"Wether or not the plugin was successfully removed.\n\n Returns:\n True if the plugin was successfully removed, False otherwise.\n \"\"\"\n return self.remove_status is PluginLocationRemoveStatus.REMOVED\n\n @property\n def plugin_not_found(self) -> bool:\n \"\"\"Wether or not the plugin was not found to remove.\n\n Returns:\n True if the plugin was not found, False otherwise.\n \"\"\"\n return self.remove_status is PluginLocationRemoveStatus.NOT_FOUND\n\n @property\n def plugin_error(self) -> bool:\n \"\"\"Wether or not an error was encountered the plugin removal process.\n\n Returns:\n True if an error was encountered, False otherwise.\n \"\"\"\n return self.remove_status is PluginLocationRemoveStatus.ERROR\n\n\nclass DbRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin's settings from the system database `plugin_settings` table.\"\"\"\n\n def __init__(self, plugin, project):\n \"\"\"Construct a DbRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n super().__init__(plugin, \"system database\")\n self.plugins_settings_service = 
PluginSettingsService(project, plugin)\n self.session = project_engine(project)[1]\n\n def remove(self):\n \"\"\"Remove the plugin's settings from the system database `plugin_settings` table.\n\n Returns:\n The remove status.\n \"\"\"\n session = self.session()\n try:\n self.plugins_settings_service.reset(\n store=SettingValueStore.DB, session=session\n )\n except sqlalchemy.exc.OperationalError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.orig\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n\n\nclass MeltanoYmlRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin from `meltano.yml`.\"\"\"\n\n def __init__(self, plugin, project: Project):\n \"\"\"Construct a MeltanoYmlRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n super().__init__(plugin, str(project.meltanofile.relative_to(project.root)))\n self.project_plugins_service = ProjectPluginsService(project)\n\n def remove(self):\n \"\"\"Remove the plugin from `meltano.yml`.\"\"\"\n try:\n self.project_plugins_service.remove_from_file(self.plugin)\n except PluginNotFoundError:\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n return\n except OSError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.strerror\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n\n\nclass LockedDefinitionRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin locked definition from `plugins/`.\"\"\"\n\n def __init__(self, plugin, project: Project):\n \"\"\"Construct a LockedDefinitionRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n lockfile_dir = project.root_plugins_dir(plugin.type)\n glob_expr = f\"{plugin.name}*.lock\"\n super().__init__(\n plugin,\n str(lockfile_dir.relative_to(project.root).joinpath(glob_expr)),\n )\n\n self.paths = list(lockfile_dir.glob(glob_expr))\n\n def remove(self):\n \"\"\"Remove the plugin from `plugins/`.\"\"\"\n if not self.paths:\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n return\n\n for path in self.paths:\n try:\n path.unlink()\n except OSError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.strerror\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n\n\nclass InstallationRemoveManager(PluginLocationRemoveManager):\n \"\"\"Handle removal of a plugin installation from `.meltano`.\"\"\"\n\n def __init__(self, plugin, project: Project):\n \"\"\"Construct a InstallationRemoveManager instance.\n\n Args:\n plugin: The plugin to remove.\n project: The Meltano project.\n \"\"\"\n path = project.plugin_dir(plugin, make_dirs=False)\n super().__init__(plugin, str(path.parent.relative_to(project.root)))\n self.path = path\n\n def remove(self):\n \"\"\"Remove the plugin installation from `.meltano`.\"\"\"\n if not self.path.exists():\n self.remove_status = PluginLocationRemoveStatus.NOT_FOUND\n self.message = f\"{self.plugin_descriptor} not found in {self.path.parent}\"\n return\n\n try:\n shutil.rmtree(self.path)\n except OSError as err:\n self.remove_status = PluginLocationRemoveStatus.ERROR\n self.message = err.strerror\n return\n\n self.remove_status = PluginLocationRemoveStatus.REMOVED\n", "path": "src/meltano/core/plugin_location_remove.py"}]} | 2,258 | 337 |
gh_patches_debug_18933 | rasdani/github-patches | git_diff | Qiskit__qiskit-1720 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BackendConfiguration fails validation if backend supports pulse
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: 0.8.0
- **Python version**: 3.6.6
- **Operating system**: OSX
### What is the current behavior?
If a backend sets `open_pulse=true` in its configuration, Qiskit will raise a validation error when creating a `BackendConfigurationSchema`.
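A hedged reproduction sketch (every field value except `open_pulse` is a placeholder, and the exact `load` behaviour depends on the marshmallow version in use):
```python
# Hypothetical minimal config -- only open_pulse matters here.
from qiskit.providers.models.backendconfiguration import BackendConfigurationSchema

config_dict = {
    "backend_name": "pulse_backend",
    "backend_version": "1.0.0",
    "n_qubits": 1,
    "basis_gates": ["u3"],
    "gates": [{"name": "u3", "parameters": ["theta", "phi", "lambda"],
               "qasm_def": "gate u3(theta,phi,lambda) q { U(theta,phi,lambda) q; }"}],
    "local": False,
    "simulator": False,
    "conditional": False,
    "open_pulse": True,   # rejected by the Equal(False) validator on the schema
    "memory": True,
    "max_shots": 1024,
}

# Either raises a validation error or reports one for 'open_pulse',
# so a pulse-capable backend configuration can never be loaded.
BackendConfigurationSchema().load(config_dict)
```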
### Steps to reproduce the problem
Create a backend with `open_pulse=true` set in its configuration.
### What is the expected behavior?
Should not fail.
### Suggested solutions
Allow `open_pulse=true` to be valid.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/providers/models/backendconfiguration.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """Model and schema for backend configuration."""
9
10 from marshmallow.validate import Equal, Length, OneOf, Range, Regexp
11
12 from qiskit.validation import BaseModel, BaseSchema, bind_schema
13 from qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String
14
15
16 class GateConfigSchema(BaseSchema):
17 """Schema for GateConfig."""
18
19 # Required properties.
20 name = String(required=True)
21 parameters = List(String(), required=True)
22 qasm_def = String(required=True)
23
24 # Optional properties.
25 coupling_map = List(List(Integer(),
26 validate=Length(min=1)),
27 validate=Length(min=1))
28 latency_map = List(List(Integer(validate=OneOf([0, 1])),
29 validate=Length(min=1)),
30 validate=Length(min=1))
31 conditional = Boolean()
32 description = String()
33
34
35 class BackendConfigurationSchema(BaseSchema):
36 """Schema for BackendConfiguration."""
37
38 # Required properties.
39 backend_name = String(required=True)
40 backend_version = String(required=True,
41 validate=Regexp("[0-9]+.[0-9]+.[0-9]+$"))
42 n_qubits = Integer(required=True, validate=Range(min=1))
43 basis_gates = List(String(), required=True,
44 validate=Length(min=1))
45 gates = Nested(GateConfigSchema, required=True, many=True,
46 validate=Length(min=1))
47 local = Boolean(required=True)
48 simulator = Boolean(required=True)
49 conditional = Boolean(required=True)
50 open_pulse = Boolean(required=True, validate=Equal(False))
51 memory = Boolean(required=True)
52 max_shots = Integer(required=True, validate=Range(min=1))
53
54 # Optional properties.
55 max_experiments = Integer(validate=Range(min=1))
56 sample_name = String()
57 coupling_map = List(List(Integer(),
58 validate=Length(min=1)),
59 validate=Length(min=1))
60 n_registers = Integer(validate=Range(min=1))
61 register_map = List(List(Integer(validate=OneOf([0, 1])),
62 validate=Length(min=1)),
63 validate=Length(min=1))
64 configurable = Boolean()
65 credits_required = Boolean()
66 online_date = DateTime()
67 display_name = String()
68 description = String()
69 tags = List(String())
70
71
72 @bind_schema(GateConfigSchema)
73 class GateConfig(BaseModel):
74 """Model for GateConfig.
75
76 Please note that this class only describes the required fields. For the
77 full description of the model, please check ``GateConfigSchema``.
78
79 Attributes:
80 name (str): the gate name as it will be referred to in QASM.
81 parameters (list[str]): variable names for the gate parameters (if any).
82 qasm_def (str): definition of this gate in terms of QASM primitives U
83 and CX.
84 """
85
86 def __init__(self, name, parameters, qasm_def, **kwargs):
87 self.name = name
88 self.parameters = parameters
89 self.qasm_def = qasm_def
90
91 super().__init__(**kwargs)
92
93
94 @bind_schema(BackendConfigurationSchema)
95 class BackendConfiguration(BaseModel):
96 """Model for BackendConfiguration.
97
98 Please note that this class only describes the required fields. For the
99 full description of the model, please check ``BackendConfigurationSchema``.
100 Attributes:
101 backend_name (str): backend name.
102 backend_version (str): backend version in the form X.Y.Z.
103 n_qubits (int): number of qubits.
104 basis_gates (list[str]): list of basis gates names on the backend.
105 gates (GateConfig): list of basis gates on the backend.
106 local (bool): backend is local or remote.
107 simulator (bool): backend is a simulator.
108 conditional (bool): backend supports conditional operations.
109 open_pulse (bool): backend supports open pulse.
110 memory (bool): backend supports memory.
111 max_shots (int): maximum number of shots supported.
112 """
113
114 def __init__(self, backend_name, backend_version, n_qubits, basis_gates,
115 gates, local, simulator, conditional, open_pulse, memory,
116 max_shots, **kwargs):
117 self.backend_name = backend_name
118 self.backend_version = backend_version
119 self.n_qubits = n_qubits
120 self.basis_gates = basis_gates
121 self.gates = gates
122 self.local = local
123 self.simulator = simulator
124 self.conditional = conditional
125 self.open_pulse = open_pulse
126 self.memory = memory
127 self.max_shots = max_shots
128
129 super().__init__(**kwargs)
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/providers/models/backendconfiguration.py b/qiskit/providers/models/backendconfiguration.py
--- a/qiskit/providers/models/backendconfiguration.py
+++ b/qiskit/providers/models/backendconfiguration.py
@@ -7,7 +7,7 @@
"""Model and schema for backend configuration."""
-from marshmallow.validate import Equal, Length, OneOf, Range, Regexp
+from marshmallow.validate import Length, OneOf, Range, Regexp
from qiskit.validation import BaseModel, BaseSchema, bind_schema
from qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String
@@ -47,7 +47,7 @@
local = Boolean(required=True)
simulator = Boolean(required=True)
conditional = Boolean(required=True)
- open_pulse = Boolean(required=True, validate=Equal(False))
+ open_pulse = Boolean(required=True)
memory = Boolean(required=True)
max_shots = Integer(required=True, validate=Range(min=1))
| {"golden_diff": "diff --git a/qiskit/providers/models/backendconfiguration.py b/qiskit/providers/models/backendconfiguration.py\n--- a/qiskit/providers/models/backendconfiguration.py\n+++ b/qiskit/providers/models/backendconfiguration.py\n@@ -7,7 +7,7 @@\n \n \"\"\"Model and schema for backend configuration.\"\"\"\n \n-from marshmallow.validate import Equal, Length, OneOf, Range, Regexp\n+from marshmallow.validate import Length, OneOf, Range, Regexp\n \n from qiskit.validation import BaseModel, BaseSchema, bind_schema\n from qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String\n@@ -47,7 +47,7 @@\n local = Boolean(required=True)\n simulator = Boolean(required=True)\n conditional = Boolean(required=True)\n- open_pulse = Boolean(required=True, validate=Equal(False))\n+ open_pulse = Boolean(required=True)\n memory = Boolean(required=True)\n max_shots = Integer(required=True, validate=Range(min=1))\n", "issue": "BackendConfiguration fails validation if backend supports pulse\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Informations\r\n\r\n- **Qiskit Terra version**:0.8.0\r\n- **Python version**3.6.6\r\n- **Operating system**:OSX\r\n\r\n### What is the current behavior?\r\nIf a backend sets `open_pulse=true` in its configuration Qiskit will raise a validation error when creating a `BackendConfigurationSchema`\r\n\r\n\r\n### Steps to reproduce the problem\r\nCreate a backend with `open_pulse=true` set in its configuration.\r\n\r\n\r\n### What is the expected behavior?\r\nShould not fail.\r\n\r\n\r\n### Suggested solutions\r\nAllow `open_pulse=true` to be valid.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2018, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"Model and schema for backend configuration.\"\"\"\n\nfrom marshmallow.validate import Equal, Length, OneOf, Range, Regexp\n\nfrom qiskit.validation import BaseModel, BaseSchema, bind_schema\nfrom qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String\n\n\nclass GateConfigSchema(BaseSchema):\n \"\"\"Schema for GateConfig.\"\"\"\n\n # Required properties.\n name = String(required=True)\n parameters = List(String(), required=True)\n qasm_def = String(required=True)\n\n # Optional properties.\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n latency_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n conditional = Boolean()\n description = String()\n\n\nclass BackendConfigurationSchema(BaseSchema):\n \"\"\"Schema for BackendConfiguration.\"\"\"\n\n # Required properties.\n backend_name = String(required=True)\n backend_version = String(required=True,\n validate=Regexp(\"[0-9]+.[0-9]+.[0-9]+$\"))\n n_qubits = Integer(required=True, validate=Range(min=1))\n basis_gates = List(String(), required=True,\n validate=Length(min=1))\n gates = Nested(GateConfigSchema, required=True, many=True,\n validate=Length(min=1))\n local = Boolean(required=True)\n simulator = Boolean(required=True)\n conditional = Boolean(required=True)\n open_pulse = Boolean(required=True, validate=Equal(False))\n memory = Boolean(required=True)\n max_shots = Integer(required=True, validate=Range(min=1))\n\n # Optional properties.\n max_experiments = 
Integer(validate=Range(min=1))\n sample_name = String()\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n n_registers = Integer(validate=Range(min=1))\n register_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n configurable = Boolean()\n credits_required = Boolean()\n online_date = DateTime()\n display_name = String()\n description = String()\n tags = List(String())\n\n\n@bind_schema(GateConfigSchema)\nclass GateConfig(BaseModel):\n \"\"\"Model for GateConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``GateConfigSchema``.\n\n Attributes:\n name (str): the gate name as it will be referred to in QASM.\n parameters (list[str]): variable names for the gate parameters (if any).\n qasm_def (str): definition of this gate in terms of QASM primitives U\n and CX.\n \"\"\"\n\n def __init__(self, name, parameters, qasm_def, **kwargs):\n self.name = name\n self.parameters = parameters\n self.qasm_def = qasm_def\n\n super().__init__(**kwargs)\n\n\n@bind_schema(BackendConfigurationSchema)\nclass BackendConfiguration(BaseModel):\n \"\"\"Model for BackendConfiguration.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``BackendConfigurationSchema``.\n Attributes:\n backend_name (str): backend name.\n backend_version (str): backend version in the form X.Y.Z.\n n_qubits (int): number of qubits.\n basis_gates (list[str]): list of basis gates names on the backend.\n gates (GateConfig): list of basis gates on the backend.\n local (bool): backend is local or remote.\n simulator (bool): backend is a simulator.\n conditional (bool): backend supports conditional operations.\n open_pulse (bool): backend supports open pulse.\n memory (bool): backend supports memory.\n max_shots (int): maximum number of shots supported.\n \"\"\"\n\n def __init__(self, backend_name, backend_version, n_qubits, basis_gates,\n gates, local, simulator, conditional, open_pulse, memory,\n max_shots, **kwargs):\n self.backend_name = backend_name\n self.backend_version = backend_version\n self.n_qubits = n_qubits\n self.basis_gates = basis_gates\n self.gates = gates\n self.local = local\n self.simulator = simulator\n self.conditional = conditional\n self.open_pulse = open_pulse\n self.memory = memory\n self.max_shots = max_shots\n\n super().__init__(**kwargs)\n", "path": "qiskit/providers/models/backendconfiguration.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2018, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"Model and schema for backend configuration.\"\"\"\n\nfrom marshmallow.validate import Length, OneOf, Range, Regexp\n\nfrom qiskit.validation import BaseModel, BaseSchema, bind_schema\nfrom qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String\n\n\nclass GateConfigSchema(BaseSchema):\n \"\"\"Schema for GateConfig.\"\"\"\n\n # Required properties.\n name = String(required=True)\n parameters = List(String(), required=True)\n qasm_def = String(required=True)\n\n # Optional properties.\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n latency_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n conditional = Boolean()\n description 
= String()\n\n\nclass BackendConfigurationSchema(BaseSchema):\n \"\"\"Schema for BackendConfiguration.\"\"\"\n\n # Required properties.\n backend_name = String(required=True)\n backend_version = String(required=True,\n validate=Regexp(\"[0-9]+.[0-9]+.[0-9]+$\"))\n n_qubits = Integer(required=True, validate=Range(min=1))\n basis_gates = List(String(), required=True,\n validate=Length(min=1))\n gates = Nested(GateConfigSchema, required=True, many=True,\n validate=Length(min=1))\n local = Boolean(required=True)\n simulator = Boolean(required=True)\n conditional = Boolean(required=True)\n open_pulse = Boolean(required=True)\n memory = Boolean(required=True)\n max_shots = Integer(required=True, validate=Range(min=1))\n\n # Optional properties.\n max_experiments = Integer(validate=Range(min=1))\n sample_name = String()\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n n_registers = Integer(validate=Range(min=1))\n register_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n configurable = Boolean()\n credits_required = Boolean()\n online_date = DateTime()\n display_name = String()\n description = String()\n tags = List(String())\n\n\n@bind_schema(GateConfigSchema)\nclass GateConfig(BaseModel):\n \"\"\"Model for GateConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``GateConfigSchema``.\n\n Attributes:\n name (str): the gate name as it will be referred to in QASM.\n parameters (list[str]): variable names for the gate parameters (if any).\n qasm_def (str): definition of this gate in terms of QASM primitives U\n and CX.\n \"\"\"\n\n def __init__(self, name, parameters, qasm_def, **kwargs):\n self.name = name\n self.parameters = parameters\n self.qasm_def = qasm_def\n\n super().__init__(**kwargs)\n\n\n@bind_schema(BackendConfigurationSchema)\nclass BackendConfiguration(BaseModel):\n \"\"\"Model for BackendConfiguration.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``BackendConfigurationSchema``.\n Attributes:\n backend_name (str): backend name.\n backend_version (str): backend version in the form X.Y.Z.\n n_qubits (int): number of qubits.\n basis_gates (list[str]): list of basis gates names on the backend.\n gates (GateConfig): list of basis gates on the backend.\n local (bool): backend is local or remote.\n simulator (bool): backend is a simulator.\n conditional (bool): backend supports conditional operations.\n open_pulse (bool): backend supports open pulse.\n memory (bool): backend supports memory.\n max_shots (int): maximum number of shots supported.\n \"\"\"\n\n def __init__(self, backend_name, backend_version, n_qubits, basis_gates,\n gates, local, simulator, conditional, open_pulse, memory,\n max_shots, **kwargs):\n self.backend_name = backend_name\n self.backend_version = backend_version\n self.n_qubits = n_qubits\n self.basis_gates = basis_gates\n self.gates = gates\n self.local = local\n self.simulator = simulator\n self.conditional = conditional\n self.open_pulse = open_pulse\n self.memory = memory\n self.max_shots = max_shots\n\n super().__init__(**kwargs)\n", "path": "qiskit/providers/models/backendconfiguration.py"}]} | 1,745 | 209 |
gh_patches_debug_23149 | rasdani/github-patches | git_diff | frappe__frappe-26301 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typing validations should be ignored for tests
## Description of the issue
https://github.com/frappe/frappe/blob/010aa4636ace30a9df4c09f0ca991169f34274b9/frappe/utils/typing_validations.py#L164
If you're writing Frappe tests using the `unittest.mock` module, there might be cases where the argument object is replaced with a `Mock` or `MagicMock` object. This breaks typing validations when running CI tests using the `develop` branch.
I think a reasonable approach could be to either ignore all validations during tests, and/or allow configuring this behaviour per-test (with the default being "ignore").
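A minimal sketch of the failure mode; the function and its body are hypothetical, and only the `MagicMock` argument is the point:
```python
# Hedged illustration: a type-annotated whitelisted function receives a plain
# MagicMock in a test, so the pydantic-backed argument validation can reject
# the call with FrappeTypeError before the function body ever runs.
from unittest.mock import MagicMock

import frappe
from frappe.model.document import Document


@frappe.whitelist()
def archive_document(doc: Document, reason: str) -> None:  # hypothetical helper
    doc.add_comment("Comment", reason)


def test_archive_document_with_mock():
    mocked_doc = MagicMock()                 # not an instance of Document
    archive_document(mocked_doc, "cleanup")  # typing validation rejects the Mock
```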
## Context
**Output of `bench version`**
```
frappe 14.14.2
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/utils/typing_validations.py`
Content:
```
1 from collections.abc import Callable
2 from functools import lru_cache, wraps
3 from inspect import _empty, isclass, signature
4 from types import EllipsisType
5 from typing import ForwardRef, TypeVar, Union
6
7 from pydantic import ConfigDict
8
9 from frappe.exceptions import FrappeTypeError
10
11 SLACK_DICT = {
12 bool: (int, bool, float),
13 }
14 T = TypeVar("T")
15
16
17 FrappePydanticConfig = ConfigDict(arbitrary_types_allowed=True)
18
19
20 def validate_argument_types(func: Callable, apply_condition: Callable = lambda: True):
21 @wraps(func)
22 def wrapper(*args, **kwargs):
23 """Validate argument types of whitelisted functions.
24
25 :param args: Function arguments.
26 :param kwargs: Function keyword arguments."""
27
28 if apply_condition():
29 args, kwargs = transform_parameter_types(func, args, kwargs)
30
31 return func(*args, **kwargs)
32
33 return wrapper
34
35
36 def qualified_name(obj) -> str:
37 """
38 Return the qualified name (e.g. package.module.Type) for the given object.
39
40 Builtins and types from the :mod:typing package get special treatment by having the module
41 name stripped from the generated name.
42
43 """
44 discovered_type = obj if isclass(obj) else type(obj)
45 module, qualname = discovered_type.__module__, discovered_type.__qualname__
46
47 if module in {"typing", "types"}:
48 return obj
49 elif module in {"builtins"}:
50 return qualname
51 else:
52 return f"{module}.{qualname}"
53
54
55 def raise_type_error(
56 arg_name: str, arg_type: type, arg_value: object, current_exception: Exception | None = None
57 ):
58 """
59 Raise a TypeError with a message that includes the name of the argument, the expected type
60 and the actual type of the value passed.
61
62 """
63 raise FrappeTypeError(
64 f"Argument '{arg_name}' should be of type '{qualified_name(arg_type)}' but got "
65 f"'{qualified_name(arg_value)}' instead."
66 ) from current_exception
67
68
69 @lru_cache(maxsize=2048)
70 def TypeAdapter(type_):
71 from pydantic import TypeAdapter as PyTypeAdapter
72
73 return PyTypeAdapter(type_, config=FrappePydanticConfig)
74
75
76 def transform_parameter_types(func: Callable, args: tuple, kwargs: dict):
77 """
78 Validate the types of the arguments passed to a function with the type annotations
79 defined on the function.
80
81 """
82 if not (args or kwargs) or not func.__annotations__:
83 return args, kwargs
84
85 from pydantic import ValidationError as PyValidationError
86
87 annotations = func.__annotations__
88 new_args, new_kwargs = list(args), kwargs
89
90 # generate kwargs dict from args
91 arg_names = func.__code__.co_varnames[: func.__code__.co_argcount]
92
93 if not args:
94 prepared_args = kwargs
95
96 elif kwargs:
97 arg_values = args or func.__defaults__ or []
98 prepared_args = dict(zip(arg_names, arg_values, strict=False))
99 prepared_args.update(kwargs)
100
101 else:
102 prepared_args = dict(zip(arg_names, args, strict=False))
103
104 # check if type hints dont match the default values
105 func_signature = signature(func)
106 func_params = dict(func_signature.parameters)
107
108 # check if the argument types are correct
109 for current_arg, current_arg_type in annotations.items():
110 if current_arg not in prepared_args:
111 continue
112
113 current_arg_value = prepared_args[current_arg]
114
115 # if the type is a ForwardRef or str, ignore it
116 if isinstance(current_arg_type, ForwardRef | str):
117 continue
118 elif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, "__args__", [])):
119 continue
120
121 # allow slack for Frappe types
122 if current_arg_type in SLACK_DICT:
123 current_arg_type = SLACK_DICT[current_arg_type]
124
125 param_def = func_params.get(current_arg)
126
127 # add default value's type in acceptable types
128 if param_def.default is not _empty:
129 if isinstance(current_arg_type, tuple):
130 if type(param_def.default) not in current_arg_type:
131 current_arg_type += (type(param_def.default),)
132 current_arg_type = Union[current_arg_type] # noqa: UP007
133
134 elif param_def.default != current_arg_type:
135 current_arg_type = Union[current_arg_type, type(param_def.default)] # noqa: UP007
136 elif isinstance(current_arg_type, tuple):
137 current_arg_type = Union[current_arg_type] # noqa: UP007
138
139 # validate the type set using pydantic - raise a TypeError if Validation is raised or Ellipsis is returned
140 try:
141 current_arg_value_after = TypeAdapter(current_arg_type).validate_python(current_arg_value)
142 except (TypeError, PyValidationError) as e:
143 raise_type_error(current_arg, current_arg_type, current_arg_value, current_exception=e)
144
145 if isinstance(current_arg_value_after, EllipsisType):
146 raise_type_error(current_arg, current_arg_type, current_arg_value)
147
148 # update the args and kwargs with possibly casted value
149 if current_arg in kwargs:
150 new_kwargs[current_arg] = current_arg_value_after
151 else:
152 new_args[arg_names.index(current_arg)] = current_arg_value_after
153
154 return new_args, new_kwargs
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/frappe/utils/typing_validations.py b/frappe/utils/typing_validations.py
--- a/frappe/utils/typing_validations.py
+++ b/frappe/utils/typing_validations.py
@@ -3,6 +3,7 @@
from inspect import _empty, isclass, signature
from types import EllipsisType
from typing import ForwardRef, TypeVar, Union
+from unittest import mock
from pydantic import ConfigDict
@@ -77,8 +78,8 @@
"""
Validate the types of the arguments passed to a function with the type annotations
defined on the function.
-
"""
+
if not (args or kwargs) or not func.__annotations__:
return args, kwargs
@@ -117,6 +118,9 @@
continue
elif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, "__args__", [])):
continue
+ # ignore unittest.mock objects
+ elif isinstance(current_arg_value, mock.Mock):
+ continue
# allow slack for Frappe types
if current_arg_type in SLACK_DICT:
| {"golden_diff": "diff --git a/frappe/utils/typing_validations.py b/frappe/utils/typing_validations.py\n--- a/frappe/utils/typing_validations.py\n+++ b/frappe/utils/typing_validations.py\n@@ -3,6 +3,7 @@\n from inspect import _empty, isclass, signature\n from types import EllipsisType\n from typing import ForwardRef, TypeVar, Union\n+from unittest import mock\n \n from pydantic import ConfigDict\n \n@@ -77,8 +78,8 @@\n \t\"\"\"\n \tValidate the types of the arguments passed to a function with the type annotations\n \tdefined on the function.\n-\n \t\"\"\"\n+\n \tif not (args or kwargs) or not func.__annotations__:\n \t\treturn args, kwargs\n \n@@ -117,6 +118,9 @@\n \t\t\tcontinue\n \t\telif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, \"__args__\", [])):\n \t\t\tcontinue\n+\t\t# ignore unittest.mock objects\n+\t\telif isinstance(current_arg_value, mock.Mock):\n+\t\t\tcontinue\n \n \t\t# allow slack for Frappe types\n \t\tif current_arg_type in SLACK_DICT:\n", "issue": "Typing validations should be ignored for tests\n## Description of the issue\r\nhttps://github.com/frappe/frappe/blob/010aa4636ace30a9df4c09f0ca991169f34274b9/frappe/utils/typing_validations.py#L164\r\n\r\nIf you're writing Frappe tests using the `unittest.mock` module, there might be cases where the argument object is replaced with a `Mock` or `MagicMock` object. This breaks typing validations when running CI tests using the `develop` branch.\r\n\r\nI think a reasonable approach could be to either ignore all validations during tests, and/or allow configuring this behaviour per-test (with the default being \"ignore\").\r\n\r\n## Context\r\n\r\n**Output of `bench version`**\r\n```\r\nfrappe 14.14.2\r\n```\r\n\n", "before_files": [{"content": "from collections.abc import Callable\nfrom functools import lru_cache, wraps\nfrom inspect import _empty, isclass, signature\nfrom types import EllipsisType\nfrom typing import ForwardRef, TypeVar, Union\n\nfrom pydantic import ConfigDict\n\nfrom frappe.exceptions import FrappeTypeError\n\nSLACK_DICT = {\n\tbool: (int, bool, float),\n}\nT = TypeVar(\"T\")\n\n\nFrappePydanticConfig = ConfigDict(arbitrary_types_allowed=True)\n\n\ndef validate_argument_types(func: Callable, apply_condition: Callable = lambda: True):\n\t@wraps(func)\n\tdef wrapper(*args, **kwargs):\n\t\t\"\"\"Validate argument types of whitelisted functions.\n\n\t\t:param args: Function arguments.\n\t\t:param kwargs: Function keyword arguments.\"\"\"\n\n\t\tif apply_condition():\n\t\t\targs, kwargs = transform_parameter_types(func, args, kwargs)\n\n\t\treturn func(*args, **kwargs)\n\n\treturn wrapper\n\n\ndef qualified_name(obj) -> str:\n\t\"\"\"\n\tReturn the qualified name (e.g. 
package.module.Type) for the given object.\n\n\tBuiltins and types from the :mod:typing package get special treatment by having the module\n\tname stripped from the generated name.\n\n\t\"\"\"\n\tdiscovered_type = obj if isclass(obj) else type(obj)\n\tmodule, qualname = discovered_type.__module__, discovered_type.__qualname__\n\n\tif module in {\"typing\", \"types\"}:\n\t\treturn obj\n\telif module in {\"builtins\"}:\n\t\treturn qualname\n\telse:\n\t\treturn f\"{module}.{qualname}\"\n\n\ndef raise_type_error(\n\targ_name: str, arg_type: type, arg_value: object, current_exception: Exception | None = None\n):\n\t\"\"\"\n\tRaise a TypeError with a message that includes the name of the argument, the expected type\n\tand the actual type of the value passed.\n\n\t\"\"\"\n\traise FrappeTypeError(\n\t\tf\"Argument '{arg_name}' should be of type '{qualified_name(arg_type)}' but got \"\n\t\tf\"'{qualified_name(arg_value)}' instead.\"\n\t) from current_exception\n\n\n@lru_cache(maxsize=2048)\ndef TypeAdapter(type_):\n\tfrom pydantic import TypeAdapter as PyTypeAdapter\n\n\treturn PyTypeAdapter(type_, config=FrappePydanticConfig)\n\n\ndef transform_parameter_types(func: Callable, args: tuple, kwargs: dict):\n\t\"\"\"\n\tValidate the types of the arguments passed to a function with the type annotations\n\tdefined on the function.\n\n\t\"\"\"\n\tif not (args or kwargs) or not func.__annotations__:\n\t\treturn args, kwargs\n\n\tfrom pydantic import ValidationError as PyValidationError\n\n\tannotations = func.__annotations__\n\tnew_args, new_kwargs = list(args), kwargs\n\n\t# generate kwargs dict from args\n\targ_names = func.__code__.co_varnames[: func.__code__.co_argcount]\n\n\tif not args:\n\t\tprepared_args = kwargs\n\n\telif kwargs:\n\t\targ_values = args or func.__defaults__ or []\n\t\tprepared_args = dict(zip(arg_names, arg_values, strict=False))\n\t\tprepared_args.update(kwargs)\n\n\telse:\n\t\tprepared_args = dict(zip(arg_names, args, strict=False))\n\n\t# check if type hints dont match the default values\n\tfunc_signature = signature(func)\n\tfunc_params = dict(func_signature.parameters)\n\n\t# check if the argument types are correct\n\tfor current_arg, current_arg_type in annotations.items():\n\t\tif current_arg not in prepared_args:\n\t\t\tcontinue\n\n\t\tcurrent_arg_value = prepared_args[current_arg]\n\n\t\t# if the type is a ForwardRef or str, ignore it\n\t\tif isinstance(current_arg_type, ForwardRef | str):\n\t\t\tcontinue\n\t\telif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, \"__args__\", [])):\n\t\t\tcontinue\n\n\t\t# allow slack for Frappe types\n\t\tif current_arg_type in SLACK_DICT:\n\t\t\tcurrent_arg_type = SLACK_DICT[current_arg_type]\n\n\t\tparam_def = func_params.get(current_arg)\n\n\t\t# add default value's type in acceptable types\n\t\tif param_def.default is not _empty:\n\t\t\tif isinstance(current_arg_type, tuple):\n\t\t\t\tif type(param_def.default) not in current_arg_type:\n\t\t\t\t\tcurrent_arg_type += (type(param_def.default),)\n\t\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t\telif param_def.default != current_arg_type:\n\t\t\t\tcurrent_arg_type = Union[current_arg_type, type(param_def.default)] # noqa: UP007\n\t\telif isinstance(current_arg_type, tuple):\n\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t# validate the type set using pydantic - raise a TypeError if Validation is raised or Ellipsis is returned\n\t\ttry:\n\t\t\tcurrent_arg_value_after = 
TypeAdapter(current_arg_type).validate_python(current_arg_value)\n\t\texcept (TypeError, PyValidationError) as e:\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value, current_exception=e)\n\n\t\tif isinstance(current_arg_value_after, EllipsisType):\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value)\n\n\t\t# update the args and kwargs with possibly casted value\n\t\tif current_arg in kwargs:\n\t\t\tnew_kwargs[current_arg] = current_arg_value_after\n\t\telse:\n\t\t\tnew_args[arg_names.index(current_arg)] = current_arg_value_after\n\n\treturn new_args, new_kwargs\n", "path": "frappe/utils/typing_validations.py"}], "after_files": [{"content": "from collections.abc import Callable\nfrom functools import lru_cache, wraps\nfrom inspect import _empty, isclass, signature\nfrom types import EllipsisType\nfrom typing import ForwardRef, TypeVar, Union\nfrom unittest import mock\n\nfrom pydantic import ConfigDict\n\nfrom frappe.exceptions import FrappeTypeError\n\nSLACK_DICT = {\n\tbool: (int, bool, float),\n}\nT = TypeVar(\"T\")\n\n\nFrappePydanticConfig = ConfigDict(arbitrary_types_allowed=True)\n\n\ndef validate_argument_types(func: Callable, apply_condition: Callable = lambda: True):\n\t@wraps(func)\n\tdef wrapper(*args, **kwargs):\n\t\t\"\"\"Validate argument types of whitelisted functions.\n\n\t\t:param args: Function arguments.\n\t\t:param kwargs: Function keyword arguments.\"\"\"\n\n\t\tif apply_condition():\n\t\t\targs, kwargs = transform_parameter_types(func, args, kwargs)\n\n\t\treturn func(*args, **kwargs)\n\n\treturn wrapper\n\n\ndef qualified_name(obj) -> str:\n\t\"\"\"\n\tReturn the qualified name (e.g. package.module.Type) for the given object.\n\n\tBuiltins and types from the :mod:typing package get special treatment by having the module\n\tname stripped from the generated name.\n\n\t\"\"\"\n\tdiscovered_type = obj if isclass(obj) else type(obj)\n\tmodule, qualname = discovered_type.__module__, discovered_type.__qualname__\n\n\tif module in {\"typing\", \"types\"}:\n\t\treturn obj\n\telif module in {\"builtins\"}:\n\t\treturn qualname\n\telse:\n\t\treturn f\"{module}.{qualname}\"\n\n\ndef raise_type_error(\n\targ_name: str, arg_type: type, arg_value: object, current_exception: Exception | None = None\n):\n\t\"\"\"\n\tRaise a TypeError with a message that includes the name of the argument, the expected type\n\tand the actual type of the value passed.\n\n\t\"\"\"\n\traise FrappeTypeError(\n\t\tf\"Argument '{arg_name}' should be of type '{qualified_name(arg_type)}' but got \"\n\t\tf\"'{qualified_name(arg_value)}' instead.\"\n\t) from current_exception\n\n\n@lru_cache(maxsize=2048)\ndef TypeAdapter(type_):\n\tfrom pydantic import TypeAdapter as PyTypeAdapter\n\n\treturn PyTypeAdapter(type_, config=FrappePydanticConfig)\n\n\ndef transform_parameter_types(func: Callable, args: tuple, kwargs: dict):\n\t\"\"\"\n\tValidate the types of the arguments passed to a function with the type annotations\n\tdefined on the function.\n\t\"\"\"\n\n\tif not (args or kwargs) or not func.__annotations__:\n\t\treturn args, kwargs\n\n\tfrom pydantic import ValidationError as PyValidationError\n\n\tannotations = func.__annotations__\n\tnew_args, new_kwargs = list(args), kwargs\n\n\t# generate kwargs dict from args\n\targ_names = func.__code__.co_varnames[: func.__code__.co_argcount]\n\n\tif not args:\n\t\tprepared_args = kwargs\n\n\telif kwargs:\n\t\targ_values = args or func.__defaults__ or []\n\t\tprepared_args = dict(zip(arg_names, arg_values, 
strict=False))\n\t\tprepared_args.update(kwargs)\n\n\telse:\n\t\tprepared_args = dict(zip(arg_names, args, strict=False))\n\n\t# check if type hints dont match the default values\n\tfunc_signature = signature(func)\n\tfunc_params = dict(func_signature.parameters)\n\n\t# check if the argument types are correct\n\tfor current_arg, current_arg_type in annotations.items():\n\t\tif current_arg not in prepared_args:\n\t\t\tcontinue\n\n\t\tcurrent_arg_value = prepared_args[current_arg]\n\n\t\t# if the type is a ForwardRef or str, ignore it\n\t\tif isinstance(current_arg_type, ForwardRef | str):\n\t\t\tcontinue\n\t\telif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, \"__args__\", [])):\n\t\t\tcontinue\n\t\t# ignore unittest.mock objects\n\t\telif isinstance(current_arg_value, mock.Mock):\n\t\t\tcontinue\n\n\t\t# allow slack for Frappe types\n\t\tif current_arg_type in SLACK_DICT:\n\t\t\tcurrent_arg_type = SLACK_DICT[current_arg_type]\n\n\t\tparam_def = func_params.get(current_arg)\n\n\t\t# add default value's type in acceptable types\n\t\tif param_def.default is not _empty:\n\t\t\tif isinstance(current_arg_type, tuple):\n\t\t\t\tif type(param_def.default) not in current_arg_type:\n\t\t\t\t\tcurrent_arg_type += (type(param_def.default),)\n\t\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t\telif param_def.default != current_arg_type:\n\t\t\t\tcurrent_arg_type = Union[current_arg_type, type(param_def.default)] # noqa: UP007\n\t\telif isinstance(current_arg_type, tuple):\n\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t# validate the type set using pydantic - raise a TypeError if Validation is raised or Ellipsis is returned\n\t\ttry:\n\t\t\tcurrent_arg_value_after = TypeAdapter(current_arg_type).validate_python(current_arg_value)\n\t\texcept (TypeError, PyValidationError) as e:\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value, current_exception=e)\n\n\t\tif isinstance(current_arg_value_after, EllipsisType):\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value)\n\n\t\t# update the args and kwargs with possibly casted value\n\t\tif current_arg in kwargs:\n\t\t\tnew_kwargs[current_arg] = current_arg_value_after\n\t\telse:\n\t\t\tnew_args[arg_names.index(current_arg)] = current_arg_value_after\n\n\treturn new_args, new_kwargs\n", "path": "frappe/utils/typing_validations.py"}]} | 2,006 | 251 |
gh_patches_debug_16021 | rasdani/github-patches | git_diff | wagtail__wagtail-8270 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ThumbnailMixin does not display the value defined under thumb_col_header_text in the header
<!--
Found a bug? Please fill out the sections below. 👍
-->
### Issue Summary
When adding ThumbnailMixin to a ModelAdmin and giving it the `thumb_col_header_text` attribute, the thumbnail column's list header should display that text, but it always uses the default value 'image'.


### Steps to Reproduce
1. (for example) Start a new project with `wagtail start myproject`
2. in models.py add a new model (non-page) with a ForeignKey to wagtailimages.Image
3. add model admin definition in wagtail_hooks.py
4. add ThumbnailMixin to model admin super classes
5. add some value to thumb_col_header_text (a minimal sketch of steps 2-6 follows this list)
6. register new model admin
7. load app
8. add new instance of your new model with an image
9. in the list header for your image it will say 'image', not what you defined in thumb_col_header_text
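A compressed sketch of steps 2-6; the model, field, and admin names are invented for illustration:
```python
# models.py -- hypothetical model with a ForeignKey to wagtailimages.Image
from django.db import models


class Author(models.Model):
    name = models.CharField(max_length=255)
    portrait = models.ForeignKey(
        "wagtailimages.Image", null=True, blank=True,
        on_delete=models.SET_NULL, related_name="+",
    )


# wagtail_hooks.py -- ModelAdmin using ThumbnailMixin with a custom header text
from wagtail.contrib.modeladmin.mixins import ThumbnailMixin
from wagtail.contrib.modeladmin.options import ModelAdmin, modeladmin_register


class AuthorAdmin(ThumbnailMixin, ModelAdmin):
    model = Author
    thumb_image_field_name = "portrait"
    thumb_col_header_text = "portrait"       # expected column header
    list_display = ("name", "admin_thumb")   # header still reads "image" (the bug)


modeladmin_register(AuthorAdmin)
```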
Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes)
* I already know why this is happening and will submit a pull request shortly
### Technical details
* Python version: 3.9.7
* Django version: 4.0.3
* Wagtail version: 2.16.1
* Browser version: Chrome Version 100.0.4896.60 (Official Build) (x86_64)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/contrib/modeladmin/mixins.py`
Content:
```
1 from django.conf import settings
2 from django.core.exceptions import ImproperlyConfigured
3 from django.forms.utils import flatatt
4 from django.utils.safestring import mark_safe
5 from django.utils.translation import gettext_lazy as _
6
7
8 class ThumbnailMixin:
9 """
10 Mixin class to help display thumbnail images in ModelAdmin listing results.
11 `thumb_image_field_name` must be overridden to name a ForeignKey field on
12 your model, linking to `wagtailimages.Image`.
13 """
14
15 thumb_image_field_name = "image"
16 thumb_image_filter_spec = "fill-100x100"
17 thumb_image_width = 50
18 thumb_classname = "admin-thumb"
19 thumb_col_header_text = _("image")
20 thumb_default = None
21
22 def __init__(self, *args, **kwargs):
23 if "wagtail.images" not in settings.INSTALLED_APPS:
24 raise ImproperlyConfigured(
25 "The `wagtail.images` app must be installed in order "
26 "to use the `ThumbnailMixin` class."
27 )
28 super().__init__(*args, **kwargs)
29
30 def admin_thumb(self, obj):
31 try:
32 image = getattr(obj, self.thumb_image_field_name, None)
33 except AttributeError:
34 raise ImproperlyConfigured(
35 "The `thumb_image_field_name` attribute on your `%s` class "
36 "must name a field on your model." % self.__class__.__name__
37 )
38
39 img_attrs = {
40 "src": self.thumb_default,
41 "width": self.thumb_image_width,
42 "class": self.thumb_classname,
43 }
44 if not image:
45 if self.thumb_default:
46 return mark_safe("<img{}>".format(flatatt(img_attrs)))
47 return ""
48
49 # try to get a rendition of the image to use
50 from wagtail.images.shortcuts import get_rendition_or_not_found
51
52 spec = self.thumb_image_filter_spec
53 rendition = get_rendition_or_not_found(image, spec)
54 img_attrs.update({"src": rendition.url})
55 return mark_safe("<img{}>".format(flatatt(img_attrs)))
56
57 admin_thumb.short_description = thumb_col_header_text
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/contrib/modeladmin/mixins.py b/wagtail/contrib/modeladmin/mixins.py
--- a/wagtail/contrib/modeladmin/mixins.py
+++ b/wagtail/contrib/modeladmin/mixins.py
@@ -25,6 +25,7 @@
"The `wagtail.images` app must be installed in order "
"to use the `ThumbnailMixin` class."
)
+ self.__class__.admin_thumb.short_description = self.thumb_col_header_text
super().__init__(*args, **kwargs)
def admin_thumb(self, obj):
@@ -53,5 +54,3 @@
rendition = get_rendition_or_not_found(image, spec)
img_attrs.update({"src": rendition.url})
return mark_safe("<img{}>".format(flatatt(img_attrs)))
-
- admin_thumb.short_description = thumb_col_header_text
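
The root cause, in a nutshell: the class-body statement `admin_thumb.short_description = thumb_col_header_text` is evaluated once, when `ThumbnailMixin` itself is defined, so a subclass's override of `thumb_col_header_text` is never picked up; the patch therefore re-binds `short_description` from the instance attribute in `__init__`. A stripped-down, framework-free sketch of the same pitfall (class and attribute names are made up for illustration):

```python
class Base:
    header = "image"

    def thumb(self, obj):
        return obj

    # Evaluated at class-definition time, using Base.header only.
    thumb.short_description = header


class Custom(Base):
    header = "preview"  # too late: short_description was already fixed above


print(Base.thumb.short_description)    # image
print(Custom.thumb.short_description)  # image  <- the behaviour reported in the issue
```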
| {"golden_diff": "diff --git a/wagtail/contrib/modeladmin/mixins.py b/wagtail/contrib/modeladmin/mixins.py\n--- a/wagtail/contrib/modeladmin/mixins.py\n+++ b/wagtail/contrib/modeladmin/mixins.py\n@@ -25,6 +25,7 @@\n \"The `wagtail.images` app must be installed in order \"\n \"to use the `ThumbnailMixin` class.\"\n )\n+ self.__class__.admin_thumb.short_description = self.thumb_col_header_text\n super().__init__(*args, **kwargs)\n \n def admin_thumb(self, obj):\n@@ -53,5 +54,3 @@\n rendition = get_rendition_or_not_found(image, spec)\n img_attrs.update({\"src\": rendition.url})\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n-\n- admin_thumb.short_description = thumb_col_header_text\n", "issue": "ThumbnailMixin does not display in header the value defined under thumb_col_header_text \n<!--\r\nFound a bug? Please fill out the sections below. \ud83d\udc4d\r\n-->\r\n\r\n### Issue Summary\r\n\r\nWhen adding ThumbnailMixin to a ModelAdmin, and giving it the `thumb_col_header_text` attribute, should display that on the list header for the thumbnail. but it always uses the default defined 'image' \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n### Steps to Reproduce\r\n\r\n1. (for example) Start a new project with `wagtail start myproject`\r\n2. in models.py add a new model (non page) with a forignkey to wagtailimages.Image \r\n3. add model admin definition in wagtail_hooks.py\r\n4. add ThumbnailMixin to model admin super classes\r\n5. add some value to thumb_col_header_text\r\n6. register new model admin\r\n7. load app\r\n8. add new instance of your new model with an image\r\n9. in list header for your image it will say 'image' not what you defined in thumb_col_header_text\r\n\r\nAny other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?\r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes)\r\n* i already know why this is happening and will submit a pull request shortly\r\n\r\n\r\n### Technical details\r\n\r\n* Python version: 3.9.7\r\n* Django version: 4.0.3\r\n* Wagtail version: 2.16.1\r\n* Browser version: Chrome Version 100.0.4896.60 (Official Build) (x86_64)\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.forms.utils import flatatt\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass ThumbnailMixin:\n \"\"\"\n Mixin class to help display thumbnail images in ModelAdmin listing results.\n `thumb_image_field_name` must be overridden to name a ForeignKey field on\n your model, linking to `wagtailimages.Image`.\n \"\"\"\n\n thumb_image_field_name = \"image\"\n thumb_image_filter_spec = \"fill-100x100\"\n thumb_image_width = 50\n thumb_classname = \"admin-thumb\"\n thumb_col_header_text = _(\"image\")\n thumb_default = None\n\n def __init__(self, *args, **kwargs):\n if \"wagtail.images\" not in settings.INSTALLED_APPS:\n raise ImproperlyConfigured(\n \"The `wagtail.images` app must be installed in order \"\n \"to use the `ThumbnailMixin` class.\"\n )\n super().__init__(*args, **kwargs)\n\n def admin_thumb(self, obj):\n try:\n image = getattr(obj, self.thumb_image_field_name, None)\n except AttributeError:\n raise ImproperlyConfigured(\n \"The `thumb_image_field_name` attribute on your `%s` class \"\n \"must name a field on your model.\" % self.__class__.__name__\n )\n\n img_attrs = {\n \"src\": self.thumb_default,\n \"width\": 
self.thumb_image_width,\n \"class\": self.thumb_classname,\n }\n if not image:\n if self.thumb_default:\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n return \"\"\n\n # try to get a rendition of the image to use\n from wagtail.images.shortcuts import get_rendition_or_not_found\n\n spec = self.thumb_image_filter_spec\n rendition = get_rendition_or_not_found(image, spec)\n img_attrs.update({\"src\": rendition.url})\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n\n admin_thumb.short_description = thumb_col_header_text\n", "path": "wagtail/contrib/modeladmin/mixins.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.forms.utils import flatatt\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass ThumbnailMixin:\n \"\"\"\n Mixin class to help display thumbnail images in ModelAdmin listing results.\n `thumb_image_field_name` must be overridden to name a ForeignKey field on\n your model, linking to `wagtailimages.Image`.\n \"\"\"\n\n thumb_image_field_name = \"image\"\n thumb_image_filter_spec = \"fill-100x100\"\n thumb_image_width = 50\n thumb_classname = \"admin-thumb\"\n thumb_col_header_text = _(\"image\")\n thumb_default = None\n\n def __init__(self, *args, **kwargs):\n if \"wagtail.images\" not in settings.INSTALLED_APPS:\n raise ImproperlyConfigured(\n \"The `wagtail.images` app must be installed in order \"\n \"to use the `ThumbnailMixin` class.\"\n )\n self.__class__.admin_thumb.short_description = self.thumb_col_header_text\n super().__init__(*args, **kwargs)\n\n def admin_thumb(self, obj):\n try:\n image = getattr(obj, self.thumb_image_field_name, None)\n except AttributeError:\n raise ImproperlyConfigured(\n \"The `thumb_image_field_name` attribute on your `%s` class \"\n \"must name a field on your model.\" % self.__class__.__name__\n )\n\n img_attrs = {\n \"src\": self.thumb_default,\n \"width\": self.thumb_image_width,\n \"class\": self.thumb_classname,\n }\n if not image:\n if self.thumb_default:\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n return \"\"\n\n # try to get a rendition of the image to use\n from wagtail.images.shortcuts import get_rendition_or_not_found\n\n spec = self.thumb_image_filter_spec\n rendition = get_rendition_or_not_found(image, spec)\n img_attrs.update({\"src\": rendition.url})\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n", "path": "wagtail/contrib/modeladmin/mixins.py"}]} | 1,307 | 197 |
gh_patches_debug_1095 | rasdani/github-patches | git_diff | python-poetry__poetry-277 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Discrepancy regarding license between doc and poetry init
<!--
Hi there! Thank you for discovering and submitting an issue.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
- **OS version and name**: Manjaro Linux
- **Poetry version**: 0.11.1
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**:
## Issue
<!-- Now feel free to write your issue, but please be descriptive! Thanks again 🙌 ❤️ -->
During the `license` prompt of `poetry init`, a valid license is required as input. According to the documentation, a license is highly recommended, but not actually required. This discrepancy should be removed by updating either the documentation or the code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/console/commands/init.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 import re
5
6 from typing import List
7 from typing import Tuple
8
9 from .command import Command
10 from .venv_command import VenvCommand
11
12
13 class InitCommand(Command):
14 """
15 Creates a basic <comment>pyproject.toml</> file in the current directory.
16
17 init
18 {--name= : Name of the package}
19 {--description= : Description of the package}
20 {--author= : Author name of the package}
21 {--dependency=* : Package to require with an optional version constraint,
22 e.g. requests:^2.10.0 or requests=2.11.1}
23 {--dev-dependency=* : Package to require for development with an optional version constraint,
24 e.g. requests:^2.10.0 or requests=2.11.1}
25 {--l|license= : License of the package}
26 """
27
28 help = """\
29 The <info>init</info> command creates a basic <comment>pyproject.toml</> file in the current directory.
30 """
31
32 def __init__(self):
33 super(InitCommand, self).__init__()
34
35 self._pool = None
36
37 def handle(self):
38 from poetry.layouts import layout
39 from poetry.utils._compat import Path
40 from poetry.vcs.git import GitConfig
41
42 if (Path.cwd() / "pyproject.toml").exists():
43 self.error("A pyproject.toml file already exists.")
44 return 1
45
46 vcs_config = GitConfig()
47
48 self.line(
49 [
50 "",
51 "This command will guide you through creating your <info>poetry.toml</> config.",
52 "",
53 ]
54 )
55
56 name = self.option("name")
57 if not name:
58 name = Path.cwd().name.lower()
59
60 question = self.create_question(
61 "Package name [<comment>{}</comment>]: ".format(name), default=name
62 )
63 name = self.ask(question)
64
65 version = "0.1.0"
66 question = self.create_question(
67 "Version [<comment>{}</comment>]: ".format(version), default=version
68 )
69 version = self.ask(question)
70
71 description = self.option("description") or ""
72 question = self.create_question(
73 "Description [<comment>{}</comment>]: ".format(description),
74 default=description,
75 )
76 description = self.ask(question)
77
78 author = self.option("author")
79 if not author and vcs_config and vcs_config.get("user.name"):
80 author = vcs_config["user.name"]
81 author_email = vcs_config.get("user.email")
82 if author_email:
83 author += " <{}>".format(author_email)
84
85 question = self.create_question(
86 "Author [<comment>{}</comment>, n to skip]: ".format(author), default=author
87 )
88 question.validator = lambda v: self._validate_author(v, author)
89 author = self.ask(question)
90
91 if not author:
92 authors = []
93 else:
94 authors = [author]
95
96 license = self.option("license") or ""
97
98 question = self.create_question(
99 "License [<comment>{}</comment>]: ".format(license), default=license
100 )
101 question.validator = self._validate_license
102 license = self.ask(question)
103
104 question = self.create_question("Compatible Python versions [*]: ", default="*")
105 python = self.ask(question)
106
107 self.line("")
108
109 requirements = {}
110
111 question = "Would you like to define your dependencies" " (require) interactively?"
112 if self.confirm(question, True):
113 requirements = self._format_requirements(
114 self._determine_requirements(self.option("dependency"))
115 )
116
117 dev_requirements = {}
118
119 question = "Would you like to define your dev dependencies" " (require-dev) interactively"
120 if self.confirm(question, True):
121 dev_requirements = self._format_requirements(
122 self._determine_requirements(self.option("dev-dependency"))
123 )
124
125 layout_ = layout("standard")(
126 name,
127 version,
128 description=description,
129 author=authors[0] if authors else None,
130 license=license,
131 python=python,
132 dependencies=requirements,
133 dev_dependencies=dev_requirements,
134 )
135
136 content = layout_.generate_poetry_content()
137 if self.input.is_interactive():
138 self.line("<info>Generated file</info>")
139 self.line(["", content, ""])
140
141 if not self.confirm("Do you confirm generation?", True):
142 self.line("<error>Command aborted</error>")
143
144 return 1
145
146 with (Path.cwd() / "pyproject.toml").open("w") as f:
147 f.write(content)
148
149 def _determine_requirements(
150 self, requires, allow_prereleases=False # type: List[str] # type: bool
151 ): # type: (...) -> List[str]
152 if not requires:
153 requires = []
154
155 package = self.ask("Search for package:")
156 while package is not None:
157 matches = self._get_pool().search(package)
158
159 if not matches:
160 self.line("<error>Unable to find package</error>")
161 package = False
162 else:
163 choices = []
164
165 for found_package in matches:
166 choices.append(found_package.pretty_name)
167
168 self.line(
169 "Found <info>{}</info> packages matching <info>{}</info>".format(
170 len(matches), package
171 )
172 )
173
174 package = self.choice(
175 "\nEnter package # to add, or the complete package name if it is not listed",
176 choices,
177 attempts=3,
178 )
179
180 # no constraint yet, determine the best version automatically
181 if package is not False and " " not in package:
182 question = self.create_question(
183 "Enter the version constraint to require "
184 "(or leave blank to use the latest version):"
185 )
186 question.attempts = 3
187 question.validator = lambda x: (x or "").strip() or False
188
189 constraint = self.ask(question)
190
191 if constraint is False:
192 _, constraint = self._find_best_version_for_package(package)
193
194 self.line(
195 "Using version <info>{}</info> for <info>{}</info>".format(
196 constraint, package
197 )
198 )
199
200 package += " {}".format(constraint)
201
202 if package is not False:
203 requires.append(package)
204
205 package = self.ask("\nSearch for a package:")
206
207 return requires
208
209 requires = self._parse_name_version_pairs(requires)
210 result = []
211 for requirement in requires:
212 if "version" not in requirement:
213 # determine the best version automatically
214 name, version = self._find_best_version_for_package(
215 requirement["name"], allow_prereleases=allow_prereleases
216 )
217 requirement["version"] = version
218 requirement["name"] = name
219
220 self.line(
221 "Using version <info>{}</> for <info>{}</>".format(version, name)
222 )
223 else:
224 # check that the specified version/constraint exists
225 # before we proceed
226 name, _ = self._find_best_version_for_package(
227 requirement["name"],
228 requirement["version"],
229 allow_prereleases=allow_prereleases,
230 )
231
232 requirement["name"] = name
233
234 result.append("{} {}".format(requirement["name"], requirement["version"]))
235
236 return result
237
238 def _find_best_version_for_package(
239 self, name, required_version=None, allow_prereleases=False
240 ): # type: (...) -> Tuple[str, str]
241 from poetry.version.version_selector import VersionSelector
242
243 selector = VersionSelector(self._get_pool())
244 package = selector.find_best_candidate(
245 name, required_version, allow_prereleases=allow_prereleases
246 )
247
248 if not package:
249 # TODO: find similar
250 raise ValueError(
251 "Could not find a matching version of package {}".format(name)
252 )
253
254 return (package.pretty_name, selector.find_recommended_require_version(package))
255
256 def _parse_name_version_pairs(self, pairs): # type: (list) -> list
257 result = []
258
259 for i in range(len(pairs)):
260 pair = re.sub("^([^=: ]+)[=: ](.*)$", "\\1 \\2", pairs[i].strip())
261 pair = pair.strip()
262
263 if " " in pair:
264 name, version = pair.split(" ", 2)
265 result.append({"name": name, "version": version})
266 else:
267 result.append({"name": pair})
268
269 return result
270
271 def _format_requirements(self, requirements): # type: (List[str]) -> dict
272 requires = {}
273 requirements = self._parse_name_version_pairs(requirements)
274 for requirement in requirements:
275 requires[requirement["name"]] = requirement["version"]
276
277 return requires
278
279 def _validate_author(self, author, default):
280 from poetry.packages.package import AUTHOR_REGEX
281
282 author = author or default
283
284 if author in ["n", "no"]:
285 return
286
287 m = AUTHOR_REGEX.match(author)
288 if not m:
289 raise ValueError(
290 "Invalid author string. Must be in the format: "
291 "John Smith <[email protected]>"
292 )
293
294 return author
295
296 def _validate_license(self, license):
297 from poetry.spdx import license_by_id
298
299 license_by_id(license)
300
301 return license
302
303 def _get_pool(self):
304 from poetry.repositories import Pool
305 from poetry.repositories.pypi_repository import PyPiRepository
306
307 if isinstance(self, VenvCommand):
308 return self.poetry.pool
309
310 if self._pool is None:
311 self._pool = Pool()
312 self._pool.add_repository(PyPiRepository())
313
314 return self._pool
315
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/console/commands/init.py b/poetry/console/commands/init.py
--- a/poetry/console/commands/init.py
+++ b/poetry/console/commands/init.py
@@ -296,7 +296,8 @@
def _validate_license(self, license):
from poetry.spdx import license_by_id
- license_by_id(license)
+ if license:
+ license_by_id(license)
return license
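
A self-contained sketch of the patched validator's behaviour; the real `license_by_id` lives in `poetry.spdx`, so a stand-in is used here purely for illustration:

```python
def license_by_id(identifier):
    # Stand-in for poetry.spdx.license_by_id: accept a few known SPDX ids.
    if identifier not in {"MIT", "Apache-2.0", "GPL-3.0-or-later"}:
        raise ValueError("Invalid license id: {}".format(identifier))


def validate_license(license):
    # Empty input is now allowed -- a license is recommended, not required.
    if license:
        license_by_id(license)
    return license


print(repr(validate_license("")))     # '' is accepted without a lookup
print(repr(validate_license("MIT")))  # 'MIT' is validated against known ids
```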
| {"golden_diff": "diff --git a/poetry/console/commands/init.py b/poetry/console/commands/init.py\n--- a/poetry/console/commands/init.py\n+++ b/poetry/console/commands/init.py\n@@ -296,7 +296,8 @@\n def _validate_license(self, license):\n from poetry.spdx import license_by_id\n \n- license_by_id(license)\n+ if license:\n+ license_by_id(license)\n \n return license\n", "issue": "Discrepancy regarding license between doc and poetry init\n<!--\r\n Hi there! Thank you for discovering and submitting an issue.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n- **OS version and name**: Manjaro Linux\r\n- **Poetry version**: 0.11.1\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: \r\n\r\n## Issue\r\n<!-- Now feel free to write your issue, but please be descriptive! Thanks again \ud83d\ude4c \u2764\ufe0f -->\r\nDuring the `license` prompt of `poetry init`, a valid license is required as input. Acording to the documentation, a license is highly recommended, but not actually required. This descrepancy should be removed by updating either the documentation or the code.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom typing import List\nfrom typing import Tuple\n\nfrom .command import Command\nfrom .venv_command import VenvCommand\n\n\nclass InitCommand(Command):\n \"\"\"\n Creates a basic <comment>pyproject.toml</> file in the current directory.\n\n init\n {--name= : Name of the package}\n {--description= : Description of the package}\n {--author= : Author name of the package}\n {--dependency=* : Package to require with an optional version constraint,\n e.g. requests:^2.10.0 or requests=2.11.1}\n {--dev-dependency=* : Package to require for development with an optional version constraint,\n e.g. 
requests:^2.10.0 or requests=2.11.1}\n {--l|license= : License of the package}\n \"\"\"\n\n help = \"\"\"\\\nThe <info>init</info> command creates a basic <comment>pyproject.toml</> file in the current directory.\n\"\"\"\n\n def __init__(self):\n super(InitCommand, self).__init__()\n\n self._pool = None\n\n def handle(self):\n from poetry.layouts import layout\n from poetry.utils._compat import Path\n from poetry.vcs.git import GitConfig\n\n if (Path.cwd() / \"pyproject.toml\").exists():\n self.error(\"A pyproject.toml file already exists.\")\n return 1\n\n vcs_config = GitConfig()\n\n self.line(\n [\n \"\",\n \"This command will guide you through creating your <info>poetry.toml</> config.\",\n \"\",\n ]\n )\n\n name = self.option(\"name\")\n if not name:\n name = Path.cwd().name.lower()\n\n question = self.create_question(\n \"Package name [<comment>{}</comment>]: \".format(name), default=name\n )\n name = self.ask(question)\n\n version = \"0.1.0\"\n question = self.create_question(\n \"Version [<comment>{}</comment>]: \".format(version), default=version\n )\n version = self.ask(question)\n\n description = self.option(\"description\") or \"\"\n question = self.create_question(\n \"Description [<comment>{}</comment>]: \".format(description),\n default=description,\n )\n description = self.ask(question)\n\n author = self.option(\"author\")\n if not author and vcs_config and vcs_config.get(\"user.name\"):\n author = vcs_config[\"user.name\"]\n author_email = vcs_config.get(\"user.email\")\n if author_email:\n author += \" <{}>\".format(author_email)\n\n question = self.create_question(\n \"Author [<comment>{}</comment>, n to skip]: \".format(author), default=author\n )\n question.validator = lambda v: self._validate_author(v, author)\n author = self.ask(question)\n\n if not author:\n authors = []\n else:\n authors = [author]\n\n license = self.option(\"license\") or \"\"\n\n question = self.create_question(\n \"License [<comment>{}</comment>]: \".format(license), default=license\n )\n question.validator = self._validate_license\n license = self.ask(question)\n\n question = self.create_question(\"Compatible Python versions [*]: \", default=\"*\")\n python = self.ask(question)\n\n self.line(\"\")\n\n requirements = {}\n\n question = \"Would you like to define your dependencies\" \" (require) interactively?\"\n if self.confirm(question, True):\n requirements = self._format_requirements(\n self._determine_requirements(self.option(\"dependency\"))\n )\n\n dev_requirements = {}\n\n question = \"Would you like to define your dev dependencies\" \" (require-dev) interactively\"\n if self.confirm(question, True):\n dev_requirements = self._format_requirements(\n self._determine_requirements(self.option(\"dev-dependency\"))\n )\n\n layout_ = layout(\"standard\")(\n name,\n version,\n description=description,\n author=authors[0] if authors else None,\n license=license,\n python=python,\n dependencies=requirements,\n dev_dependencies=dev_requirements,\n )\n\n content = layout_.generate_poetry_content()\n if self.input.is_interactive():\n self.line(\"<info>Generated file</info>\")\n self.line([\"\", content, \"\"])\n\n if not self.confirm(\"Do you confirm generation?\", True):\n self.line(\"<error>Command aborted</error>\")\n\n return 1\n\n with (Path.cwd() / \"pyproject.toml\").open(\"w\") as f:\n f.write(content)\n\n def _determine_requirements(\n self, requires, allow_prereleases=False # type: List[str] # type: bool\n ): # type: (...) 
-> List[str]\n if not requires:\n requires = []\n\n package = self.ask(\"Search for package:\")\n while package is not None:\n matches = self._get_pool().search(package)\n\n if not matches:\n self.line(\"<error>Unable to find package</error>\")\n package = False\n else:\n choices = []\n\n for found_package in matches:\n choices.append(found_package.pretty_name)\n\n self.line(\n \"Found <info>{}</info> packages matching <info>{}</info>\".format(\n len(matches), package\n )\n )\n\n package = self.choice(\n \"\\nEnter package # to add, or the complete package name if it is not listed\",\n choices,\n attempts=3,\n )\n\n # no constraint yet, determine the best version automatically\n if package is not False and \" \" not in package:\n question = self.create_question(\n \"Enter the version constraint to require \"\n \"(or leave blank to use the latest version):\"\n )\n question.attempts = 3\n question.validator = lambda x: (x or \"\").strip() or False\n\n constraint = self.ask(question)\n\n if constraint is False:\n _, constraint = self._find_best_version_for_package(package)\n\n self.line(\n \"Using version <info>{}</info> for <info>{}</info>\".format(\n constraint, package\n )\n )\n\n package += \" {}\".format(constraint)\n\n if package is not False:\n requires.append(package)\n\n package = self.ask(\"\\nSearch for a package:\")\n\n return requires\n\n requires = self._parse_name_version_pairs(requires)\n result = []\n for requirement in requires:\n if \"version\" not in requirement:\n # determine the best version automatically\n name, version = self._find_best_version_for_package(\n requirement[\"name\"], allow_prereleases=allow_prereleases\n )\n requirement[\"version\"] = version\n requirement[\"name\"] = name\n\n self.line(\n \"Using version <info>{}</> for <info>{}</>\".format(version, name)\n )\n else:\n # check that the specified version/constraint exists\n # before we proceed\n name, _ = self._find_best_version_for_package(\n requirement[\"name\"],\n requirement[\"version\"],\n allow_prereleases=allow_prereleases,\n )\n\n requirement[\"name\"] = name\n\n result.append(\"{} {}\".format(requirement[\"name\"], requirement[\"version\"]))\n\n return result\n\n def _find_best_version_for_package(\n self, name, required_version=None, allow_prereleases=False\n ): # type: (...) 
-> Tuple[str, str]\n from poetry.version.version_selector import VersionSelector\n\n selector = VersionSelector(self._get_pool())\n package = selector.find_best_candidate(\n name, required_version, allow_prereleases=allow_prereleases\n )\n\n if not package:\n # TODO: find similar\n raise ValueError(\n \"Could not find a matching version of package {}\".format(name)\n )\n\n return (package.pretty_name, selector.find_recommended_require_version(package))\n\n def _parse_name_version_pairs(self, pairs): # type: (list) -> list\n result = []\n\n for i in range(len(pairs)):\n pair = re.sub(\"^([^=: ]+)[=: ](.*)$\", \"\\\\1 \\\\2\", pairs[i].strip())\n pair = pair.strip()\n\n if \" \" in pair:\n name, version = pair.split(\" \", 2)\n result.append({\"name\": name, \"version\": version})\n else:\n result.append({\"name\": pair})\n\n return result\n\n def _format_requirements(self, requirements): # type: (List[str]) -> dict\n requires = {}\n requirements = self._parse_name_version_pairs(requirements)\n for requirement in requirements:\n requires[requirement[\"name\"]] = requirement[\"version\"]\n\n return requires\n\n def _validate_author(self, author, default):\n from poetry.packages.package import AUTHOR_REGEX\n\n author = author or default\n\n if author in [\"n\", \"no\"]:\n return\n\n m = AUTHOR_REGEX.match(author)\n if not m:\n raise ValueError(\n \"Invalid author string. Must be in the format: \"\n \"John Smith <[email protected]>\"\n )\n\n return author\n\n def _validate_license(self, license):\n from poetry.spdx import license_by_id\n\n license_by_id(license)\n\n return license\n\n def _get_pool(self):\n from poetry.repositories import Pool\n from poetry.repositories.pypi_repository import PyPiRepository\n\n if isinstance(self, VenvCommand):\n return self.poetry.pool\n\n if self._pool is None:\n self._pool = Pool()\n self._pool.add_repository(PyPiRepository())\n\n return self._pool\n", "path": "poetry/console/commands/init.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom typing import List\nfrom typing import Tuple\n\nfrom .command import Command\nfrom .venv_command import VenvCommand\n\n\nclass InitCommand(Command):\n \"\"\"\n Creates a basic <comment>pyproject.toml</> file in the current directory.\n\n init\n {--name= : Name of the package}\n {--description= : Description of the package}\n {--author= : Author name of the package}\n {--dependency=* : Package to require with an optional version constraint,\n e.g. requests:^2.10.0 or requests=2.11.1}\n {--dev-dependency=* : Package to require for development with an optional version constraint,\n e.g. 
requests:^2.10.0 or requests=2.11.1}\n {--l|license= : License of the package}\n \"\"\"\n\n help = \"\"\"\\\nThe <info>init</info> command creates a basic <comment>pyproject.toml</> file in the current directory.\n\"\"\"\n\n def __init__(self):\n super(InitCommand, self).__init__()\n\n self._pool = None\n\n def handle(self):\n from poetry.layouts import layout\n from poetry.utils._compat import Path\n from poetry.vcs.git import GitConfig\n\n if (Path.cwd() / \"pyproject.toml\").exists():\n self.error(\"A pyproject.toml file already exists.\")\n return 1\n\n vcs_config = GitConfig()\n\n self.line(\n [\n \"\",\n \"This command will guide you through creating your <info>poetry.toml</> config.\",\n \"\",\n ]\n )\n\n name = self.option(\"name\")\n if not name:\n name = Path.cwd().name.lower()\n\n question = self.create_question(\n \"Package name [<comment>{}</comment>]: \".format(name), default=name\n )\n name = self.ask(question)\n\n version = \"0.1.0\"\n question = self.create_question(\n \"Version [<comment>{}</comment>]: \".format(version), default=version\n )\n version = self.ask(question)\n\n description = self.option(\"description\") or \"\"\n question = self.create_question(\n \"Description [<comment>{}</comment>]: \".format(description),\n default=description,\n )\n description = self.ask(question)\n\n author = self.option(\"author\")\n if not author and vcs_config and vcs_config.get(\"user.name\"):\n author = vcs_config[\"user.name\"]\n author_email = vcs_config.get(\"user.email\")\n if author_email:\n author += \" <{}>\".format(author_email)\n\n question = self.create_question(\n \"Author [<comment>{}</comment>, n to skip]: \".format(author), default=author\n )\n question.validator = lambda v: self._validate_author(v, author)\n author = self.ask(question)\n\n if not author:\n authors = []\n else:\n authors = [author]\n\n license = self.option(\"license\") or \"\"\n\n question = self.create_question(\n \"License [<comment>{}</comment>]: \".format(license), default=license\n )\n question.validator = self._validate_license\n license = self.ask(question)\n\n question = self.create_question(\"Compatible Python versions [*]: \", default=\"*\")\n python = self.ask(question)\n\n self.line(\"\")\n\n requirements = {}\n\n question = \"Would you like to define your dependencies\" \" (require) interactively?\"\n if self.confirm(question, True):\n requirements = self._format_requirements(\n self._determine_requirements(self.option(\"dependency\"))\n )\n\n dev_requirements = {}\n\n question = \"Would you like to define your dev dependencies\" \" (require-dev) interactively\"\n if self.confirm(question, True):\n dev_requirements = self._format_requirements(\n self._determine_requirements(self.option(\"dev-dependency\"))\n )\n\n layout_ = layout(\"standard\")(\n name,\n version,\n description=description,\n author=authors[0] if authors else None,\n license=license,\n python=python,\n dependencies=requirements,\n dev_dependencies=dev_requirements,\n )\n\n content = layout_.generate_poetry_content()\n if self.input.is_interactive():\n self.line(\"<info>Generated file</info>\")\n self.line([\"\", content, \"\"])\n\n if not self.confirm(\"Do you confirm generation?\", True):\n self.line(\"<error>Command aborted</error>\")\n\n return 1\n\n with (Path.cwd() / \"pyproject.toml\").open(\"w\") as f:\n f.write(content)\n\n def _determine_requirements(\n self, requires, allow_prereleases=False # type: List[str] # type: bool\n ): # type: (...) 
-> List[str]\n if not requires:\n requires = []\n\n package = self.ask(\"Search for package:\")\n while package is not None:\n matches = self._get_pool().search(package)\n\n if not matches:\n self.line(\"<error>Unable to find package</error>\")\n package = False\n else:\n choices = []\n\n for found_package in matches:\n choices.append(found_package.pretty_name)\n\n self.line(\n \"Found <info>{}</info> packages matching <info>{}</info>\".format(\n len(matches), package\n )\n )\n\n package = self.choice(\n \"\\nEnter package # to add, or the complete package name if it is not listed\",\n choices,\n attempts=3,\n )\n\n # no constraint yet, determine the best version automatically\n if package is not False and \" \" not in package:\n question = self.create_question(\n \"Enter the version constraint to require \"\n \"(or leave blank to use the latest version):\"\n )\n question.attempts = 3\n question.validator = lambda x: (x or \"\").strip() or False\n\n constraint = self.ask(question)\n\n if constraint is False:\n _, constraint = self._find_best_version_for_package(package)\n\n self.line(\n \"Using version <info>{}</info> for <info>{}</info>\".format(\n constraint, package\n )\n )\n\n package += \" {}\".format(constraint)\n\n if package is not False:\n requires.append(package)\n\n package = self.ask(\"\\nSearch for a package:\")\n\n return requires\n\n requires = self._parse_name_version_pairs(requires)\n result = []\n for requirement in requires:\n if \"version\" not in requirement:\n # determine the best version automatically\n name, version = self._find_best_version_for_package(\n requirement[\"name\"], allow_prereleases=allow_prereleases\n )\n requirement[\"version\"] = version\n requirement[\"name\"] = name\n\n self.line(\n \"Using version <info>{}</> for <info>{}</>\".format(version, name)\n )\n else:\n # check that the specified version/constraint exists\n # before we proceed\n name, _ = self._find_best_version_for_package(\n requirement[\"name\"],\n requirement[\"version\"],\n allow_prereleases=allow_prereleases,\n )\n\n requirement[\"name\"] = name\n\n result.append(\"{} {}\".format(requirement[\"name\"], requirement[\"version\"]))\n\n return result\n\n def _find_best_version_for_package(\n self, name, required_version=None, allow_prereleases=False\n ): # type: (...) 
-> Tuple[str, str]\n from poetry.version.version_selector import VersionSelector\n\n selector = VersionSelector(self._get_pool())\n package = selector.find_best_candidate(\n name, required_version, allow_prereleases=allow_prereleases\n )\n\n if not package:\n # TODO: find similar\n raise ValueError(\n \"Could not find a matching version of package {}\".format(name)\n )\n\n return (package.pretty_name, selector.find_recommended_require_version(package))\n\n def _parse_name_version_pairs(self, pairs): # type: (list) -> list\n result = []\n\n for i in range(len(pairs)):\n pair = re.sub(\"^([^=: ]+)[=: ](.*)$\", \"\\\\1 \\\\2\", pairs[i].strip())\n pair = pair.strip()\n\n if \" \" in pair:\n name, version = pair.split(\" \", 2)\n result.append({\"name\": name, \"version\": version})\n else:\n result.append({\"name\": pair})\n\n return result\n\n def _format_requirements(self, requirements): # type: (List[str]) -> dict\n requires = {}\n requirements = self._parse_name_version_pairs(requirements)\n for requirement in requirements:\n requires[requirement[\"name\"]] = requirement[\"version\"]\n\n return requires\n\n def _validate_author(self, author, default):\n from poetry.packages.package import AUTHOR_REGEX\n\n author = author or default\n\n if author in [\"n\", \"no\"]:\n return\n\n m = AUTHOR_REGEX.match(author)\n if not m:\n raise ValueError(\n \"Invalid author string. Must be in the format: \"\n \"John Smith <[email protected]>\"\n )\n\n return author\n\n def _validate_license(self, license):\n from poetry.spdx import license_by_id\n\n if license:\n license_by_id(license)\n\n return license\n\n def _get_pool(self):\n from poetry.repositories import Pool\n from poetry.repositories.pypi_repository import PyPiRepository\n\n if isinstance(self, VenvCommand):\n return self.poetry.pool\n\n if self._pool is None:\n self._pool = Pool()\n self._pool.add_repository(PyPiRepository())\n\n return self._pool\n", "path": "poetry/console/commands/init.py"}]} | 3,547 | 103 |
gh_patches_debug_39253 | rasdani/github-patches | git_diff | lightly-ai__lightly-1531 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in `GatherLayer.backward`
Hi,
We've been implementing a model at [cellarium-ml](https://github.com/cellarium-ai/cellarium-ml) using your `NTXentLoss`. Comparing model training on a single GPU with training on two GPUs, we noticed that the results do not match. Investigating, we found an apparent bug in `GatherLayer.backward`, where gradients are not sum-reduced over GPUs. Here is our fixed version (https://github.com/cellarium-ai/cellarium-ml/blob/main/cellarium/ml/distributed/gather.py#L17-L21):
```py
@staticmethod
def backward(ctx, *grads) -> torch.Tensor:
grad_out = grads[dist.get_rank()].contiguous()
dist.all_reduce(grad_out, op=dist.ReduceOp.SUM)
return grad_out
```
and the [test](https://github.com/cellarium-ai/cellarium-ml/blob/main/tests/distributed/test_gather.py) we wrote. Would you agree that this is indeed a bug? I would be happy to contribute a PR with the fix.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightly/utils/dist.py`
Content:
```
1 from typing import Optional, Tuple
2
3 import torch
4 import torch.distributed as dist
5
6
7 class GatherLayer(torch.autograd.Function):
8 """Gather tensors from all processes, supporting backward propagation.
9
10 This code was taken and adapted from here:
11 https://github.com/Spijkervet/SimCLR
12
13 """
14
15 @staticmethod
16 def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:
17 ctx.save_for_backward(input)
18 output = [torch.empty_like(input) for _ in range(dist.get_world_size())]
19 dist.all_gather(output, input)
20 return tuple(output)
21
22 @staticmethod
23 def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:
24 (input,) = ctx.saved_tensors
25 grad_out = torch.empty_like(input)
26 grad_out[:] = grads[dist.get_rank()]
27 return grad_out
28
29
30 def rank() -> int:
31 """Returns the rank of the current process."""
32 return dist.get_rank() if dist.is_initialized() else 0
33
34
35 def world_size() -> int:
36 """Returns the current world size (number of distributed processes)."""
37 return dist.get_world_size() if dist.is_initialized() else 1
38
39
40 def gather(input: torch.Tensor) -> Tuple[torch.Tensor]:
41 """Gathers this tensor from all processes. Supports backprop."""
42 return GatherLayer.apply(input)
43
44
45 def eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:
46 """Returns an (n, n * world_size) zero matrix with the diagonal for the rank
47 of this process set to 1.
48
49 Example output where n=3, the current process has rank 1, and there are
50 4 processes in total:
51
52 rank0 rank1 rank2 rank3
53 0 0 0 | 1 0 0 | 0 0 0 | 0 0 0
54 0 0 0 | 0 1 0 | 0 0 0 | 0 0 0
55 0 0 0 | 0 0 1 | 0 0 0 | 0 0 0
56
57 Equivalent to torch.eye for undistributed settings or if world size == 1.
58
59 Args:
60 n:
61 Size of the square matrix on a single process.
62 device:
63 Device on which the matrix should be created.
64
65 """
66 rows = torch.arange(n, device=device, dtype=torch.long)
67 cols = rows + rank() * n
68 diag_mask = torch.zeros((n, n * world_size()), dtype=torch.bool)
69 diag_mask[(rows, cols)] = True
70 return diag_mask
71
72
73 def rank_zero_only(fn):
74 """Decorator that only runs the function on the process with rank 0.
75
76 Example:
77 >>> @rank_zero_only
78 >>> def print_rank_zero(message: str):
79 >>> print(message)
80 >>>
81 >>> print_rank_zero("Hello from rank 0!")
82
83 """
84
85 def wrapped(*args, **kwargs):
86 if rank() == 0:
87 return fn(*args, **kwargs)
88
89 return wrapped
90
91
92 @rank_zero_only
93 def print_rank_zero(*args, **kwargs) -> None:
94 """Equivalent to print, but only runs on the process with rank 0."""
95 print(*args, **kwargs)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lightly/utils/dist.py b/lightly/utils/dist.py
--- a/lightly/utils/dist.py
+++ b/lightly/utils/dist.py
@@ -1,29 +1,29 @@
-from typing import Optional, Tuple
+from typing import Any, Callable, Optional, Tuple, TypeVar
import torch
import torch.distributed as dist
+from torch.autograd.function import FunctionCtx
class GatherLayer(torch.autograd.Function):
"""Gather tensors from all processes, supporting backward propagation.
This code was taken and adapted from here:
- https://github.com/Spijkervet/SimCLR
+ https://github.com/vturrisi/solo-learn/blob/b69b4bd27472593919956d9ac58902a301537a4d/solo/utils/misc.py#L187
"""
@staticmethod
- def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:
- ctx.save_for_backward(input)
+ def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]: # type: ignore
output = [torch.empty_like(input) for _ in range(dist.get_world_size())]
dist.all_gather(output, input)
return tuple(output)
@staticmethod
- def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:
- (input,) = ctx.saved_tensors
- grad_out = torch.empty_like(input)
- grad_out[:] = grads[dist.get_rank()]
+ def backward(ctx, *grads) -> torch.Tensor: # type: ignore
+ all_gradients = torch.stack(grads)
+ dist.all_reduce(all_gradients)
+ grad_out = all_gradients[dist.get_rank()]
return grad_out
@@ -39,7 +39,7 @@
def gather(input: torch.Tensor) -> Tuple[torch.Tensor]:
"""Gathers this tensor from all processes. Supports backprop."""
- return GatherLayer.apply(input)
+ return GatherLayer.apply(input) # type: ignore[no-any-return]
def eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:
@@ -70,7 +70,10 @@
return diag_mask
-def rank_zero_only(fn):
+R = TypeVar("R")
+
+
+def rank_zero_only(fn: Callable[..., R]) -> Callable[..., Optional[R]]:
"""Decorator that only runs the function on the process with rank 0.
Example:
@@ -79,17 +82,17 @@
>>> print(message)
>>>
>>> print_rank_zero("Hello from rank 0!")
-
"""
- def wrapped(*args, **kwargs):
+ def wrapped(*args: Any, **kwargs: Any) -> Optional[R]:
if rank() == 0:
return fn(*args, **kwargs)
+ return None
return wrapped
@rank_zero_only
-def print_rank_zero(*args, **kwargs) -> None:
+def print_rank_zero(*args: Any, **kwargs: Any) -> None: # type: ignore[misc]
"""Equivalent to print, but only runs on the process with rank 0."""
print(*args, **kwargs)
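
To see why the backward pass must sum over ranks, a single-process simulation of the gathered forward/backward math is enough (no process group is created; the two "ranks" are just two tensors):

```python
import torch

world_size = 2
inputs = [torch.ones(2, requires_grad=True) for _ in range(world_size)]

# Forward all_gather: every rank computes its loss on the full gathered tensor.
gathered = torch.cat(inputs)
losses = [gathered.sum() * (rank + 1) for rank in range(world_size)]

torch.autograd.backward(losses)

# d(loss_0)/d(input_0) is 1 per element and d(loss_1)/d(input_0) is 2 per element,
# so the true gradient for rank 0's input is 3 -- obtainable only by summing the
# per-rank gradient slices, which is what the all_reduce over stacked grads does.
print(inputs[0].grad)  # tensor([3., 3.])
```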
| {"golden_diff": "diff --git a/lightly/utils/dist.py b/lightly/utils/dist.py\n--- a/lightly/utils/dist.py\n+++ b/lightly/utils/dist.py\n@@ -1,29 +1,29 @@\n-from typing import Optional, Tuple\n+from typing import Any, Callable, Optional, Tuple, TypeVar\n \n import torch\n import torch.distributed as dist\n+from torch.autograd.function import FunctionCtx\n \n \n class GatherLayer(torch.autograd.Function):\n \"\"\"Gather tensors from all processes, supporting backward propagation.\n \n This code was taken and adapted from here:\n- https://github.com/Spijkervet/SimCLR\n+ https://github.com/vturrisi/solo-learn/blob/b69b4bd27472593919956d9ac58902a301537a4d/solo/utils/misc.py#L187\n \n \"\"\"\n \n @staticmethod\n- def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:\n- ctx.save_for_backward(input)\n+ def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]: # type: ignore\n output = [torch.empty_like(input) for _ in range(dist.get_world_size())]\n dist.all_gather(output, input)\n return tuple(output)\n \n @staticmethod\n- def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:\n- (input,) = ctx.saved_tensors\n- grad_out = torch.empty_like(input)\n- grad_out[:] = grads[dist.get_rank()]\n+ def backward(ctx, *grads) -> torch.Tensor: # type: ignore\n+ all_gradients = torch.stack(grads)\n+ dist.all_reduce(all_gradients)\n+ grad_out = all_gradients[dist.get_rank()]\n return grad_out\n \n \n@@ -39,7 +39,7 @@\n \n def gather(input: torch.Tensor) -> Tuple[torch.Tensor]:\n \"\"\"Gathers this tensor from all processes. Supports backprop.\"\"\"\n- return GatherLayer.apply(input)\n+ return GatherLayer.apply(input) # type: ignore[no-any-return]\n \n \n def eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:\n@@ -70,7 +70,10 @@\n return diag_mask\n \n \n-def rank_zero_only(fn):\n+R = TypeVar(\"R\")\n+\n+\n+def rank_zero_only(fn: Callable[..., R]) -> Callable[..., Optional[R]]:\n \"\"\"Decorator that only runs the function on the process with rank 0.\n \n Example:\n@@ -79,17 +82,17 @@\n >>> print(message)\n >>>\n >>> print_rank_zero(\"Hello from rank 0!\")\n-\n \"\"\"\n \n- def wrapped(*args, **kwargs):\n+ def wrapped(*args: Any, **kwargs: Any) -> Optional[R]:\n if rank() == 0:\n return fn(*args, **kwargs)\n+ return None\n \n return wrapped\n \n \n @rank_zero_only\n-def print_rank_zero(*args, **kwargs) -> None:\n+def print_rank_zero(*args: Any, **kwargs: Any) -> None: # type: ignore[misc]\n \"\"\"Equivalent to print, but only runs on the process with rank 0.\"\"\"\n print(*args, **kwargs)\n", "issue": "Bug in `GatherLayer.backward`\nHi,\r\n\r\nWe've been implementing a model at [cellarium-ml](https://github.com/cellarium-ai/cellarium-ml) using your `NTXentLoss`. Comparing the model training with a single GPU and two GPUs we noticed that they do not match. By investigating it we found an apparent bug in the `GatherLayer.backward` where gradients are not sum-reduced over GPUs. Here is our fixed version (https://github.com/cellarium-ai/cellarium-ml/blob/main/cellarium/ml/distributed/gather.py#L17-L21):\r\n\r\n```py\r\n @staticmethod\r\n def backward(ctx, *grads) -> torch.Tensor:\r\n grad_out = grads[dist.get_rank()].contiguous()\r\n dist.all_reduce(grad_out, op=dist.ReduceOp.SUM)\r\n return grad_out\r\n```\r\n\r\nand the [test](https://github.com/cellarium-ai/cellarium-ml/blob/main/tests/distributed/test_gather.py) we wrote. Would you agree that this is indeed a bug? 
I would be happy to contribute a PR with the fix.\n", "before_files": [{"content": "from typing import Optional, Tuple\n\nimport torch\nimport torch.distributed as dist\n\n\nclass GatherLayer(torch.autograd.Function):\n \"\"\"Gather tensors from all processes, supporting backward propagation.\n\n This code was taken and adapted from here:\n https://github.com/Spijkervet/SimCLR\n\n \"\"\"\n\n @staticmethod\n def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:\n ctx.save_for_backward(input)\n output = [torch.empty_like(input) for _ in range(dist.get_world_size())]\n dist.all_gather(output, input)\n return tuple(output)\n\n @staticmethod\n def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:\n (input,) = ctx.saved_tensors\n grad_out = torch.empty_like(input)\n grad_out[:] = grads[dist.get_rank()]\n return grad_out\n\n\ndef rank() -> int:\n \"\"\"Returns the rank of the current process.\"\"\"\n return dist.get_rank() if dist.is_initialized() else 0\n\n\ndef world_size() -> int:\n \"\"\"Returns the current world size (number of distributed processes).\"\"\"\n return dist.get_world_size() if dist.is_initialized() else 1\n\n\ndef gather(input: torch.Tensor) -> Tuple[torch.Tensor]:\n \"\"\"Gathers this tensor from all processes. Supports backprop.\"\"\"\n return GatherLayer.apply(input)\n\n\ndef eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:\n \"\"\"Returns an (n, n * world_size) zero matrix with the diagonal for the rank\n of this process set to 1.\n\n Example output where n=3, the current process has rank 1, and there are\n 4 processes in total:\n\n rank0 rank1 rank2 rank3\n 0 0 0 | 1 0 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 1 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 0 1 | 0 0 0 | 0 0 0\n\n Equivalent to torch.eye for undistributed settings or if world size == 1.\n\n Args:\n n:\n Size of the square matrix on a single process.\n device:\n Device on which the matrix should be created.\n\n \"\"\"\n rows = torch.arange(n, device=device, dtype=torch.long)\n cols = rows + rank() * n\n diag_mask = torch.zeros((n, n * world_size()), dtype=torch.bool)\n diag_mask[(rows, cols)] = True\n return diag_mask\n\n\ndef rank_zero_only(fn):\n \"\"\"Decorator that only runs the function on the process with rank 0.\n\n Example:\n >>> @rank_zero_only\n >>> def print_rank_zero(message: str):\n >>> print(message)\n >>>\n >>> print_rank_zero(\"Hello from rank 0!\")\n\n \"\"\"\n\n def wrapped(*args, **kwargs):\n if rank() == 0:\n return fn(*args, **kwargs)\n\n return wrapped\n\n\n@rank_zero_only\ndef print_rank_zero(*args, **kwargs) -> None:\n \"\"\"Equivalent to print, but only runs on the process with rank 0.\"\"\"\n print(*args, **kwargs)\n", "path": "lightly/utils/dist.py"}], "after_files": [{"content": "from typing import Any, Callable, Optional, Tuple, TypeVar\n\nimport torch\nimport torch.distributed as dist\nfrom torch.autograd.function import FunctionCtx\n\n\nclass GatherLayer(torch.autograd.Function):\n \"\"\"Gather tensors from all processes, supporting backward propagation.\n\n This code was taken and adapted from here:\n https://github.com/vturrisi/solo-learn/blob/b69b4bd27472593919956d9ac58902a301537a4d/solo/utils/misc.py#L187\n\n \"\"\"\n\n @staticmethod\n def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]: # type: ignore\n output = [torch.empty_like(input) for _ in range(dist.get_world_size())]\n dist.all_gather(output, input)\n return tuple(output)\n\n @staticmethod\n def backward(ctx, *grads) -> torch.Tensor: # type: ignore\n all_gradients = torch.stack(grads)\n 
dist.all_reduce(all_gradients)\n grad_out = all_gradients[dist.get_rank()]\n return grad_out\n\n\ndef rank() -> int:\n \"\"\"Returns the rank of the current process.\"\"\"\n return dist.get_rank() if dist.is_initialized() else 0\n\n\ndef world_size() -> int:\n \"\"\"Returns the current world size (number of distributed processes).\"\"\"\n return dist.get_world_size() if dist.is_initialized() else 1\n\n\ndef gather(input: torch.Tensor) -> Tuple[torch.Tensor]:\n \"\"\"Gathers this tensor from all processes. Supports backprop.\"\"\"\n return GatherLayer.apply(input) # type: ignore[no-any-return]\n\n\ndef eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:\n \"\"\"Returns an (n, n * world_size) zero matrix with the diagonal for the rank\n of this process set to 1.\n\n Example output where n=3, the current process has rank 1, and there are\n 4 processes in total:\n\n rank0 rank1 rank2 rank3\n 0 0 0 | 1 0 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 1 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 0 1 | 0 0 0 | 0 0 0\n\n Equivalent to torch.eye for undistributed settings or if world size == 1.\n\n Args:\n n:\n Size of the square matrix on a single process.\n device:\n Device on which the matrix should be created.\n\n \"\"\"\n rows = torch.arange(n, device=device, dtype=torch.long)\n cols = rows + rank() * n\n diag_mask = torch.zeros((n, n * world_size()), dtype=torch.bool)\n diag_mask[(rows, cols)] = True\n return diag_mask\n\n\nR = TypeVar(\"R\")\n\n\ndef rank_zero_only(fn: Callable[..., R]) -> Callable[..., Optional[R]]:\n \"\"\"Decorator that only runs the function on the process with rank 0.\n\n Example:\n >>> @rank_zero_only\n >>> def print_rank_zero(message: str):\n >>> print(message)\n >>>\n >>> print_rank_zero(\"Hello from rank 0!\")\n \"\"\"\n\n def wrapped(*args: Any, **kwargs: Any) -> Optional[R]:\n if rank() == 0:\n return fn(*args, **kwargs)\n return None\n\n return wrapped\n\n\n@rank_zero_only\ndef print_rank_zero(*args: Any, **kwargs: Any) -> None: # type: ignore[misc]\n \"\"\"Equivalent to print, but only runs on the process with rank 0.\"\"\"\n print(*args, **kwargs)\n", "path": "lightly/utils/dist.py"}]} | 1,415 | 723 |
gh_patches_debug_23382 | rasdani/github-patches | git_diff | coala__coala-1290 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`DocstyleDefinition`: Accept a single marker set also
Via the normal constructor or a class method.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/bearlib/languages/documentation/DocstyleDefinition.py`
Content:
```
1 import os.path
2
3 from coalib.misc.Decorators import generate_eq, generate_repr, enforce_signature
4 from coalib.parsing.ConfParser import ConfParser
5
6
7 @generate_repr()
8 @generate_eq("language", "docstyle", "markers")
9 class DocstyleDefinition:
10 """
11 The DocstyleDefinition class holds values that identify a certain type of
12 documentation comment (for which language, documentation style/tool used
13 etc.).
14 """
15
16 @enforce_signature
17 def __init__(self, language: str, docstyle: str, markers):
18 """
19 Instantiates a new DocstyleDefinition.
20
21 :param language: The case insensitive programming language of the
22 documentation comment, e.g. `"CPP"` for C++ or
23 `"PYTHON3"`.
24 :param docstyle: The case insensitive documentation style/tool used
25 to document code, e.g. `"default"` or `"doxygen"`.
26 :param markers: An iterable of marker/delimiter string iterables that
27 identify a documentation comment. See `markers`
28 property for more details on markers.
29 """
30 self._language = language.lower()
31 self._docstyle = docstyle.lower()
32 self._markers = tuple(tuple(marker_set) for marker_set in markers)
33
34 # Check marker set dimensions.
35 for marker_set in self._markers:
36 length = len(marker_set)
37 if length != 3:
38 raise ValueError("Length of a given marker set was not 3 (was "
39 "actually {}).".format(length))
40
41 @property
42 def language(self):
43 """
44 The programming language.
45
46 :return: A lower-case string defining the programming language (i.e.
47 "cpp" or "python").
48 """
49 return self._language
50
51 @property
52 def docstyle(self):
53 """
54 The documentation style/tool used to document code.
55
56 :return: A lower-case string defining the docstyle (i.e. "default" or
57 "doxygen").
58 """
59 return self._docstyle
60
61 @property
62 def markers(self):
63 """
64 A tuple of marker sets that identify a documentation comment.
65
66 Marker sets consist of 3 entries where the first is the start-marker,
67 the second one the each-line marker and the last one the end-marker.
68 For example a marker tuple with a single marker set
69 `(("/**", "*", "*/"),)` would match following documentation comment:
70
71 ```
72 /**
73 * This is documentation.
74 */
75 ```
76
77 It's also possible to supply an empty each-line marker
78 (`("/**", "", "*/")`):
79
80 ```
81 /**
82 This is more documentation.
83 */
84 ```
85
86 Markers are matched "greedy", that means it will match as many
87 each-line markers as possible. I.e. for `("///", "///", "///")`):
88
89 ```
90 /// Brief documentation.
91 ///
92 /// Detailed documentation.
93 ```
94
95 :return: A tuple of marker/delimiter string tuples that identify a
96 documentation comment.
97 """
98 return self._markers
99
100 @classmethod
101 @enforce_signature
102 def load(cls, language: str, docstyle: str):
103 """
104 Loads a `DocstyleDefinition` from the coala docstyle definition files.
105
106 This function considers all settings inside the according coalang-files
107 as markers.
108
109 :param language: The case insensitive programming language of
110 the documentation comment as a string.
111 :param docstyle: The case insensitive documentation
112 style/tool used to document code, e.g.
113 `"default"` or `"doxygen"`.
114 :raises FileNotFoundError: Raised when the given docstyle was not
115 found.
116 :raises KeyError: Raised when the given language is not
117 defined for given docstyle.
118 :return: The `DocstyleDefinition` for given language
119 and docstyle.
120 """
121
122 docstyle = docstyle.lower()
123
124 language_config_parser = ConfParser(remove_empty_iter_elements=False)
125 try:
126 docstyle_settings = language_config_parser.parse(
127 os.path.dirname(__file__) + "/" + docstyle + ".coalang")
128 except FileNotFoundError:
129 raise FileNotFoundError("Docstyle definition " + repr(docstyle) +
130 " not found.")
131
132 language = language.lower()
133
134 try:
135 docstyle_settings = docstyle_settings[language]
136 except KeyError:
137 raise KeyError("Language {} is not defined for docstyle {}."
138 .format(repr(language), repr(docstyle)))
139
140 marker_sets = (tuple(value)
141 for key, value in
142 filter(lambda kv: not kv[0].startswith("comment"),
143 docstyle_settings.contents.items()))
144
145 return cls(language, docstyle, marker_sets)
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/coalib/bearlib/languages/documentation/DocstyleDefinition.py b/coalib/bearlib/languages/documentation/DocstyleDefinition.py
--- a/coalib/bearlib/languages/documentation/DocstyleDefinition.py
+++ b/coalib/bearlib/languages/documentation/DocstyleDefinition.py
@@ -23,12 +23,19 @@
`"PYTHON3"`.
:param docstyle: The case insensitive documentation style/tool used
to document code, e.g. `"default"` or `"doxygen"`.
- :param markers: An iterable of marker/delimiter string iterables that
+ :param markers: An iterable of marker/delimiter string iterables
+ or a single marker/delimiter string iterable that
identify a documentation comment. See `markers`
property for more details on markers.
"""
self._language = language.lower()
self._docstyle = docstyle.lower()
+
+ # Check and modify tuple if only one marker_set exists.
+ markers = tuple(markers)
+ if len(markers) == 3 and all(isinstance(x, str) for x in markers):
+ markers = (markers,)
+
self._markers = tuple(tuple(marker_set) for marker_set in markers)
# Check marker set dimensions.
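For illustration, a standalone sketch of the normalization this patch introduces (simplified names, not the actual coala API): a lone start/each-line/end triple of plain strings is wrapped into a one-element tuple before the usual per-set validation runs.
```python
def normalize_markers(markers):
    # Mirror of the check added above: a single ("start", "each-line", "end")
    # marker set made of plain strings is wrapped so it behaves like an
    # iterable containing exactly one marker set.
    markers = tuple(markers)
    if len(markers) == 3 and all(isinstance(x, str) for x in markers):
        markers = (markers,)
    return tuple(tuple(marker_set) for marker_set in markers)


assert normalize_markers(("/**", "*", "*/")) == (("/**", "*", "*/"),)
assert normalize_markers([("/**", "*", "*/"), ("///", "///", "///")]) == (
    ("/**", "*", "*/"),
    ("///", "///", "///"),
)
```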
| {"golden_diff": "diff --git a/coalib/bearlib/languages/documentation/DocstyleDefinition.py b/coalib/bearlib/languages/documentation/DocstyleDefinition.py\n--- a/coalib/bearlib/languages/documentation/DocstyleDefinition.py\n+++ b/coalib/bearlib/languages/documentation/DocstyleDefinition.py\n@@ -23,12 +23,19 @@\n `\"PYTHON3\"`.\n :param docstyle: The case insensitive documentation style/tool used\n to document code, e.g. `\"default\"` or `\"doxygen\"`.\n- :param markers: An iterable of marker/delimiter string iterables that\n+ :param markers: An iterable of marker/delimiter string iterables\n+ or a single marker/delimiter string iterable that\n identify a documentation comment. See `markers`\n property for more details on markers.\n \"\"\"\n self._language = language.lower()\n self._docstyle = docstyle.lower()\n+\n+ # Check and modify tuple if only one marker_set exists.\n+ markers = tuple(markers)\n+ if len(markers) == 3 and all(isinstance(x, str) for x in markers):\n+ markers = (markers,)\n+\n self._markers = tuple(tuple(marker_set) for marker_set in markers)\n \n # Check marker set dimensions.\n", "issue": "`DocstyleDefinition`: Accept a single marker set also\nVia the normal constructor or a class method.\n\n", "before_files": [{"content": "import os.path\n\nfrom coalib.misc.Decorators import generate_eq, generate_repr, enforce_signature\nfrom coalib.parsing.ConfParser import ConfParser\n\n\n@generate_repr()\n@generate_eq(\"language\", \"docstyle\", \"markers\")\nclass DocstyleDefinition:\n \"\"\"\n The DocstyleDefinition class holds values that identify a certain type of\n documentation comment (for which language, documentation style/tool used\n etc.).\n \"\"\"\n\n @enforce_signature\n def __init__(self, language: str, docstyle: str, markers):\n \"\"\"\n Instantiates a new DocstyleDefinition.\n\n :param language: The case insensitive programming language of the\n documentation comment, e.g. `\"CPP\"` for C++ or\n `\"PYTHON3\"`.\n :param docstyle: The case insensitive documentation style/tool used\n to document code, e.g. `\"default\"` or `\"doxygen\"`.\n :param markers: An iterable of marker/delimiter string iterables that\n identify a documentation comment. See `markers`\n property for more details on markers.\n \"\"\"\n self._language = language.lower()\n self._docstyle = docstyle.lower()\n self._markers = tuple(tuple(marker_set) for marker_set in markers)\n\n # Check marker set dimensions.\n for marker_set in self._markers:\n length = len(marker_set)\n if length != 3:\n raise ValueError(\"Length of a given marker set was not 3 (was \"\n \"actually {}).\".format(length))\n\n @property\n def language(self):\n \"\"\"\n The programming language.\n\n :return: A lower-case string defining the programming language (i.e.\n \"cpp\" or \"python\").\n \"\"\"\n return self._language\n\n @property\n def docstyle(self):\n \"\"\"\n The documentation style/tool used to document code.\n\n :return: A lower-case string defining the docstyle (i.e. 
\"default\" or\n \"doxygen\").\n \"\"\"\n return self._docstyle\n\n @property\n def markers(self):\n \"\"\"\n A tuple of marker sets that identify a documentation comment.\n\n Marker sets consist of 3 entries where the first is the start-marker,\n the second one the each-line marker and the last one the end-marker.\n For example a marker tuple with a single marker set\n `((\"/**\", \"*\", \"*/\"),)` would match following documentation comment:\n\n ```\n /**\n * This is documentation.\n */\n ```\n\n It's also possible to supply an empty each-line marker\n (`(\"/**\", \"\", \"*/\")`):\n\n ```\n /**\n This is more documentation.\n */\n ```\n\n Markers are matched \"greedy\", that means it will match as many\n each-line markers as possible. I.e. for `(\"///\", \"///\", \"///\")`):\n\n ```\n /// Brief documentation.\n ///\n /// Detailed documentation.\n ```\n\n :return: A tuple of marker/delimiter string tuples that identify a\n documentation comment.\n \"\"\"\n return self._markers\n\n @classmethod\n @enforce_signature\n def load(cls, language: str, docstyle: str):\n \"\"\"\n Loads a `DocstyleDefinition` from the coala docstyle definition files.\n\n This function considers all settings inside the according coalang-files\n as markers.\n\n :param language: The case insensitive programming language of\n the documentation comment as a string.\n :param docstyle: The case insensitive documentation\n style/tool used to document code, e.g.\n `\"default\"` or `\"doxygen\"`.\n :raises FileNotFoundError: Raised when the given docstyle was not\n found.\n :raises KeyError: Raised when the given language is not\n defined for given docstyle.\n :return: The `DocstyleDefinition` for given language\n and docstyle.\n \"\"\"\n\n docstyle = docstyle.lower()\n\n language_config_parser = ConfParser(remove_empty_iter_elements=False)\n try:\n docstyle_settings = language_config_parser.parse(\n os.path.dirname(__file__) + \"/\" + docstyle + \".coalang\")\n except FileNotFoundError:\n raise FileNotFoundError(\"Docstyle definition \" + repr(docstyle) +\n \" not found.\")\n\n language = language.lower()\n\n try:\n docstyle_settings = docstyle_settings[language]\n except KeyError:\n raise KeyError(\"Language {} is not defined for docstyle {}.\"\n .format(repr(language), repr(docstyle)))\n\n marker_sets = (tuple(value)\n for key, value in\n filter(lambda kv: not kv[0].startswith(\"comment\"),\n docstyle_settings.contents.items()))\n\n return cls(language, docstyle, marker_sets)\n", "path": "coalib/bearlib/languages/documentation/DocstyleDefinition.py"}], "after_files": [{"content": "import os.path\n\nfrom coalib.misc.Decorators import generate_eq, generate_repr, enforce_signature\nfrom coalib.parsing.ConfParser import ConfParser\n\n\n@generate_repr()\n@generate_eq(\"language\", \"docstyle\", \"markers\")\nclass DocstyleDefinition:\n \"\"\"\n The DocstyleDefinition class holds values that identify a certain type of\n documentation comment (for which language, documentation style/tool used\n etc.).\n \"\"\"\n\n @enforce_signature\n def __init__(self, language: str, docstyle: str, markers):\n \"\"\"\n Instantiates a new DocstyleDefinition.\n\n :param language: The case insensitive programming language of the\n documentation comment, e.g. `\"CPP\"` for C++ or\n `\"PYTHON3\"`.\n :param docstyle: The case insensitive documentation style/tool used\n to document code, e.g. 
`\"default\"` or `\"doxygen\"`.\n :param markers: An iterable of marker/delimiter string iterables\n or a single marker/delimiter string iterable that\n identify a documentation comment. See `markers`\n property for more details on markers.\n \"\"\"\n self._language = language.lower()\n self._docstyle = docstyle.lower()\n\n # Check and modify tuple if only one marker_set exists.\n markers = tuple(markers)\n if len(markers) == 3 and all(isinstance(x, str) for x in markers):\n markers = (markers,)\n\n self._markers = tuple(tuple(marker_set) for marker_set in markers)\n\n # Check marker set dimensions.\n for marker_set in self._markers:\n length = len(marker_set)\n if length != 3:\n raise ValueError(\"Length of a given marker set was not 3 (was \"\n \"actually {}).\".format(length))\n\n @property\n def language(self):\n \"\"\"\n The programming language.\n\n :return: A lower-case string defining the programming language (i.e.\n \"cpp\" or \"python\").\n \"\"\"\n return self._language\n\n @property\n def docstyle(self):\n \"\"\"\n The documentation style/tool used to document code.\n\n :return: A lower-case string defining the docstyle (i.e. \"default\" or\n \"doxygen\").\n \"\"\"\n return self._docstyle\n\n @property\n def markers(self):\n \"\"\"\n A tuple of marker sets that identify a documentation comment.\n\n Marker sets consist of 3 entries where the first is the start-marker,\n the second one the each-line marker and the last one the end-marker.\n For example a marker tuple with a single marker set\n `((\"/**\", \"*\", \"*/\"),)` would match following documentation comment:\n\n ```\n /**\n * This is documentation.\n */\n ```\n\n It's also possible to supply an empty each-line marker\n (`(\"/**\", \"\", \"*/\")`):\n\n ```\n /**\n This is more documentation.\n */\n ```\n\n Markers are matched \"greedy\", that means it will match as many\n each-line markers as possible. I.e. 
for `(\"///\", \"///\", \"///\")`):\n\n ```\n /// Brief documentation.\n ///\n /// Detailed documentation.\n ```\n\n :return: A tuple of marker/delimiter string tuples that identify a\n documentation comment.\n \"\"\"\n return self._markers\n\n @classmethod\n @enforce_signature\n def load(cls, language: str, docstyle: str):\n \"\"\"\n Loads a `DocstyleDefinition` from the coala docstyle definition files.\n\n This function considers all settings inside the according coalang-files\n as markers.\n\n :param language: The case insensitive programming language of\n the documentation comment as a string.\n :param docstyle: The case insensitive documentation\n style/tool used to document code, e.g.\n `\"default\"` or `\"doxygen\"`.\n :raises FileNotFoundError: Raised when the given docstyle was not\n found.\n :raises KeyError: Raised when the given language is not\n defined for given docstyle.\n :return: The `DocstyleDefinition` for given language\n and docstyle.\n \"\"\"\n\n docstyle = docstyle.lower()\n\n language_config_parser = ConfParser(remove_empty_iter_elements=False)\n try:\n docstyle_settings = language_config_parser.parse(\n os.path.dirname(__file__) + \"/\" + docstyle + \".coalang\")\n except FileNotFoundError:\n raise FileNotFoundError(\"Docstyle definition \" + repr(docstyle) +\n \" not found.\")\n\n language = language.lower()\n\n try:\n docstyle_settings = docstyle_settings[language]\n except KeyError:\n raise KeyError(\"Language {} is not defined for docstyle {}.\"\n .format(repr(language), repr(docstyle)))\n\n marker_sets = (tuple(value)\n for key, value in\n filter(lambda kv: not kv[0].startswith(\"comment\"),\n docstyle_settings.contents.items()))\n\n return cls(language, docstyle, marker_sets)\n", "path": "coalib/bearlib/languages/documentation/DocstyleDefinition.py"}]} | 1,632 | 284 |
gh_patches_debug_33205 | rasdani/github-patches | git_diff | CTFd__CTFd-1589 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Review usage of error components
Looks like there needs to be more usage of the error components jinja snippet. It looks like it's missing in core/teams/public and core/teams/private at least.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/teams.py`
Content:
```
1 from flask import Blueprint, redirect, render_template, request, url_for
2
3 from CTFd.cache import clear_team_session, clear_user_session
4 from CTFd.models import Teams, db
5 from CTFd.utils import config, get_config
6 from CTFd.utils.crypto import verify_password
7 from CTFd.utils.decorators import authed_only, ratelimit
8 from CTFd.utils.decorators.modes import require_team_mode
9 from CTFd.utils.decorators.visibility import (
10 check_account_visibility,
11 check_score_visibility,
12 )
13 from CTFd.utils.helpers import get_errors, get_infos
14 from CTFd.utils.user import get_current_user
15
16 teams = Blueprint("teams", __name__)
17
18
19 @teams.route("/teams")
20 @check_account_visibility
21 @require_team_mode
22 def listing():
23 q = request.args.get("q")
24 field = request.args.get("field", "name")
25 filters = []
26
27 if field not in ("name", "affiliation", "website"):
28 field = "name"
29
30 if q:
31 filters.append(getattr(Teams, field).like("%{}%".format(q)))
32
33 teams = (
34 Teams.query.filter_by(hidden=False, banned=False)
35 .filter(*filters)
36 .order_by(Teams.id.asc())
37 .paginate(per_page=50)
38 )
39
40 args = dict(request.args)
41 args.pop("page", 1)
42
43 return render_template(
44 "teams/teams.html",
45 teams=teams,
46 prev_page=url_for(request.endpoint, page=teams.prev_num, **args),
47 next_page=url_for(request.endpoint, page=teams.next_num, **args),
48 q=q,
49 field=field,
50 )
51
52
53 @teams.route("/teams/join", methods=["GET", "POST"])
54 @authed_only
55 @require_team_mode
56 @ratelimit(method="POST", limit=10, interval=5)
57 def join():
58 infos = get_infos()
59 errors = get_errors()
60 if request.method == "GET":
61 team_size_limit = get_config("team_size", default=0)
62 if team_size_limit:
63 plural = "" if team_size_limit == 1 else "s"
64 infos.append(
65 "Teams are limited to {limit} member{plural}".format(
66 limit=team_size_limit, plural=plural
67 )
68 )
69 return render_template("teams/join_team.html", infos=infos, errors=errors)
70
71 if request.method == "POST":
72 teamname = request.form.get("name")
73 passphrase = request.form.get("password", "").strip()
74
75 team = Teams.query.filter_by(name=teamname).first()
76
77 if team and verify_password(passphrase, team.password):
78 team_size_limit = get_config("team_size", default=0)
79 if team_size_limit and len(team.members) >= team_size_limit:
80 errors.append(
81 "{name} has already reached the team size limit of {limit}".format(
82 name=team.name, limit=team_size_limit
83 )
84 )
85 return render_template(
86 "teams/join_team.html", infos=infos, errors=errors
87 )
88
89 user = get_current_user()
90 user.team_id = team.id
91 db.session.commit()
92
93 if len(team.members) == 1:
94 team.captain_id = user.id
95 db.session.commit()
96
97 clear_user_session(user_id=user.id)
98 clear_team_session(team_id=team.id)
99
100 return redirect(url_for("challenges.listing"))
101 else:
102 errors.append("That information is incorrect")
103 return render_template("teams/join_team.html", infos=infos, errors=errors)
104
105
106 @teams.route("/teams/new", methods=["GET", "POST"])
107 @authed_only
108 @require_team_mode
109 def new():
110 infos = get_infos()
111 errors = get_errors()
112 if request.method == "GET":
113 team_size_limit = get_config("team_size", default=0)
114 if team_size_limit:
115 plural = "" if team_size_limit == 1 else "s"
116 infos.append(
117 "Teams are limited to {limit} member{plural}".format(
118 limit=team_size_limit, plural=plural
119 )
120 )
121
122 return render_template("teams/new_team.html", infos=infos, errors=errors)
123 elif request.method == "POST":
124 teamname = request.form.get("name", "").strip()
125 passphrase = request.form.get("password", "").strip()
126 errors = get_errors()
127
128 user = get_current_user()
129
130 existing_team = Teams.query.filter_by(name=teamname).first()
131 if existing_team:
132 errors.append("That team name is already taken")
133 if not teamname:
134 errors.append("That team name is invalid")
135
136 if errors:
137 return render_template("teams/new_team.html", errors=errors)
138
139 team = Teams(name=teamname, password=passphrase, captain_id=user.id)
140
141 db.session.add(team)
142 db.session.commit()
143
144 user.team_id = team.id
145 db.session.commit()
146
147 clear_user_session(user_id=user.id)
148 clear_team_session(team_id=team.id)
149
150 return redirect(url_for("challenges.listing"))
151
152
153 @teams.route("/team")
154 @authed_only
155 @require_team_mode
156 def private():
157 user = get_current_user()
158 if not user.team_id:
159 return render_template("teams/team_enrollment.html")
160
161 team_id = user.team_id
162
163 team = Teams.query.filter_by(id=team_id).first_or_404()
164 solves = team.get_solves()
165 awards = team.get_awards()
166
167 place = team.place
168 score = team.score
169
170 return render_template(
171 "teams/private.html",
172 solves=solves,
173 awards=awards,
174 user=user,
175 team=team,
176 score=score,
177 place=place,
178 score_frozen=config.is_scoreboard_frozen(),
179 )
180
181
182 @teams.route("/teams/<int:team_id>")
183 @check_account_visibility
184 @check_score_visibility
185 @require_team_mode
186 def public(team_id):
187 errors = get_errors()
188 team = Teams.query.filter_by(id=team_id, banned=False, hidden=False).first_or_404()
189 solves = team.get_solves()
190 awards = team.get_awards()
191
192 place = team.place
193 score = team.score
194
195 if errors:
196 return render_template("teams/public.html", team=team, errors=errors)
197
198 return render_template(
199 "teams/public.html",
200 solves=solves,
201 awards=awards,
202 team=team,
203 score=score,
204 place=place,
205 score_frozen=config.is_scoreboard_frozen(),
206 )
207
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/teams.py b/CTFd/teams.py
--- a/CTFd/teams.py
+++ b/CTFd/teams.py
@@ -154,6 +154,9 @@
@authed_only
@require_team_mode
def private():
+ infos = get_infos()
+ errors = get_errors()
+
user = get_current_user()
if not user.team_id:
return render_template("teams/team_enrollment.html")
@@ -167,6 +170,9 @@
place = team.place
score = team.score
+ if config.is_scoreboard_frozen():
+ infos.append("Scoreboard has been frozen")
+
return render_template(
"teams/private.html",
solves=solves,
@@ -176,6 +182,8 @@
score=score,
place=place,
score_frozen=config.is_scoreboard_frozen(),
+ infos=infos,
+ errors=errors,
)
@@ -184,6 +192,7 @@
@check_score_visibility
@require_team_mode
def public(team_id):
+ infos = get_infos()
errors = get_errors()
team = Teams.query.filter_by(id=team_id, banned=False, hidden=False).first_or_404()
solves = team.get_solves()
@@ -195,6 +204,9 @@
if errors:
return render_template("teams/public.html", team=team, errors=errors)
+ if config.is_scoreboard_frozen():
+ infos.append("Scoreboard has been frozen")
+
return render_template(
"teams/public.html",
solves=solves,
@@ -203,4 +215,6 @@
score=score,
place=place,
score_frozen=config.is_scoreboard_frozen(),
+ infos=infos,
+ errors=errors,
)
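For illustration, a minimal sketch of the pattern the patch applies (plain Python, not CTFd code): build the `infos`/`errors` lists up front and always pass them into the template context, so the shared error/info Jinja components have something to render on the team pages.
```python
def team_page_context(team, scoreboard_frozen=False, errors=None):
    # Always provide infos/errors, even when empty, so the shared components
    # snippet can be included unconditionally in the template.
    infos = []
    errors = list(errors or [])
    if scoreboard_frozen:
        infos.append("Scoreboard has been frozen")
    return {"team": team, "infos": infos, "errors": errors}


print(team_page_context({"name": "demo-team"}, scoreboard_frozen=True))
```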
| {"golden_diff": "diff --git a/CTFd/teams.py b/CTFd/teams.py\n--- a/CTFd/teams.py\n+++ b/CTFd/teams.py\n@@ -154,6 +154,9 @@\n @authed_only\n @require_team_mode\n def private():\n+ infos = get_infos()\n+ errors = get_errors()\n+\n user = get_current_user()\n if not user.team_id:\n return render_template(\"teams/team_enrollment.html\")\n@@ -167,6 +170,9 @@\n place = team.place\n score = team.score\n \n+ if config.is_scoreboard_frozen():\n+ infos.append(\"Scoreboard has been frozen\")\n+\n return render_template(\n \"teams/private.html\",\n solves=solves,\n@@ -176,6 +182,8 @@\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n+ infos=infos,\n+ errors=errors,\n )\n \n \n@@ -184,6 +192,7 @@\n @check_score_visibility\n @require_team_mode\n def public(team_id):\n+ infos = get_infos()\n errors = get_errors()\n team = Teams.query.filter_by(id=team_id, banned=False, hidden=False).first_or_404()\n solves = team.get_solves()\n@@ -195,6 +204,9 @@\n if errors:\n return render_template(\"teams/public.html\", team=team, errors=errors)\n \n+ if config.is_scoreboard_frozen():\n+ infos.append(\"Scoreboard has been frozen\")\n+\n return render_template(\n \"teams/public.html\",\n solves=solves,\n@@ -203,4 +215,6 @@\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n+ infos=infos,\n+ errors=errors,\n )\n", "issue": "Review usage of error components\nLooks like there needs to be more usage of the error components jinja snippet. It looks like it's missing in core/teams/public and core/teams/private at least. \n", "before_files": [{"content": "from flask import Blueprint, redirect, render_template, request, url_for\n\nfrom CTFd.cache import clear_team_session, clear_user_session\nfrom CTFd.models import Teams, db\nfrom CTFd.utils import config, get_config\nfrom CTFd.utils.crypto import verify_password\nfrom CTFd.utils.decorators import authed_only, ratelimit\nfrom CTFd.utils.decorators.modes import require_team_mode\nfrom CTFd.utils.decorators.visibility import (\n check_account_visibility,\n check_score_visibility,\n)\nfrom CTFd.utils.helpers import get_errors, get_infos\nfrom CTFd.utils.user import get_current_user\n\nteams = Blueprint(\"teams\", __name__)\n\n\[email protected](\"/teams\")\n@check_account_visibility\n@require_team_mode\ndef listing():\n q = request.args.get(\"q\")\n field = request.args.get(\"field\", \"name\")\n filters = []\n\n if field not in (\"name\", \"affiliation\", \"website\"):\n field = \"name\"\n\n if q:\n filters.append(getattr(Teams, field).like(\"%{}%\".format(q)))\n\n teams = (\n Teams.query.filter_by(hidden=False, banned=False)\n .filter(*filters)\n .order_by(Teams.id.asc())\n .paginate(per_page=50)\n )\n\n args = dict(request.args)\n args.pop(\"page\", 1)\n\n return render_template(\n \"teams/teams.html\",\n teams=teams,\n prev_page=url_for(request.endpoint, page=teams.prev_num, **args),\n next_page=url_for(request.endpoint, page=teams.next_num, **args),\n q=q,\n field=field,\n )\n\n\[email protected](\"/teams/join\", methods=[\"GET\", \"POST\"])\n@authed_only\n@require_team_mode\n@ratelimit(method=\"POST\", limit=10, interval=5)\ndef join():\n infos = get_infos()\n errors = get_errors()\n if request.method == \"GET\":\n team_size_limit = get_config(\"team_size\", default=0)\n if team_size_limit:\n plural = \"\" if team_size_limit == 1 else \"s\"\n infos.append(\n \"Teams are limited to {limit} member{plural}\".format(\n limit=team_size_limit, plural=plural\n )\n )\n return render_template(\"teams/join_team.html\", 
infos=infos, errors=errors)\n\n if request.method == \"POST\":\n teamname = request.form.get(\"name\")\n passphrase = request.form.get(\"password\", \"\").strip()\n\n team = Teams.query.filter_by(name=teamname).first()\n\n if team and verify_password(passphrase, team.password):\n team_size_limit = get_config(\"team_size\", default=0)\n if team_size_limit and len(team.members) >= team_size_limit:\n errors.append(\n \"{name} has already reached the team size limit of {limit}\".format(\n name=team.name, limit=team_size_limit\n )\n )\n return render_template(\n \"teams/join_team.html\", infos=infos, errors=errors\n )\n\n user = get_current_user()\n user.team_id = team.id\n db.session.commit()\n\n if len(team.members) == 1:\n team.captain_id = user.id\n db.session.commit()\n\n clear_user_session(user_id=user.id)\n clear_team_session(team_id=team.id)\n\n return redirect(url_for(\"challenges.listing\"))\n else:\n errors.append(\"That information is incorrect\")\n return render_template(\"teams/join_team.html\", infos=infos, errors=errors)\n\n\[email protected](\"/teams/new\", methods=[\"GET\", \"POST\"])\n@authed_only\n@require_team_mode\ndef new():\n infos = get_infos()\n errors = get_errors()\n if request.method == \"GET\":\n team_size_limit = get_config(\"team_size\", default=0)\n if team_size_limit:\n plural = \"\" if team_size_limit == 1 else \"s\"\n infos.append(\n \"Teams are limited to {limit} member{plural}\".format(\n limit=team_size_limit, plural=plural\n )\n )\n\n return render_template(\"teams/new_team.html\", infos=infos, errors=errors)\n elif request.method == \"POST\":\n teamname = request.form.get(\"name\", \"\").strip()\n passphrase = request.form.get(\"password\", \"\").strip()\n errors = get_errors()\n\n user = get_current_user()\n\n existing_team = Teams.query.filter_by(name=teamname).first()\n if existing_team:\n errors.append(\"That team name is already taken\")\n if not teamname:\n errors.append(\"That team name is invalid\")\n\n if errors:\n return render_template(\"teams/new_team.html\", errors=errors)\n\n team = Teams(name=teamname, password=passphrase, captain_id=user.id)\n\n db.session.add(team)\n db.session.commit()\n\n user.team_id = team.id\n db.session.commit()\n\n clear_user_session(user_id=user.id)\n clear_team_session(team_id=team.id)\n\n return redirect(url_for(\"challenges.listing\"))\n\n\[email protected](\"/team\")\n@authed_only\n@require_team_mode\ndef private():\n user = get_current_user()\n if not user.team_id:\n return render_template(\"teams/team_enrollment.html\")\n\n team_id = user.team_id\n\n team = Teams.query.filter_by(id=team_id).first_or_404()\n solves = team.get_solves()\n awards = team.get_awards()\n\n place = team.place\n score = team.score\n\n return render_template(\n \"teams/private.html\",\n solves=solves,\n awards=awards,\n user=user,\n team=team,\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n )\n\n\[email protected](\"/teams/<int:team_id>\")\n@check_account_visibility\n@check_score_visibility\n@require_team_mode\ndef public(team_id):\n errors = get_errors()\n team = Teams.query.filter_by(id=team_id, banned=False, hidden=False).first_or_404()\n solves = team.get_solves()\n awards = team.get_awards()\n\n place = team.place\n score = team.score\n\n if errors:\n return render_template(\"teams/public.html\", team=team, errors=errors)\n\n return render_template(\n \"teams/public.html\",\n solves=solves,\n awards=awards,\n team=team,\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n 
)\n", "path": "CTFd/teams.py"}], "after_files": [{"content": "from flask import Blueprint, redirect, render_template, request, url_for\n\nfrom CTFd.cache import clear_team_session, clear_user_session\nfrom CTFd.models import Teams, db\nfrom CTFd.utils import config, get_config\nfrom CTFd.utils.crypto import verify_password\nfrom CTFd.utils.decorators import authed_only, ratelimit\nfrom CTFd.utils.decorators.modes import require_team_mode\nfrom CTFd.utils.decorators.visibility import (\n check_account_visibility,\n check_score_visibility,\n)\nfrom CTFd.utils.helpers import get_errors, get_infos\nfrom CTFd.utils.user import get_current_user\n\nteams = Blueprint(\"teams\", __name__)\n\n\[email protected](\"/teams\")\n@check_account_visibility\n@require_team_mode\ndef listing():\n q = request.args.get(\"q\")\n field = request.args.get(\"field\", \"name\")\n filters = []\n\n if field not in (\"name\", \"affiliation\", \"website\"):\n field = \"name\"\n\n if q:\n filters.append(getattr(Teams, field).like(\"%{}%\".format(q)))\n\n teams = (\n Teams.query.filter_by(hidden=False, banned=False)\n .filter(*filters)\n .order_by(Teams.id.asc())\n .paginate(per_page=50)\n )\n\n args = dict(request.args)\n args.pop(\"page\", 1)\n\n return render_template(\n \"teams/teams.html\",\n teams=teams,\n prev_page=url_for(request.endpoint, page=teams.prev_num, **args),\n next_page=url_for(request.endpoint, page=teams.next_num, **args),\n q=q,\n field=field,\n )\n\n\[email protected](\"/teams/join\", methods=[\"GET\", \"POST\"])\n@authed_only\n@require_team_mode\n@ratelimit(method=\"POST\", limit=10, interval=5)\ndef join():\n infos = get_infos()\n errors = get_errors()\n if request.method == \"GET\":\n team_size_limit = get_config(\"team_size\", default=0)\n if team_size_limit:\n plural = \"\" if team_size_limit == 1 else \"s\"\n infos.append(\n \"Teams are limited to {limit} member{plural}\".format(\n limit=team_size_limit, plural=plural\n )\n )\n return render_template(\"teams/join_team.html\", infos=infos, errors=errors)\n\n if request.method == \"POST\":\n teamname = request.form.get(\"name\")\n passphrase = request.form.get(\"password\", \"\").strip()\n\n team = Teams.query.filter_by(name=teamname).first()\n\n if team and verify_password(passphrase, team.password):\n team_size_limit = get_config(\"team_size\", default=0)\n if team_size_limit and len(team.members) >= team_size_limit:\n errors.append(\n \"{name} has already reached the team size limit of {limit}\".format(\n name=team.name, limit=team_size_limit\n )\n )\n return render_template(\n \"teams/join_team.html\", infos=infos, errors=errors\n )\n\n user = get_current_user()\n user.team_id = team.id\n db.session.commit()\n\n if len(team.members) == 1:\n team.captain_id = user.id\n db.session.commit()\n\n clear_user_session(user_id=user.id)\n clear_team_session(team_id=team.id)\n\n return redirect(url_for(\"challenges.listing\"))\n else:\n errors.append(\"That information is incorrect\")\n return render_template(\"teams/join_team.html\", infos=infos, errors=errors)\n\n\[email protected](\"/teams/new\", methods=[\"GET\", \"POST\"])\n@authed_only\n@require_team_mode\ndef new():\n infos = get_infos()\n errors = get_errors()\n if request.method == \"GET\":\n team_size_limit = get_config(\"team_size\", default=0)\n if team_size_limit:\n plural = \"\" if team_size_limit == 1 else \"s\"\n infos.append(\n \"Teams are limited to {limit} member{plural}\".format(\n limit=team_size_limit, plural=plural\n )\n )\n\n return render_template(\"teams/new_team.html\", 
infos=infos, errors=errors)\n elif request.method == \"POST\":\n teamname = request.form.get(\"name\", \"\").strip()\n passphrase = request.form.get(\"password\", \"\").strip()\n errors = get_errors()\n\n user = get_current_user()\n\n existing_team = Teams.query.filter_by(name=teamname).first()\n if existing_team:\n errors.append(\"That team name is already taken\")\n if not teamname:\n errors.append(\"That team name is invalid\")\n\n if errors:\n return render_template(\"teams/new_team.html\", errors=errors)\n\n team = Teams(name=teamname, password=passphrase, captain_id=user.id)\n\n db.session.add(team)\n db.session.commit()\n\n user.team_id = team.id\n db.session.commit()\n\n clear_user_session(user_id=user.id)\n clear_team_session(team_id=team.id)\n\n return redirect(url_for(\"challenges.listing\"))\n\n\[email protected](\"/team\")\n@authed_only\n@require_team_mode\ndef private():\n infos = get_infos()\n errors = get_errors()\n\n user = get_current_user()\n if not user.team_id:\n return render_template(\"teams/team_enrollment.html\")\n\n team_id = user.team_id\n\n team = Teams.query.filter_by(id=team_id).first_or_404()\n solves = team.get_solves()\n awards = team.get_awards()\n\n place = team.place\n score = team.score\n\n if config.is_scoreboard_frozen():\n infos.append(\"Scoreboard has been frozen\")\n\n return render_template(\n \"teams/private.html\",\n solves=solves,\n awards=awards,\n user=user,\n team=team,\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n infos=infos,\n errors=errors,\n )\n\n\[email protected](\"/teams/<int:team_id>\")\n@check_account_visibility\n@check_score_visibility\n@require_team_mode\ndef public(team_id):\n infos = get_infos()\n errors = get_errors()\n team = Teams.query.filter_by(id=team_id, banned=False, hidden=False).first_or_404()\n solves = team.get_solves()\n awards = team.get_awards()\n\n place = team.place\n score = team.score\n\n if errors:\n return render_template(\"teams/public.html\", team=team, errors=errors)\n\n if config.is_scoreboard_frozen():\n infos.append(\"Scoreboard has been frozen\")\n\n return render_template(\n \"teams/public.html\",\n solves=solves,\n awards=awards,\n team=team,\n score=score,\n place=place,\n score_frozen=config.is_scoreboard_frozen(),\n infos=infos,\n errors=errors,\n )\n", "path": "CTFd/teams.py"}]} | 2,242 | 415 |
gh_patches_debug_4297 | rasdani/github-patches | git_diff | NVIDIA-Merlin__NVTabular-1312 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Getting error when loading the TF4Rec PyTorch model to the TIS
**Describe the bug**
I am getting the following error when I load a trained TF4Rec PyTorch model to TIS:
```
| t4r_pytorch_pt | 1 | UNAVAILABLE: Internal: ImportError: cannot import name '_convert_string2pytorch_dty |
| | | pe' from 'nvtabular.inference.triton' (/nvtabular/nvtabular/inference/triton/__init |
| | | __.py) |
| | | |
| | | At: |
| | | /workspace/models/t4r_pytorch_pt/1/model.py(42): <module> |
| | | <frozen importlib._bootstrap>(219): _call_with_frames_removed |
| | | <frozen importlib._bootstrap_external>(848): exec_module |
| | | <frozen importlib._bootstrap>(686): _load_unlocked |
| | | <frozen importlib._bootstrap>(975): _find_and_load_unlocked |
| | | <frozen importlib._bootstrap>(991): _find_and_load |
+-----------------+---------+---------------------------------------------------------
```
**Steps/Code to reproduce bug**
Run notebooks 02 and 03 of the Transformers4Rec tutorial [notebooks](https://github.com/NVIDIA-Merlin/Transformers4Rec/tree/main/examples/tutorial) to train the model. Then serve the model to TIS based on the instructions given in the [inference notebook](https://github.com/NVIDIA-Merlin/Transformers4Rec/blob/main/examples/tutorial/04-Inference-with-Triton.ipynb).
`Oct-2019.parquet` Dataset can be downloaded from here: https://drive.google.com/drive/u/0/folders/1nTuG6UHWOEaZnBJj7YSIVvnphE1zGc1h
**Expected behavior**
Model should be loaded to the TIS without issue.
**Environment details (please complete the following information):**
- Environment location: [Bare-metal, Docker, Cloud(specify cloud provider)] : Docker
 - Method of NVTabular install: [conda, Docker, or from source]: Docker `merlin-inference:21.11` and `merlin-pytorch-training:21.11`
Please do `git pull origin main` && `pip install -e .` to pull the latest main branch.
- If method of install is [Docker], provide `docker pull` & `docker run` commands used
This issue was also submitted by a user on the TF4Rec GH repo: https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/339
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nvtabular/inference/triton/__init__.py`
Content:
```
1 # Copyright (c) 2021, NVIDIA CORPORATION.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import json
16 import os
17
18 import pandas as pd
19
20 # this needs to be before any modules that import protobuf
21 os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
22
23 import tritonclient.grpc as grpcclient # noqa
24 from tritonclient.utils import np_to_triton_dtype # noqa
25
26 from nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa
27 from nvtabular.inference.triton.ensemble import ( # noqa
28 export_hugectr_ensemble,
29 export_pytorch_ensemble,
30 export_tensorflow_ensemble,
31 generate_hugectr_model,
32 generate_nvtabular_model,
33 )
34
35
36 def convert_df_to_triton_input(column_names, batch, input_class=grpcclient.InferInput):
37 columns = [(col, batch[col]) for col in column_names]
38 inputs = []
39 for i, (name, col) in enumerate(columns):
40 if _is_list_dtype(col):
41 if isinstance(col, pd.Series):
42 raise ValueError("this function doesn't support CPU list values yet")
43 inputs.append(
44 _convert_column_to_triton_input(
45 col._column.offsets.values_host.astype("int64"), name + "__nnzs", input_class
46 )
47 )
48 inputs.append(
49 _convert_column_to_triton_input(
50 col.list.leaves.values_host.astype("int64"), name + "__values", input_class
51 )
52 )
53 else:
54 values = col.values if isinstance(col, pd.Series) else col.values_host
55 inputs.append(_convert_column_to_triton_input(values, name, input_class))
56 return inputs
57
58
59 def _convert_column_to_triton_input(col, name, input_class=grpcclient.InferInput):
60 col = col.reshape(len(col), 1)
61 input_tensor = input_class(name, col.shape, np_to_triton_dtype(col.dtype))
62 input_tensor.set_data_from_numpy(col)
63 return input_tensor
64
65
66 def convert_triton_output_to_df(columns, response):
67 return _make_df({col: response.as_numpy(col) for col in columns})
68
69
70 def get_column_types(path):
71 return json.load(open(os.path.join(path, "column_types.json")))
72
73
74 def _convert_tensor(t):
75 out = t.as_numpy()
76 if len(out.shape) == 2:
77 out = out[:, 0]
78 # cudf doesn't seem to handle dtypes like |S15 or object that well
79 if _is_string_dtype(out.dtype):
80 out = out.astype("str")
81 return out
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nvtabular/inference/triton/__init__.py b/nvtabular/inference/triton/__init__.py
--- a/nvtabular/inference/triton/__init__.py
+++ b/nvtabular/inference/triton/__init__.py
@@ -25,6 +25,7 @@
from nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa
from nvtabular.inference.triton.ensemble import ( # noqa
+ _convert_string2pytorch_dtype,
export_hugectr_ensemble,
export_pytorch_ensemble,
export_tensorflow_ensemble,
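For illustration, a small check of what the one-line re-export above enables (assumes NVTabular is installed; otherwise the except branch fires): the Triton `model.py` quoted in the issue imports the helper from the package root, so `nvtabular/inference/triton/__init__.py` has to expose it.
```python
try:
    # This is the import the generated model.py performs; before the patch it
    # raised the ImportError shown in the issue traceback.
    from nvtabular.inference.triton import _convert_string2pytorch_dtype  # noqa: F401
except ImportError as exc:
    print(f"re-export not available: {exc}")
else:
    print("re-export available")
```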
| {"golden_diff": "diff --git a/nvtabular/inference/triton/__init__.py b/nvtabular/inference/triton/__init__.py\n--- a/nvtabular/inference/triton/__init__.py\n+++ b/nvtabular/inference/triton/__init__.py\n@@ -25,6 +25,7 @@\n \n from nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa\n from nvtabular.inference.triton.ensemble import ( # noqa\n+ _convert_string2pytorch_dtype,\n export_hugectr_ensemble,\n export_pytorch_ensemble,\n export_tensorflow_ensemble,\n", "issue": "[BUG] Getting error when loading the TF4Rec PyTorch model to the TIS\n**Describe the bug**\r\nI am getting the following error when I load a trained TF4Rec PyTorch to TIS:\r\n\r\n```\r\n | t4r_pytorch_pt | 1 | UNAVAILABLE: Internal: ImportError: cannot import name '_convert_string2pytorch_dty |\r\n| | | pe' from 'nvtabular.inference.triton' (/nvtabular/nvtabular/inference/triton/__init |\r\n| | | __.py) |\r\n| | | |\r\n| | | At: |\r\n| | | /workspace/models/t4r_pytorch_pt/1/model.py(42): <module> |\r\n| | | <frozen importlib._bootstrap>(219): _call_with_frames_removed |\r\n| | | <frozen importlib._bootstrap_external>(848): exec_module |\r\n| | | <frozen importlib._bootstrap>(686): _load_unlocked |\r\n| | | <frozen importlib._bootstrap>(975): _find_and_load_unlocked |\r\n| | | <frozen importlib._bootstrap>(991): _find_and_load |\r\n+-----------------+---------+---------------------------------------------------------\r\n```\r\n\r\n**Steps/Code to reproduce bug**\r\n\r\nRun the 02 and 03 notebooks Transformers4Rec tutorial [notebooks](https://github.com/NVIDIA-Merlin/Transformers4Rec/tree/main/examples/tutorial) to train the model. Then serve the model to TIS based on the instructions given on the [inference notebook](https://github.com/NVIDIA-Merlin/Transformers4Rec/blob/main/examples/tutorial/04-Inference-with-Triton.ipynb).\r\n\r\n`Oct-2019.parquet` Dataset can be downloaded from here: https://drive.google.com/drive/u/0/folders/1nTuG6UHWOEaZnBJj7YSIVvnphE1zGc1h\r\n\r\n**Expected behavior**\r\nModel should be loaded to the TIS without issue.\r\n\r\n**Environment details (please complete the following information):**\r\n - Environment location: [Bare-metal, Docker, Cloud(specify cloud provider)] : Docker\r\n - Method of NVTabular install: [conda, Docker, or from source]: Docker `merlin-inference:21.11` and `merlin-pytoch-training:21.11` `\r\n Please do `git pull origin main` && `pip install -e .` to pull the latest main branch.\r\n - If method of install is [Docker], provide `docker pull` & `docker run` commands used\r\n \r\nThis issue was also submitted by a user on TF4Rec GH repo- https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/339\r\n\n", "before_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport json\nimport os\n\nimport pandas as pd\n\n# this needs to be before any modules that import protobuf\nos.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\n\nimport tritonclient.grpc as grpcclient # noqa\nfrom 
tritonclient.utils import np_to_triton_dtype # noqa\n\nfrom nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa\nfrom nvtabular.inference.triton.ensemble import ( # noqa\n export_hugectr_ensemble,\n export_pytorch_ensemble,\n export_tensorflow_ensemble,\n generate_hugectr_model,\n generate_nvtabular_model,\n)\n\n\ndef convert_df_to_triton_input(column_names, batch, input_class=grpcclient.InferInput):\n columns = [(col, batch[col]) for col in column_names]\n inputs = []\n for i, (name, col) in enumerate(columns):\n if _is_list_dtype(col):\n if isinstance(col, pd.Series):\n raise ValueError(\"this function doesn't support CPU list values yet\")\n inputs.append(\n _convert_column_to_triton_input(\n col._column.offsets.values_host.astype(\"int64\"), name + \"__nnzs\", input_class\n )\n )\n inputs.append(\n _convert_column_to_triton_input(\n col.list.leaves.values_host.astype(\"int64\"), name + \"__values\", input_class\n )\n )\n else:\n values = col.values if isinstance(col, pd.Series) else col.values_host\n inputs.append(_convert_column_to_triton_input(values, name, input_class))\n return inputs\n\n\ndef _convert_column_to_triton_input(col, name, input_class=grpcclient.InferInput):\n col = col.reshape(len(col), 1)\n input_tensor = input_class(name, col.shape, np_to_triton_dtype(col.dtype))\n input_tensor.set_data_from_numpy(col)\n return input_tensor\n\n\ndef convert_triton_output_to_df(columns, response):\n return _make_df({col: response.as_numpy(col) for col in columns})\n\n\ndef get_column_types(path):\n return json.load(open(os.path.join(path, \"column_types.json\")))\n\n\ndef _convert_tensor(t):\n out = t.as_numpy()\n if len(out.shape) == 2:\n out = out[:, 0]\n # cudf doesn't seem to handle dtypes like |S15 or object that well\n if _is_string_dtype(out.dtype):\n out = out.astype(\"str\")\n return out\n", "path": "nvtabular/inference/triton/__init__.py"}], "after_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport json\nimport os\n\nimport pandas as pd\n\n# this needs to be before any modules that import protobuf\nos.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\n\nimport tritonclient.grpc as grpcclient # noqa\nfrom tritonclient.utils import np_to_triton_dtype # noqa\n\nfrom nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa\nfrom nvtabular.inference.triton.ensemble import ( # noqa\n _convert_string2pytorch_dtype,\n export_hugectr_ensemble,\n export_pytorch_ensemble,\n export_tensorflow_ensemble,\n generate_hugectr_model,\n generate_nvtabular_model,\n)\n\n\ndef convert_df_to_triton_input(column_names, batch, input_class=grpcclient.InferInput):\n columns = [(col, batch[col]) for col in column_names]\n inputs = []\n for i, (name, col) in enumerate(columns):\n if _is_list_dtype(col):\n if isinstance(col, pd.Series):\n raise ValueError(\"this function doesn't support CPU list values yet\")\n inputs.append(\n _convert_column_to_triton_input(\n 
col._column.offsets.values_host.astype(\"int64\"), name + \"__nnzs\", input_class\n )\n )\n inputs.append(\n _convert_column_to_triton_input(\n col.list.leaves.values_host.astype(\"int64\"), name + \"__values\", input_class\n )\n )\n else:\n values = col.values if isinstance(col, pd.Series) else col.values_host\n inputs.append(_convert_column_to_triton_input(values, name, input_class))\n return inputs\n\n\ndef _convert_column_to_triton_input(col, name, input_class=grpcclient.InferInput):\n col = col.reshape(len(col), 1)\n input_tensor = input_class(name, col.shape, np_to_triton_dtype(col.dtype))\n input_tensor.set_data_from_numpy(col)\n return input_tensor\n\n\ndef convert_triton_output_to_df(columns, response):\n return _make_df({col: response.as_numpy(col) for col in columns})\n\n\ndef get_column_types(path):\n return json.load(open(os.path.join(path, \"column_types.json\")))\n\n\ndef _convert_tensor(t):\n out = t.as_numpy()\n if len(out.shape) == 2:\n out = out[:, 0]\n # cudf doesn't seem to handle dtypes like |S15 or object that well\n if _is_string_dtype(out.dtype):\n out = out.astype(\"str\")\n return out\n", "path": "nvtabular/inference/triton/__init__.py"}]} | 1,744 | 145 |
gh_patches_debug_28929 | rasdani/github-patches | git_diff | iterative__dvc-7729 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dvc list: Error on empty directory.
# Bug Report
Got an error message on an empty directory; shouldn't it show nothing, like the ls command?
<!--
## Issue name
Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.
Example: `repro: doesn't detect input changes`
-->
## Description

Error when listing an empty path; strange behavior.
Might relate to https://github.com/iterative/dvc/blob/daf07451f8e8f3e76a791c696b0ea175e8ed3ac1/dvc/repo/ls.py#L40-L41
<!--
A clear and concise description of what the bug is.
-->
### Reproduce
1. git init
2. dvc init
3. mkdir empty
4. dvc list . empty
<!--
Step list of how to reproduce the bug
-->
<!--
Example:
1. dvc init
2. Copy dataset.zip to the directory
3. dvc add dataset.zip
4. dvc run -d dataset.zip -o model ./train.sh
5. modify dataset.zip
6. dvc repro
-->
### Expected
Show nothing like ls command

<!--
A clear and concise description of what you expect to happen.
-->
### Environment information
DVC version: 2.0.17+7e4851
---------------------------------
Platform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit
Supports: All remotes
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: None
Workspace directory: apfs on /dev/disk3s1s1
Repo: dvc, git
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
```console
$ dvc doctor
```
**Additional Information (if any):**
<!--
Please check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.
If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.
If the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/repo/ls.py`
Content:
```
1 import os
2 from itertools import chain
3
4 from dvc.exceptions import PathMissingError
5
6
7 def ls(url, path=None, rev=None, recursive=None, dvc_only=False):
8 """Methods for getting files and outputs for the repo.
9
10 Args:
11 url (str): the repo url
12 path (str, optional): relative path into the repo
13 rev (str, optional): SHA commit, branch or tag name
14 recursive (bool, optional): recursively walk the repo
15 dvc_only (bool, optional): show only DVC-artifacts
16
17 Returns:
18 list of `entry`
19
20 Notes:
21 `entry` is a dictionary with structure
22 {
23 "path": str,
24 "isout": bool,
25 "isdir": bool,
26 "isexec": bool,
27 }
28 """
29 from . import Repo
30
31 with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:
32 path = path or ""
33
34 ret = _ls(repo.repo_fs, path, recursive, dvc_only)
35
36 if path and not ret:
37 raise PathMissingError(path, repo, dvc_only=dvc_only)
38
39 ret_list = []
40 for path, info in ret.items():
41 info["path"] = path
42 ret_list.append(info)
43 ret_list.sort(key=lambda f: f["path"])
44 return ret_list
45
46
47 def _ls(fs, path, recursive=None, dvc_only=False):
48 fs_path = fs.from_os_path(path)
49
50 try:
51 fs_path = fs.info(fs_path)["name"]
52 except FileNotFoundError:
53 return {}
54
55 infos = {}
56 for root, dirs, files in fs.walk(
57 fs_path, dvcfiles=True, dvc_only=dvc_only
58 ):
59 entries = chain(files, dirs) if not recursive else files
60
61 for entry in entries:
62 entry_fs_path = fs.path.join(root, entry)
63 relparts = fs.path.relparts(entry_fs_path, fs_path)
64 name = os.path.join(*relparts)
65 infos[name] = fs.info(entry_fs_path)
66
67 if not recursive:
68 break
69
70 if not infos and fs.isfile(fs_path):
71 infos[os.path.basename(path)] = fs.info(fs_path)
72
73 ret = {}
74 for name, info in infos.items():
75 dvc_info = info.get("dvc_info", {})
76 if dvc_info.get("outs") or not dvc_only:
77 ret[name] = {
78 "isout": dvc_info.get("isout", False),
79 "isdir": info["type"] == "directory",
80 "isexec": info.get("isexec", False),
81 }
82
83 return ret
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/repo/ls.py b/dvc/repo/ls.py
--- a/dvc/repo/ls.py
+++ b/dvc/repo/ls.py
@@ -1,10 +1,22 @@
import os
from itertools import chain
+from typing import TYPE_CHECKING, Optional
from dvc.exceptions import PathMissingError
+if TYPE_CHECKING:
+ from dvc.fs.repo import RepoFileSystem
-def ls(url, path=None, rev=None, recursive=None, dvc_only=False):
+ from . import Repo
+
+
+def ls(
+ url: str,
+ path: Optional[str] = None,
+ rev: str = None,
+ recursive: bool = None,
+ dvc_only: bool = False,
+):
"""Methods for getting files and outputs for the repo.
Args:
@@ -31,10 +43,7 @@
with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:
path = path or ""
- ret = _ls(repo.repo_fs, path, recursive, dvc_only)
-
- if path and not ret:
- raise PathMissingError(path, repo, dvc_only=dvc_only)
+ ret = _ls(repo, path, recursive, dvc_only)
ret_list = []
for path, info in ret.items():
@@ -44,13 +53,16 @@
return ret_list
-def _ls(fs, path, recursive=None, dvc_only=False):
+def _ls(
+ repo: "Repo", path: str, recursive: bool = None, dvc_only: bool = False
+):
+ fs: "RepoFileSystem" = repo.repo_fs
fs_path = fs.from_os_path(path)
try:
fs_path = fs.info(fs_path)["name"]
except FileNotFoundError:
- return {}
+ raise PathMissingError(path, repo, dvc_only=dvc_only)
infos = {}
for root, dirs, files in fs.walk(
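For illustration, a standalone sketch of the behavioural change (plain Python, not the DVC API): only a genuinely missing path should raise, while an existing but empty directory should simply produce an empty listing, matching `ls`.
```python
import os
import tempfile


class PathMissingError(Exception):
    """Stand-in for dvc.exceptions.PathMissingError."""


def list_path(path):
    if not os.path.exists(path):
        raise PathMissingError(path)
    return sorted(os.listdir(path))  # [] for an empty directory, no error


with tempfile.TemporaryDirectory() as empty_dir:
    print(list_path(empty_dir))  # -> []
```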
| {"golden_diff": "diff --git a/dvc/repo/ls.py b/dvc/repo/ls.py\n--- a/dvc/repo/ls.py\n+++ b/dvc/repo/ls.py\n@@ -1,10 +1,22 @@\n import os\n from itertools import chain\n+from typing import TYPE_CHECKING, Optional\n \n from dvc.exceptions import PathMissingError\n \n+if TYPE_CHECKING:\n+ from dvc.fs.repo import RepoFileSystem\n \n-def ls(url, path=None, rev=None, recursive=None, dvc_only=False):\n+ from . import Repo\n+\n+\n+def ls(\n+ url: str,\n+ path: Optional[str] = None,\n+ rev: str = None,\n+ recursive: bool = None,\n+ dvc_only: bool = False,\n+):\n \"\"\"Methods for getting files and outputs for the repo.\n \n Args:\n@@ -31,10 +43,7 @@\n with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:\n path = path or \"\"\n \n- ret = _ls(repo.repo_fs, path, recursive, dvc_only)\n-\n- if path and not ret:\n- raise PathMissingError(path, repo, dvc_only=dvc_only)\n+ ret = _ls(repo, path, recursive, dvc_only)\n \n ret_list = []\n for path, info in ret.items():\n@@ -44,13 +53,16 @@\n return ret_list\n \n \n-def _ls(fs, path, recursive=None, dvc_only=False):\n+def _ls(\n+ repo: \"Repo\", path: str, recursive: bool = None, dvc_only: bool = False\n+):\n+ fs: \"RepoFileSystem\" = repo.repo_fs\n fs_path = fs.from_os_path(path)\n \n try:\n fs_path = fs.info(fs_path)[\"name\"]\n except FileNotFoundError:\n- return {}\n+ raise PathMissingError(path, repo, dvc_only=dvc_only)\n \n infos = {}\n for root, dirs, files in fs.walk(\n", "issue": "dvc list: Error on empty directory. \n# Bug Report\r\n\r\nGot error message on an empty directory, shouldn't it show nothing? like ls command.\r\n\r\n\r\n<!--\r\n## Issue name\r\n\r\nIssue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug. \r\n\r\nExample: `repro: doesn't detect input changes`\r\n-->\r\n\r\n## Description\r\n\r\nError when list a empty path, strange behavior.\r\nMight relate to https://github.com/iterative/dvc/blob/daf07451f8e8f3e76a791c696b0ea175e8ed3ac1/dvc/repo/ls.py#L40-L41\r\n\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\n\r\n### Reproduce\r\n\r\n1. git init\r\n2. dvc init\r\n3. mkdir empty\r\n4. dvc list . empty\r\n\r\n<!--\r\nStep list of how to reproduce the bug\r\n-->\r\n\r\n<!--\r\nExample:\r\n\r\n1. dvc init\r\n2. Copy dataset.zip to the directory\r\n3. dvc add dataset.zip\r\n4. dvc run -d dataset.zip -o model ./train.sh\r\n5. modify dataset.zip\r\n6. 
dvc repro\r\n-->\r\n\r\n### Expected\r\nShow nothing like ls command\r\n\r\n\r\n<!--\r\nA clear and concise description of what you expect to happen.\r\n-->\r\n\r\n### Environment information\r\nDVC version: 2.0.17+7e4851\r\n---------------------------------\r\nPlatform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit\r\nSupports: All remotes\r\nCache types: <https://error.dvc.org/no-dvc-cache>\r\nCaches: local\r\nRemotes: None\r\nWorkspace directory: apfs on /dev/disk3s1s1\r\nRepo: dvc, git\r\n<!--\r\nThis is required to ensure that we can reproduce the bug.\r\n-->\r\n\r\n**Output of `dvc doctor`:**\r\n\r\n```console\r\n$ dvc doctor\r\n```\r\n\r\n**Additional Information (if any):**\r\n\r\n<!--\r\nPlease check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.\r\n\r\nIf applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.\r\nIf the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.\r\n-->\r\n\n", "before_files": [{"content": "import os\nfrom itertools import chain\n\nfrom dvc.exceptions import PathMissingError\n\n\ndef ls(url, path=None, rev=None, recursive=None, dvc_only=False):\n \"\"\"Methods for getting files and outputs for the repo.\n\n Args:\n url (str): the repo url\n path (str, optional): relative path into the repo\n rev (str, optional): SHA commit, branch or tag name\n recursive (bool, optional): recursively walk the repo\n dvc_only (bool, optional): show only DVC-artifacts\n\n Returns:\n list of `entry`\n\n Notes:\n `entry` is a dictionary with structure\n {\n \"path\": str,\n \"isout\": bool,\n \"isdir\": bool,\n \"isexec\": bool,\n }\n \"\"\"\n from . import Repo\n\n with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:\n path = path or \"\"\n\n ret = _ls(repo.repo_fs, path, recursive, dvc_only)\n\n if path and not ret:\n raise PathMissingError(path, repo, dvc_only=dvc_only)\n\n ret_list = []\n for path, info in ret.items():\n info[\"path\"] = path\n ret_list.append(info)\n ret_list.sort(key=lambda f: f[\"path\"])\n return ret_list\n\n\ndef _ls(fs, path, recursive=None, dvc_only=False):\n fs_path = fs.from_os_path(path)\n\n try:\n fs_path = fs.info(fs_path)[\"name\"]\n except FileNotFoundError:\n return {}\n\n infos = {}\n for root, dirs, files in fs.walk(\n fs_path, dvcfiles=True, dvc_only=dvc_only\n ):\n entries = chain(files, dirs) if not recursive else files\n\n for entry in entries:\n entry_fs_path = fs.path.join(root, entry)\n relparts = fs.path.relparts(entry_fs_path, fs_path)\n name = os.path.join(*relparts)\n infos[name] = fs.info(entry_fs_path)\n\n if not recursive:\n break\n\n if not infos and fs.isfile(fs_path):\n infos[os.path.basename(path)] = fs.info(fs_path)\n\n ret = {}\n for name, info in infos.items():\n dvc_info = info.get(\"dvc_info\", {})\n if dvc_info.get(\"outs\") or not dvc_only:\n ret[name] = {\n \"isout\": dvc_info.get(\"isout\", False),\n \"isdir\": info[\"type\"] == \"directory\",\n \"isexec\": info.get(\"isexec\", False),\n }\n\n return ret\n", "path": "dvc/repo/ls.py"}], "after_files": [{"content": "import os\nfrom itertools import chain\nfrom typing import TYPE_CHECKING, Optional\n\nfrom dvc.exceptions import PathMissingError\n\nif TYPE_CHECKING:\n from dvc.fs.repo import RepoFileSystem\n\n from . 
import Repo\n\n\ndef ls(\n url: str,\n path: Optional[str] = None,\n rev: str = None,\n recursive: bool = None,\n dvc_only: bool = False,\n):\n \"\"\"Methods for getting files and outputs for the repo.\n\n Args:\n url (str): the repo url\n path (str, optional): relative path into the repo\n rev (str, optional): SHA commit, branch or tag name\n recursive (bool, optional): recursively walk the repo\n dvc_only (bool, optional): show only DVC-artifacts\n\n Returns:\n list of `entry`\n\n Notes:\n `entry` is a dictionary with structure\n {\n \"path\": str,\n \"isout\": bool,\n \"isdir\": bool,\n \"isexec\": bool,\n }\n \"\"\"\n from . import Repo\n\n with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:\n path = path or \"\"\n\n ret = _ls(repo, path, recursive, dvc_only)\n\n ret_list = []\n for path, info in ret.items():\n info[\"path\"] = path\n ret_list.append(info)\n ret_list.sort(key=lambda f: f[\"path\"])\n return ret_list\n\n\ndef _ls(\n repo: \"Repo\", path: str, recursive: bool = None, dvc_only: bool = False\n):\n fs: \"RepoFileSystem\" = repo.repo_fs\n fs_path = fs.from_os_path(path)\n\n try:\n fs_path = fs.info(fs_path)[\"name\"]\n except FileNotFoundError:\n raise PathMissingError(path, repo, dvc_only=dvc_only)\n\n infos = {}\n for root, dirs, files in fs.walk(\n fs_path, dvcfiles=True, dvc_only=dvc_only\n ):\n entries = chain(files, dirs) if not recursive else files\n\n for entry in entries:\n entry_fs_path = fs.path.join(root, entry)\n relparts = fs.path.relparts(entry_fs_path, fs_path)\n name = os.path.join(*relparts)\n infos[name] = fs.info(entry_fs_path)\n\n if not recursive:\n break\n\n if not infos and fs.isfile(fs_path):\n infos[os.path.basename(path)] = fs.info(fs_path)\n\n ret = {}\n for name, info in infos.items():\n dvc_info = info.get(\"dvc_info\", {})\n if dvc_info.get(\"outs\") or not dvc_only:\n ret[name] = {\n \"isout\": dvc_info.get(\"isout\", False),\n \"isdir\": info[\"type\"] == \"directory\",\n \"isexec\": info.get(\"isexec\", False),\n }\n\n return ret\n", "path": "dvc/repo/ls.py"}]} | 1,662 | 448 |
gh_patches_debug_14771 | rasdani/github-patches | git_diff | litestar-org__litestar-992 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Running `starlite run` after installing starlite[cli] gives error about missing cryptography package
The error is here:
```
Traceback (most recent call last):
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\middleware\session\cookie_backend.py", line 20,
in <module>
from cryptography.exceptions import InvalidTag
ModuleNotFoundError: No module named 'cryptography'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\Scripts\starlite.exe\__main__.py", line 4, in <module>
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\cli.py", line 41, in <module>
from starlite.middleware.session import SessionMiddleware
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\middleware\session\__init__.py", line 2, in <module>
from .cookie_backend import (
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\middleware\session\cookie_backend.py", line 23,
in <module>
raise MissingDependencyException("cryptography is not installed") from e
starlite.exceptions.base_exceptions.MissingDependencyException: cryptography is not installed
```
I thought it might be a good idea to install the package automatically with the CLI extra. (Or to update the [docs](https://starlite-api.github.io/starlite/usage/19-cli/?h=uvicorn) if I'm missing something).
My versions: Windows, Python 3.10, starlite 1.46.0
PS: Thank you all for the great amount of effort you spend on this project!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/middleware/session/__init__.py`
Content:
```
1 from .base import SessionMiddleware
2 from .cookie_backend import (
3 CookieBackendConfig as SessionCookieConfig, # backwards compatible export
4 )
5
6 __all__ = [
7 "SessionMiddleware",
8 "SessionCookieConfig",
9 ]
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/middleware/session/__init__.py b/starlite/middleware/session/__init__.py
--- a/starlite/middleware/session/__init__.py
+++ b/starlite/middleware/session/__init__.py
@@ -1,9 +1,27 @@
+from typing import Any
+
+from starlite.utils import warn_deprecation
+
from .base import SessionMiddleware
-from .cookie_backend import (
- CookieBackendConfig as SessionCookieConfig, # backwards compatible export
-)
-
-__all__ = [
- "SessionMiddleware",
- "SessionCookieConfig",
-]
+
+
+def __getattr__(name: str) -> Any:
+ """Provide lazy importing as per https://peps.python.org/pep-0562/"""
+
+ if name != "SessionCookieConfig":
+ raise AttributeError(f"Module {__package__} has no attribute {name}")
+
+ from .cookie_backend import CookieBackendConfig
+
+ warn_deprecation(
+ deprecated_name=f"{name} from {__package__}",
+ kind="import",
+ alternative="'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'",
+ version="1.47.0",
+ )
+
+ globals()[name] = CookieBackendConfig
+ return CookieBackendConfig
+
+
+__all__ = ["SessionMiddleware"]
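The fix above replaces the eager `from .cookie_backend import ...` re-export with a module-level `__getattr__` (PEP 562), so `starlite.middleware.session` can be imported, and the CLI can start, without `cryptography` installed; the cookie backend is only resolved when somebody actually asks for the deprecated name. A generic sketch of the pattern, with placeholder package and attribute names rather than Starlite's real API:

```python
# mypackage/__init__.py -- lazy, deprecating re-export via PEP 562 (names are illustrative)
import warnings
from typing import Any

from .base import CoreThing  # safe to import eagerly, no optional dependencies

__all__ = ["CoreThing"]


def __getattr__(name: str) -> Any:
    if name != "OptionalThing":
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
    # Imported only on first attribute access, so merely importing the package
    # does not require the optional dependency.
    from ._optional import OptionalThing
    warnings.warn(
        "importing OptionalThing from mypackage is deprecated; "
        "import it from mypackage._optional instead",
        DeprecationWarning,
        stacklevel=2,
    )
    globals()[name] = OptionalThing  # cache so __getattr__ only runs once
    return OptionalThing
```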
| {"golden_diff": "diff --git a/starlite/middleware/session/__init__.py b/starlite/middleware/session/__init__.py\n--- a/starlite/middleware/session/__init__.py\n+++ b/starlite/middleware/session/__init__.py\n@@ -1,9 +1,27 @@\n+from typing import Any\n+\n+from starlite.utils import warn_deprecation\n+\n from .base import SessionMiddleware\n-from .cookie_backend import (\n- CookieBackendConfig as SessionCookieConfig, # backwards compatible export\n-)\n-\n-__all__ = [\n- \"SessionMiddleware\",\n- \"SessionCookieConfig\",\n-]\n+\n+\n+def __getattr__(name: str) -> Any:\n+ \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n+\n+ if name != \"SessionCookieConfig\":\n+ raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n+\n+ from .cookie_backend import CookieBackendConfig\n+\n+ warn_deprecation(\n+ deprecated_name=f\"{name} from {__package__}\",\n+ kind=\"import\",\n+ alternative=\"'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'\",\n+ version=\"1.47.0\",\n+ )\n+\n+ globals()[name] = CookieBackendConfig\n+ return CookieBackendConfig\n+\n+\n+__all__ = [\"SessionMiddleware\"]\n", "issue": "Bug: Running `starlite run` after installing starlite[cli] gives error about missing cryptography package\nThe error is here:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\middleware\\session\\cookie_backend.py\", line 20, \r\nin <module>\r\n from cryptography.exceptions import InvalidTag\r\nModuleNotFoundError: No module named 'cryptography'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\Scripts\\starlite.exe\\__main__.py\", line 4, in <module>\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\cli.py\", line 41, in <module>\r\n from starlite.middleware.session import SessionMiddleware\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\middleware\\session\\__init__.py\", line 2, in <module>\r\n from .cookie_backend import (\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\middleware\\session\\cookie_backend.py\", line 23, \r\nin <module>\r\n raise MissingDependencyException(\"cryptography is not installed\") from e\r\nstarlite.exceptions.base_exceptions.MissingDependencyException: cryptography is not installed\r\n```\r\n\r\nI thought it might be a good idea to install the package automatically with the CLI extra. 
(Or to update the [docs](https://starlite-api.github.io/starlite/usage/19-cli/?h=uvicorn) if I'm missing something).\r\n\r\nMy versions: Windows, Python 3.10, starlite 1.46.0 \r\n\r\nPS: Thank you all for the great amount of effort you spend on this project!\n", "before_files": [{"content": "from .base import SessionMiddleware\nfrom .cookie_backend import (\n CookieBackendConfig as SessionCookieConfig, # backwards compatible export\n)\n\n__all__ = [\n \"SessionMiddleware\",\n \"SessionCookieConfig\",\n]\n", "path": "starlite/middleware/session/__init__.py"}], "after_files": [{"content": "from typing import Any\n\nfrom starlite.utils import warn_deprecation\n\nfrom .base import SessionMiddleware\n\n\ndef __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n\n if name != \"SessionCookieConfig\":\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n\n from .cookie_backend import CookieBackendConfig\n\n warn_deprecation(\n deprecated_name=f\"{name} from {__package__}\",\n kind=\"import\",\n alternative=\"'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'\",\n version=\"1.47.0\",\n )\n\n globals()[name] = CookieBackendConfig\n return CookieBackendConfig\n\n\n__all__ = [\"SessionMiddleware\"]\n", "path": "starlite/middleware/session/__init__.py"}]} | 848 | 296 |
gh_patches_debug_19426 | rasdani/github-patches | git_diff | nautobot__nautobot-975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`::1/128` is not a valid prefix
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Python version: 3.6
* Nautobot version: 1.1.3
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
When trying to create the prefix `::1/128` I get the following error:
```no-highlight
<class 'netaddr.core.AddrFormatError'>
invalid IPNetwork 0.0.0.1/128
```
Both Python netaddr and ipaddress modules see this as a valid IPNetwork.
### Steps to Reproduce
1. Create a prefix or aggregate using the prefix `::1/128`
<!-- What did you expect to happen? -->
### Expected Behavior
Prefix created
<!-- What happened instead? -->
### Observed Behavior
```
invalid IPNetwork 0.0.0.1/128
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/ipam/fields.py`
Content:
```
1 from django.core.exceptions import ValidationError
2 from django.db import models
3 from django.utils.datastructures import DictWrapper
4 import netaddr
5
6 from .formfields import IPNetworkFormField
7
8
9 class VarbinaryIPField(models.BinaryField):
10 """
11 IP network address
12 """
13
14 description = "IP network address"
15
16 def __init__(self, **kwargs):
17 super().__init__(**kwargs)
18
19 def db_type(self, connection):
20 """Returns the correct field type for a given database vendor."""
21
22 # Use 'bytea' type for PostgreSQL.
23 if connection.vendor == "postgresql":
24 return "bytea"
25
26 # Or 'varbinary' for everyone else.
27 return "varbinary(16)"
28
29 def value_to_string(self, obj):
30 """IPField is serialized as str(IPAddress())"""
31 value = self.value_from_object(obj)
32 if not value:
33 return value
34
35 return str(self._parse_address(value))
36
37 def _parse_address(self, value):
38 """
39 Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.
40 """
41 try:
42 value = int.from_bytes(value, "big")
43 except TypeError:
44 pass # It's a string
45
46 try:
47 return netaddr.IPAddress(value)
48 except netaddr.AddrFormatError:
49 raise ValidationError("Invalid IP address format: {}".format(value))
50 except (TypeError, ValueError) as e:
51 raise ValidationError(e)
52
53 def from_db_value(self, value, expression, connection):
54 """Converts DB (varbinary) to Python (str)."""
55 return self.to_python(value)
56
57 def to_python(self, value):
58 """Converts `value` to Python (str)."""
59 if isinstance(value, netaddr.IPAddress):
60 return str(value)
61
62 if value is None:
63 return value
64
65 return str(self._parse_address(value))
66
67 def get_db_prep_value(self, value, connection, prepared=False):
68 """Converts Python (str) to DB (varbinary)."""
69 if value is None:
70 return value
71
72 # Parse the address and then pack it to binary.
73 value = self._parse_address(value).packed
74
75 # Use defaults for PostgreSQL
76 if connection.vendor == "postgresql":
77 return super().get_db_prep_value(value, connection, prepared)
78
79 return value
80
81 def form_class(self):
82 return IPNetworkFormField
83
84 def formfield(self, **kwargs):
85 defaults = {"form_class": self.form_class()}
86 defaults.update(kwargs)
87 return super().formfield(**defaults)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/ipam/fields.py b/nautobot/ipam/fields.py
--- a/nautobot/ipam/fields.py
+++ b/nautobot/ipam/fields.py
@@ -39,12 +39,17 @@
Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.
"""
try:
- value = int.from_bytes(value, "big")
+ int_value = int.from_bytes(value, "big")
+ # Distinguish between
+ # \x00\x00\x00\x01 (IPv4 0.0.0.1) and
+ # \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01 (IPv6 ::1), among other cases
+ version = 4 if len(value) == 4 else 6
+ value = int_value
except TypeError:
- pass # It's a string
+ version = None # It's a string, IP version should be self-evident
try:
- return netaddr.IPAddress(value)
+ return netaddr.IPAddress(value, version=version)
except netaddr.AddrFormatError:
raise ValidationError("Invalid IP address format: {}".format(value))
except (TypeError, ValueError) as e:
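The underlying problem is that `int.from_bytes` discards the address family: `::1` packs to sixteen bytes whose integer value is 1, and `netaddr.IPAddress(1)` defaults to IPv4, i.e. `0.0.0.1`. The patch recovers the version from the length of the packed value. A quick check with `netaddr` alone (outside Nautobot) shows both the ambiguity and why passing `version` resolves it:

```python
import netaddr

packed = netaddr.IPAddress("::1").packed     # 16 bytes for an IPv6 address
as_int = int.from_bytes(packed, "big")       # == 1; the family information is gone

print(netaddr.IPAddress(as_int))             # 0.0.0.1  (interpreted as IPv4)
print(netaddr.IPAddress(as_int, version=6))  # ::1      (explicit version restores it)
print(len(packed))                           # 16, which is how the patch infers version=6
```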
| {"golden_diff": "diff --git a/nautobot/ipam/fields.py b/nautobot/ipam/fields.py\n--- a/nautobot/ipam/fields.py\n+++ b/nautobot/ipam/fields.py\n@@ -39,12 +39,17 @@\n Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.\n \"\"\"\n try:\n- value = int.from_bytes(value, \"big\")\n+ int_value = int.from_bytes(value, \"big\")\n+ # Distinguish between\n+ # \\x00\\x00\\x00\\x01 (IPv4 0.0.0.1) and\n+ # \\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01 (IPv6 ::1), among other cases\n+ version = 4 if len(value) == 4 else 6\n+ value = int_value\n except TypeError:\n- pass # It's a string\n+ version = None # It's a string, IP version should be self-evident\n \n try:\n- return netaddr.IPAddress(value)\n+ return netaddr.IPAddress(value, version=version)\n except netaddr.AddrFormatError:\n raise ValidationError(\"Invalid IP address format: {}\".format(value))\n except (TypeError, ValueError) as e:\n", "issue": "`::1/128` is not a valid prefix\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version: 3.6\r\n* Nautobot version: 1.1.3\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n\r\nWhen trying to create the prefix `::1/128` I get the following error:\r\n\r\n```no-highlight\r\n<class 'netaddr.core.AddrFormatError'>\r\n\r\ninvalid IPNetwork 0.0.0.1/128\r\n```\r\n\r\nBoth Python netaddr and ipaddress modules see this as a valid IPNetwork. \r\n\r\n### Steps to Reproduce\r\n1. Create a prefix or aggregate using the prefix `::1/128`\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\n\r\nPrefix created\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\n\r\n```\r\ninvalid IPNetwork 0.0.0.1/128\r\n```\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.datastructures import DictWrapper\nimport netaddr\n\nfrom .formfields import IPNetworkFormField\n\n\nclass VarbinaryIPField(models.BinaryField):\n \"\"\"\n IP network address\n \"\"\"\n\n description = \"IP network address\"\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n\n def db_type(self, connection):\n \"\"\"Returns the correct field type for a given database vendor.\"\"\"\n\n # Use 'bytea' type for PostgreSQL.\n if connection.vendor == \"postgresql\":\n return \"bytea\"\n\n # Or 'varbinary' for everyone else.\n return \"varbinary(16)\"\n\n def value_to_string(self, obj):\n \"\"\"IPField is serialized as str(IPAddress())\"\"\"\n value = self.value_from_object(obj)\n if not value:\n return value\n\n return str(self._parse_address(value))\n\n def _parse_address(self, value):\n \"\"\"\n Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.\n \"\"\"\n try:\n value = int.from_bytes(value, \"big\")\n except TypeError:\n pass # It's a string\n\n try:\n return netaddr.IPAddress(value)\n except netaddr.AddrFormatError:\n raise ValidationError(\"Invalid IP address format: {}\".format(value))\n except (TypeError, ValueError) as e:\n raise ValidationError(e)\n\n def from_db_value(self, value, expression, connection):\n \"\"\"Converts DB (varbinary) to Python (str).\"\"\"\n return self.to_python(value)\n\n def to_python(self, value):\n \"\"\"Converts `value` to Python (str).\"\"\"\n if isinstance(value, netaddr.IPAddress):\n return str(value)\n\n if value is None:\n return value\n\n return str(self._parse_address(value))\n\n def get_db_prep_value(self, value, connection, prepared=False):\n \"\"\"Converts Python (str) to DB (varbinary).\"\"\"\n if value is None:\n return value\n\n # Parse the address and then pack it to binary.\n value = self._parse_address(value).packed\n\n # Use defaults for PostgreSQL\n if connection.vendor == \"postgresql\":\n return super().get_db_prep_value(value, connection, prepared)\n\n return value\n\n def form_class(self):\n return IPNetworkFormField\n\n def formfield(self, **kwargs):\n defaults = {\"form_class\": self.form_class()}\n defaults.update(kwargs)\n return super().formfield(**defaults)\n", "path": "nautobot/ipam/fields.py"}], "after_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.datastructures import DictWrapper\nimport netaddr\n\nfrom .formfields import IPNetworkFormField\n\n\nclass VarbinaryIPField(models.BinaryField):\n \"\"\"\n IP network address\n \"\"\"\n\n description = \"IP network address\"\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n\n def db_type(self, connection):\n \"\"\"Returns the correct field type for a given database vendor.\"\"\"\n\n # Use 'bytea' type for PostgreSQL.\n if connection.vendor == \"postgresql\":\n return \"bytea\"\n\n # Or 'varbinary' for everyone else.\n return \"varbinary(16)\"\n\n def value_to_string(self, obj):\n \"\"\"IPField is serialized as str(IPAddress())\"\"\"\n value = self.value_from_object(obj)\n if not value:\n return value\n\n return str(self._parse_address(value))\n\n def _parse_address(self, value):\n \"\"\"\n Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.\n \"\"\"\n try:\n int_value = int.from_bytes(value, \"big\")\n # Distinguish 
between\n # \\x00\\x00\\x00\\x01 (IPv4 0.0.0.1) and\n # \\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01 (IPv6 ::1), among other cases\n version = 4 if len(value) == 4 else 6\n value = int_value\n except TypeError:\n version = None # It's a string, IP version should be self-evident\n\n try:\n return netaddr.IPAddress(value, version=version)\n except netaddr.AddrFormatError:\n raise ValidationError(\"Invalid IP address format: {}\".format(value))\n except (TypeError, ValueError) as e:\n raise ValidationError(e)\n\n def from_db_value(self, value, expression, connection):\n \"\"\"Converts DB (varbinary) to Python (str).\"\"\"\n return self.to_python(value)\n\n def to_python(self, value):\n \"\"\"Converts `value` to Python (str).\"\"\"\n if isinstance(value, netaddr.IPAddress):\n return str(value)\n\n if value is None:\n return value\n\n return str(self._parse_address(value))\n\n def get_db_prep_value(self, value, connection, prepared=False):\n \"\"\"Converts Python (str) to DB (varbinary).\"\"\"\n if value is None:\n return value\n\n # Parse the address and then pack it to binary.\n value = self._parse_address(value).packed\n\n # Use defaults for PostgreSQL\n if connection.vendor == \"postgresql\":\n return super().get_db_prep_value(value, connection, prepared)\n\n return value\n\n def form_class(self):\n return IPNetworkFormField\n\n def formfield(self, **kwargs):\n defaults = {\"form_class\": self.form_class()}\n defaults.update(kwargs)\n return super().formfield(**defaults)\n", "path": "nautobot/ipam/fields.py"}]} | 1,382 | 327 |
gh_patches_debug_28563 | rasdani/github-patches | git_diff | talonhub__community-479 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Number small can go larger than 100
If you say "ten five" number small will be 105.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `code/numbers.py`
Content:
```
1 from talon import Context, Module, actions
2 from typing import List, Optional, Union, Iterator
3
4 mod = Module()
5 ctx = Context()
6
7 digits = "zero one two three four five six seven eight nine".split()
8 teens = "eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen".split()
9 tens = "ten twenty thirty forty fifty sixty seventy eighty ninety".split()
10 scales = "hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion".split()
11
12 digits_map = {n: i for i, n in enumerate(digits)}
13 digits_map["oh"] = 0
14 teens_map = {n: i + 11 for i, n in enumerate(teens)}
15 tens_map = {n: 10 * (i + 1) for i, n in enumerate(tens)}
16 scales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}
17 scales_map["hundred"] = 100
18
19 numbers_map = digits_map.copy()
20 numbers_map.update(teens_map)
21 numbers_map.update(tens_map)
22 numbers_map.update(scales_map)
23
24 def parse_number(l: List[str]) -> str:
25 """Parses a list of words into a number/digit string."""
26 l = list(scan_small_numbers(l))
27 for scale in scales:
28 l = parse_scale(scale, l)
29 return "".join(str(n) for n in l)
30
31 def scan_small_numbers(l: List[str]) -> Iterator[Union[str,int]]:
32 """
33 Takes a list of number words, yields a generator of mixed numbers & strings.
34 Translates small number terms (<100) into corresponding numbers.
35 Drops all occurrences of "and".
36 Smashes digits onto tens words, eg. ["twenty", "one"] -> [21].
37 But note that "ten" and "zero" are excluded, ie:
38 ["ten", "three"] -> [10, 3]
39 ["fifty", "zero"] -> [50, 0]
40 Does nothing to scale words ("hundred", "thousand", "million", etc).
41 """
42 # reversed so that repeated pop() visits in left-to-right order
43 l = [x for x in reversed(l) if x != "and"]
44 while l:
45 n = l.pop()
46 # fuse tens onto digits, eg. "twenty", "one" -> 21
47 if n in tens_map and n != "ten" and l and digits_map.get(l[-1], 0) != 0:
48 d = l.pop()
49 yield numbers_map[n] + numbers_map[d]
50 # turn small number terms into corresponding numbers
51 elif n not in scales_map:
52 yield numbers_map[n]
53 else:
54 yield n
55
56 def parse_scale(scale: str, l: List[Union[str,int]]) -> List[Union[str,int]]:
57 """Parses a list of mixed numbers & strings for occurrences of the following
58 pattern:
59
60 <multiplier> <scale> <remainder>
61
62 where <scale> is a scale word like "hundred", "thousand", "million", etc and
63 multiplier and remainder are numbers or strings of numbers of the
64 appropriate size. For example:
65
66 parse_scale("hundred", [1, "hundred", 2]) -> [102]
67 parse_scale("thousand", [12, "thousand", 3, 45]) -> [12345]
68
69 We assume that all scales of lower magnitude have already been parsed; don't
70 call parse_scale("thousand") until you've called parse_scale("hundred").
71 """
72 scale_value = scales_map[scale]
73 scale_digits = len(str(scale_value))
74
75 # Split the list on the desired scale word, then parse from left to right.
76 left, *splits = split_list(scale, l)
77 for right in splits:
78 # (1) Figure out the multiplier by looking to the left of the scale
79 # word. We ignore non-integers because they are scale words that we
80 # haven't processed yet; this strategy means that "thousand hundred"
81 # gets parsed as 1,100 instead of 100,000, but "hundred thousand" is
82 # parsed correctly as 100,000.
83 before = 1 # default multiplier
84 if left and isinstance(left[-1], int) and left[-1] != 0:
85 before = left.pop()
86
87 # (2) Absorb numbers to the right, eg. in [1, "thousand", 1, 26], "1
88 # thousand" absorbs ["1", "26"] to make 1,126. We pull numbers off
89 # `right` until we fill up the desired number of digits.
90 after = ""
91 while right and isinstance(right[0], int):
92 next = after + str(right[0])
93 if len(next) >= scale_digits: break
94 after = next
95 right.pop(0)
96 after = int(after) if after else 0
97
98 # (3) Push the parsed number into place, append whatever was left
99 # unparsed, and continue.
100 left.append(before * scale_value + after)
101 left.extend(right)
102
103 return left
104
105 def split_list(value, l: list) -> Iterator:
106 """Splits a list by occurrences of a given value."""
107 start = 0
108 while True:
109 try: i = l.index(value, start)
110 except ValueError: break
111 yield l[start:i]
112 start = i+1
113 yield l[start:]
114
115
116 # # ---------- TESTS (uncomment to run) ----------
117 # def test_number(expected, string):
118 # print('testing:', string)
119 # l = list(scan_small_numbers(string.split()))
120 # print(" scan --->", l)
121 # for scale in scales:
122 # old = l
123 # l = parse_scale(scale, l)
124 # if scale in old: print(" parse -->", l)
125 # else: assert old == l, "parse_scale should do nothing if the scale does not occur in the list"
126 # result = "".join(str(n) for n in l)
127 # assert result == parse_number(string.split())
128 # assert str(expected) == result, f"parsing {string!r}, expected {expected}, got {result}"
129
130 # test_number(105000, "one hundred and five thousand")
131 # test_number(1000000, "one thousand thousand")
132 # test_number(1501000, "one million five hundred one thousand")
133 # test_number(1501106, "one million five hundred and one thousand one hundred and six")
134 # test_number(123, "one two three")
135 # test_number(123, "one twenty three")
136 # test_number(104, "ten four") # borderline, but valid in some dialects
137 # test_number(1066, "ten sixty six") # a common way of saying years
138 # test_number(1906, "nineteen oh six") # year
139 # test_number(2001, "twenty oh one") # year
140 # test_number(2020, "twenty twenty")
141 # test_number(1001, "one thousand one")
142 # test_number(1010, "one thousand ten")
143 # test_number(123456, "one hundred and twenty three thousand and four hundred and fifty six")
144 # test_number(123456, "one twenty three thousand four fifty six")
145
146 # ## failing (and somewhat debatable) tests from old numbers.py
147 # #test_number(10000011, "one million one one")
148 # #test_number(100001010, "one million ten ten")
149 # #test_number(1050006000, "one hundred thousand and five thousand and six thousand")
150
151
152 # ---------- CAPTURES ----------
153 alt_digits = "(" + ("|".join(digits_map.keys())) + ")"
154 alt_teens = "(" + ("|".join(teens_map.keys())) + ")"
155 alt_tens = "(" + ("|".join(tens_map.keys())) + ")"
156 alt_scales = "(" + ("|".join(scales_map.keys())) + ")"
157 number_word = "(" + "|".join(numbers_map.keys()) + ")"
158
159 # TODO: allow things like "double eight" for 88
160 @ctx.capture("digit_string", rule=f"({alt_digits} | {alt_teens} | {alt_tens})+")
161 def digit_string(m) -> str: return parse_number(list(m))
162
163 @ctx.capture("digits", rule="<digit_string>")
164 def digits(m) -> int:
165 """Parses a phrase representing a digit sequence, returning it as an integer."""
166 return int(m.digit_string)
167
168 @mod.capture(rule=f"{number_word}+ (and {number_word}+)*")
169 def number_string(m) -> str:
170 """Parses a number phrase, returning that number as a string."""
171 return parse_number(list(m))
172
173 @ctx.capture("number", rule="<user.number_string>")
174 def number(m) -> int:
175 """Parses a number phrase, returning it as an integer."""
176 return int(m.number_string)
177
178 @ctx.capture("number_signed", rule=f"[negative|minus] <number>")
179 def number_signed(m):
180 number = m[-1]
181 return -number if (m[0] in ["negative", "minus"]) else number
182
183 @ctx.capture(
184 "number_small", rule=f"({alt_digits} | {alt_teens} | {alt_tens} [{alt_digits}])"
185 )
186 def number_small(m): return int(parse_number(list(m)))
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/code/numbers.py b/code/numbers.py
--- a/code/numbers.py
+++ b/code/numbers.py
@@ -5,14 +5,14 @@
ctx = Context()
digits = "zero one two three four five six seven eight nine".split()
-teens = "eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen".split()
-tens = "ten twenty thirty forty fifty sixty seventy eighty ninety".split()
+teens = "ten eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen".split()
+tens = "twenty thirty forty fifty sixty seventy eighty ninety".split()
scales = "hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion".split()
digits_map = {n: i for i, n in enumerate(digits)}
digits_map["oh"] = 0
-teens_map = {n: i + 11 for i, n in enumerate(teens)}
-tens_map = {n: 10 * (i + 1) for i, n in enumerate(tens)}
+teens_map = {n: i + 10 for i, n in enumerate(teens)}
+tens_map = {n: 10 * (i + 2) for i, n in enumerate(tens)}
scales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}
scales_map["hundred"] = 100
@@ -44,7 +44,7 @@
while l:
n = l.pop()
# fuse tens onto digits, eg. "twenty", "one" -> 21
- if n in tens_map and n != "ten" and l and digits_map.get(l[-1], 0) != 0:
+ if n in tens_map and l and digits_map.get(l[-1], 0) != 0:
d = l.pop()
yield numbers_map[n] + numbers_map[d]
# turn small number terms into corresponding numbers
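Before this change, "ten" lived in the tens table, so the `number_small` rule `<tens> [<digit>]` accepted the utterance "ten five"; fusion deliberately skips "ten", so `parse_number` concatenated 10 and 5 into "105". Moving "ten" into the teens table (and starting the tens at twenty) removes that path while keeping "twenty one" fused to 21. A reduced reconstruction of just the affected tables, not the full Talon grammar:

```python
digits = "zero one two three four five six seven eight nine".split()
digits_map = {n: i for i, n in enumerate(digits)}

# Before: "ten" is a tens word, so the `<tens> [<digit>]` capture matches "ten five",
# and because fusion skips "ten" the result is str(10) + str(5) == "105".
old_tens_map = {n: 10 * (i + 1) for i, n in enumerate(
    "ten twenty thirty forty fifty sixty seventy eighty ninety".split())}

# After: "ten" joins the teens and the tens start at twenty, so "ten five" no longer
# matches the number_small rule, while ordinary fusion still works.
new_tens_map = {n: 10 * (i + 2) for i, n in enumerate(
    "twenty thirty forty fifty sixty seventy eighty ninety".split())}
print(new_tens_map["twenty"] + digits_map["one"])   # 21
```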
| {"golden_diff": "diff --git a/code/numbers.py b/code/numbers.py\n--- a/code/numbers.py\n+++ b/code/numbers.py\n@@ -5,14 +5,14 @@\n ctx = Context()\n \n digits = \"zero one two three four five six seven eight nine\".split()\n-teens = \"eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen\".split()\n-tens = \"ten twenty thirty forty fifty sixty seventy eighty ninety\".split()\n+teens = \"ten eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen\".split()\n+tens = \"twenty thirty forty fifty sixty seventy eighty ninety\".split()\n scales = \"hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion\".split()\n \n digits_map = {n: i for i, n in enumerate(digits)}\n digits_map[\"oh\"] = 0\n-teens_map = {n: i + 11 for i, n in enumerate(teens)}\n-tens_map = {n: 10 * (i + 1) for i, n in enumerate(tens)}\n+teens_map = {n: i + 10 for i, n in enumerate(teens)}\n+tens_map = {n: 10 * (i + 2) for i, n in enumerate(tens)}\n scales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}\n scales_map[\"hundred\"] = 100\n \n@@ -44,7 +44,7 @@\n while l:\n n = l.pop()\n # fuse tens onto digits, eg. \"twenty\", \"one\" -> 21\n- if n in tens_map and n != \"ten\" and l and digits_map.get(l[-1], 0) != 0:\n+ if n in tens_map and l and digits_map.get(l[-1], 0) != 0:\n d = l.pop()\n yield numbers_map[n] + numbers_map[d]\n # turn small number terms into corresponding numbers\n", "issue": "Number small can go larger than 100\nIf you say \"ten five\" number small will be 105.\nNumber small can go larger than 100\nIf you say \"ten five\" number small will be 105.\n", "before_files": [{"content": "from talon import Context, Module, actions\nfrom typing import List, Optional, Union, Iterator\n\nmod = Module()\nctx = Context()\n\ndigits = \"zero one two three four five six seven eight nine\".split()\nteens = \"eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen\".split()\ntens = \"ten twenty thirty forty fifty sixty seventy eighty ninety\".split()\nscales = \"hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion\".split()\n\ndigits_map = {n: i for i, n in enumerate(digits)}\ndigits_map[\"oh\"] = 0\nteens_map = {n: i + 11 for i, n in enumerate(teens)}\ntens_map = {n: 10 * (i + 1) for i, n in enumerate(tens)}\nscales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}\nscales_map[\"hundred\"] = 100\n\nnumbers_map = digits_map.copy()\nnumbers_map.update(teens_map)\nnumbers_map.update(tens_map)\nnumbers_map.update(scales_map)\n\ndef parse_number(l: List[str]) -> str:\n \"\"\"Parses a list of words into a number/digit string.\"\"\"\n l = list(scan_small_numbers(l))\n for scale in scales:\n l = parse_scale(scale, l)\n return \"\".join(str(n) for n in l)\n\ndef scan_small_numbers(l: List[str]) -> Iterator[Union[str,int]]:\n \"\"\"\n Takes a list of number words, yields a generator of mixed numbers & strings.\n Translates small number terms (<100) into corresponding numbers.\n Drops all occurrences of \"and\".\n Smashes digits onto tens words, eg. 
[\"twenty\", \"one\"] -> [21].\n But note that \"ten\" and \"zero\" are excluded, ie:\n [\"ten\", \"three\"] -> [10, 3]\n [\"fifty\", \"zero\"] -> [50, 0]\n Does nothing to scale words (\"hundred\", \"thousand\", \"million\", etc).\n \"\"\"\n # reversed so that repeated pop() visits in left-to-right order\n l = [x for x in reversed(l) if x != \"and\"]\n while l:\n n = l.pop()\n # fuse tens onto digits, eg. \"twenty\", \"one\" -> 21\n if n in tens_map and n != \"ten\" and l and digits_map.get(l[-1], 0) != 0:\n d = l.pop()\n yield numbers_map[n] + numbers_map[d]\n # turn small number terms into corresponding numbers\n elif n not in scales_map:\n yield numbers_map[n]\n else:\n yield n\n\ndef parse_scale(scale: str, l: List[Union[str,int]]) -> List[Union[str,int]]:\n \"\"\"Parses a list of mixed numbers & strings for occurrences of the following\n pattern:\n\n <multiplier> <scale> <remainder>\n\n where <scale> is a scale word like \"hundred\", \"thousand\", \"million\", etc and\n multiplier and remainder are numbers or strings of numbers of the\n appropriate size. For example:\n\n parse_scale(\"hundred\", [1, \"hundred\", 2]) -> [102]\n parse_scale(\"thousand\", [12, \"thousand\", 3, 45]) -> [12345]\n\n We assume that all scales of lower magnitude have already been parsed; don't\n call parse_scale(\"thousand\") until you've called parse_scale(\"hundred\").\n \"\"\"\n scale_value = scales_map[scale]\n scale_digits = len(str(scale_value))\n\n # Split the list on the desired scale word, then parse from left to right.\n left, *splits = split_list(scale, l)\n for right in splits:\n # (1) Figure out the multiplier by looking to the left of the scale\n # word. We ignore non-integers because they are scale words that we\n # haven't processed yet; this strategy means that \"thousand hundred\"\n # gets parsed as 1,100 instead of 100,000, but \"hundred thousand\" is\n # parsed correctly as 100,000.\n before = 1 # default multiplier\n if left and isinstance(left[-1], int) and left[-1] != 0:\n before = left.pop()\n\n # (2) Absorb numbers to the right, eg. in [1, \"thousand\", 1, 26], \"1\n # thousand\" absorbs [\"1\", \"26\"] to make 1,126. 
We pull numbers off\n # `right` until we fill up the desired number of digits.\n after = \"\"\n while right and isinstance(right[0], int):\n next = after + str(right[0])\n if len(next) >= scale_digits: break\n after = next\n right.pop(0)\n after = int(after) if after else 0\n\n # (3) Push the parsed number into place, append whatever was left\n # unparsed, and continue.\n left.append(before * scale_value + after)\n left.extend(right)\n\n return left\n\ndef split_list(value, l: list) -> Iterator:\n \"\"\"Splits a list by occurrences of a given value.\"\"\"\n start = 0\n while True:\n try: i = l.index(value, start)\n except ValueError: break\n yield l[start:i]\n start = i+1\n yield l[start:]\n\n\f\n# # ---------- TESTS (uncomment to run) ----------\n# def test_number(expected, string):\n# print('testing:', string)\n# l = list(scan_small_numbers(string.split()))\n# print(\" scan --->\", l)\n# for scale in scales:\n# old = l\n# l = parse_scale(scale, l)\n# if scale in old: print(\" parse -->\", l)\n# else: assert old == l, \"parse_scale should do nothing if the scale does not occur in the list\"\n# result = \"\".join(str(n) for n in l)\n# assert result == parse_number(string.split())\n# assert str(expected) == result, f\"parsing {string!r}, expected {expected}, got {result}\"\n\n# test_number(105000, \"one hundred and five thousand\")\n# test_number(1000000, \"one thousand thousand\")\n# test_number(1501000, \"one million five hundred one thousand\")\n# test_number(1501106, \"one million five hundred and one thousand one hundred and six\")\n# test_number(123, \"one two three\")\n# test_number(123, \"one twenty three\")\n# test_number(104, \"ten four\") # borderline, but valid in some dialects\n# test_number(1066, \"ten sixty six\") # a common way of saying years\n# test_number(1906, \"nineteen oh six\") # year\n# test_number(2001, \"twenty oh one\") # year\n# test_number(2020, \"twenty twenty\")\n# test_number(1001, \"one thousand one\")\n# test_number(1010, \"one thousand ten\")\n# test_number(123456, \"one hundred and twenty three thousand and four hundred and fifty six\")\n# test_number(123456, \"one twenty three thousand four fifty six\")\n\n# ## failing (and somewhat debatable) tests from old numbers.py\n# #test_number(10000011, \"one million one one\")\n# #test_number(100001010, \"one million ten ten\")\n# #test_number(1050006000, \"one hundred thousand and five thousand and six thousand\")\n\n\f\n# ---------- CAPTURES ----------\nalt_digits = \"(\" + (\"|\".join(digits_map.keys())) + \")\"\nalt_teens = \"(\" + (\"|\".join(teens_map.keys())) + \")\"\nalt_tens = \"(\" + (\"|\".join(tens_map.keys())) + \")\"\nalt_scales = \"(\" + (\"|\".join(scales_map.keys())) + \")\"\nnumber_word = \"(\" + \"|\".join(numbers_map.keys()) + \")\"\n\n# TODO: allow things like \"double eight\" for 88\[email protected](\"digit_string\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens})+\")\ndef digit_string(m) -> str: return parse_number(list(m))\n\[email protected](\"digits\", rule=\"<digit_string>\")\ndef digits(m) -> int:\n \"\"\"Parses a phrase representing a digit sequence, returning it as an integer.\"\"\"\n return int(m.digit_string)\n\[email protected](rule=f\"{number_word}+ (and {number_word}+)*\")\ndef number_string(m) -> str:\n \"\"\"Parses a number phrase, returning that number as a string.\"\"\"\n return parse_number(list(m))\n\[email protected](\"number\", rule=\"<user.number_string>\")\ndef number(m) -> int:\n \"\"\"Parses a number phrase, returning it as an integer.\"\"\"\n return 
int(m.number_string)\n\[email protected](\"number_signed\", rule=f\"[negative|minus] <number>\")\ndef number_signed(m):\n number = m[-1]\n return -number if (m[0] in [\"negative\", \"minus\"]) else number\n\[email protected](\n \"number_small\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens} [{alt_digits}])\"\n)\ndef number_small(m): return int(parse_number(list(m)))\n", "path": "code/numbers.py"}], "after_files": [{"content": "from talon import Context, Module, actions\nfrom typing import List, Optional, Union, Iterator\n\nmod = Module()\nctx = Context()\n\ndigits = \"zero one two three four five six seven eight nine\".split()\nteens = \"ten eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen\".split()\ntens = \"twenty thirty forty fifty sixty seventy eighty ninety\".split()\nscales = \"hundred thousand million billion trillion quadrillion quintillion sextillion septillion octillion nonillion decillion\".split()\n\ndigits_map = {n: i for i, n in enumerate(digits)}\ndigits_map[\"oh\"] = 0\nteens_map = {n: i + 10 for i, n in enumerate(teens)}\ntens_map = {n: 10 * (i + 2) for i, n in enumerate(tens)}\nscales_map = {n: 10 ** (3 * (i+1)) for i, n in enumerate(scales[1:])}\nscales_map[\"hundred\"] = 100\n\nnumbers_map = digits_map.copy()\nnumbers_map.update(teens_map)\nnumbers_map.update(tens_map)\nnumbers_map.update(scales_map)\n\ndef parse_number(l: List[str]) -> str:\n \"\"\"Parses a list of words into a number/digit string.\"\"\"\n l = list(scan_small_numbers(l))\n for scale in scales:\n l = parse_scale(scale, l)\n return \"\".join(str(n) for n in l)\n\ndef scan_small_numbers(l: List[str]) -> Iterator[Union[str,int]]:\n \"\"\"\n Takes a list of number words, yields a generator of mixed numbers & strings.\n Translates small number terms (<100) into corresponding numbers.\n Drops all occurrences of \"and\".\n Smashes digits onto tens words, eg. [\"twenty\", \"one\"] -> [21].\n But note that \"ten\" and \"zero\" are excluded, ie:\n [\"ten\", \"three\"] -> [10, 3]\n [\"fifty\", \"zero\"] -> [50, 0]\n Does nothing to scale words (\"hundred\", \"thousand\", \"million\", etc).\n \"\"\"\n # reversed so that repeated pop() visits in left-to-right order\n l = [x for x in reversed(l) if x != \"and\"]\n while l:\n n = l.pop()\n # fuse tens onto digits, eg. \"twenty\", \"one\" -> 21\n if n in tens_map and l and digits_map.get(l[-1], 0) != 0:\n d = l.pop()\n yield numbers_map[n] + numbers_map[d]\n # turn small number terms into corresponding numbers\n elif n not in scales_map:\n yield numbers_map[n]\n else:\n yield n\n\ndef parse_scale(scale: str, l: List[Union[str,int]]) -> List[Union[str,int]]:\n \"\"\"Parses a list of mixed numbers & strings for occurrences of the following\n pattern:\n\n <multiplier> <scale> <remainder>\n\n where <scale> is a scale word like \"hundred\", \"thousand\", \"million\", etc and\n multiplier and remainder are numbers or strings of numbers of the\n appropriate size. 
For example:\n\n parse_scale(\"hundred\", [1, \"hundred\", 2]) -> [102]\n parse_scale(\"thousand\", [12, \"thousand\", 3, 45]) -> [12345]\n\n We assume that all scales of lower magnitude have already been parsed; don't\n call parse_scale(\"thousand\") until you've called parse_scale(\"hundred\").\n \"\"\"\n scale_value = scales_map[scale]\n scale_digits = len(str(scale_value))\n\n # Split the list on the desired scale word, then parse from left to right.\n left, *splits = split_list(scale, l)\n for right in splits:\n # (1) Figure out the multiplier by looking to the left of the scale\n # word. We ignore non-integers because they are scale words that we\n # haven't processed yet; this strategy means that \"thousand hundred\"\n # gets parsed as 1,100 instead of 100,000, but \"hundred thousand\" is\n # parsed correctly as 100,000.\n before = 1 # default multiplier\n if left and isinstance(left[-1], int) and left[-1] != 0:\n before = left.pop()\n\n # (2) Absorb numbers to the right, eg. in [1, \"thousand\", 1, 26], \"1\n # thousand\" absorbs [\"1\", \"26\"] to make 1,126. We pull numbers off\n # `right` until we fill up the desired number of digits.\n after = \"\"\n while right and isinstance(right[0], int):\n next = after + str(right[0])\n if len(next) >= scale_digits: break\n after = next\n right.pop(0)\n after = int(after) if after else 0\n\n # (3) Push the parsed number into place, append whatever was left\n # unparsed, and continue.\n left.append(before * scale_value + after)\n left.extend(right)\n\n return left\n\ndef split_list(value, l: list) -> Iterator:\n \"\"\"Splits a list by occurrences of a given value.\"\"\"\n start = 0\n while True:\n try: i = l.index(value, start)\n except ValueError: break\n yield l[start:i]\n start = i+1\n yield l[start:]\n\n\f\n# # ---------- TESTS (uncomment to run) ----------\n# def test_number(expected, string):\n# print('testing:', string)\n# l = list(scan_small_numbers(string.split()))\n# print(\" scan --->\", l)\n# for scale in scales:\n# old = l\n# l = parse_scale(scale, l)\n# if scale in old: print(\" parse -->\", l)\n# else: assert old == l, \"parse_scale should do nothing if the scale does not occur in the list\"\n# result = \"\".join(str(n) for n in l)\n# assert result == parse_number(string.split())\n# assert str(expected) == result, f\"parsing {string!r}, expected {expected}, got {result}\"\n\n# test_number(105000, \"one hundred and five thousand\")\n# test_number(1000000, \"one thousand thousand\")\n# test_number(1501000, \"one million five hundred one thousand\")\n# test_number(1501106, \"one million five hundred and one thousand one hundred and six\")\n# test_number(123, \"one two three\")\n# test_number(123, \"one twenty three\")\n# test_number(104, \"ten four\") # borderline, but valid in some dialects\n# test_number(1066, \"ten sixty six\") # a common way of saying years\n# test_number(1906, \"nineteen oh six\") # year\n# test_number(2001, \"twenty oh one\") # year\n# test_number(2020, \"twenty twenty\")\n# test_number(1001, \"one thousand one\")\n# test_number(1010, \"one thousand ten\")\n# test_number(123456, \"one hundred and twenty three thousand and four hundred and fifty six\")\n# test_number(123456, \"one twenty three thousand four fifty six\")\n\n# ## failing (and somewhat debatable) tests from old numbers.py\n# #test_number(10000011, \"one million one one\")\n# #test_number(100001010, \"one million ten ten\")\n# #test_number(1050006000, \"one hundred thousand and five thousand and six thousand\")\n\n\f\n# ---------- CAPTURES 
----------\nalt_digits = \"(\" + (\"|\".join(digits_map.keys())) + \")\"\nalt_teens = \"(\" + (\"|\".join(teens_map.keys())) + \")\"\nalt_tens = \"(\" + (\"|\".join(tens_map.keys())) + \")\"\nalt_scales = \"(\" + (\"|\".join(scales_map.keys())) + \")\"\nnumber_word = \"(\" + \"|\".join(numbers_map.keys()) + \")\"\n\n# TODO: allow things like \"double eight\" for 88\[email protected](\"digit_string\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens})+\")\ndef digit_string(m) -> str: return parse_number(list(m))\n\[email protected](\"digits\", rule=\"<digit_string>\")\ndef digits(m) -> int:\n \"\"\"Parses a phrase representing a digit sequence, returning it as an integer.\"\"\"\n return int(m.digit_string)\n\[email protected](rule=f\"{number_word}+ (and {number_word}+)*\")\ndef number_string(m) -> str:\n \"\"\"Parses a number phrase, returning that number as a string.\"\"\"\n return parse_number(list(m))\n\[email protected](\"number\", rule=\"<user.number_string>\")\ndef number(m) -> int:\n \"\"\"Parses a number phrase, returning it as an integer.\"\"\"\n return int(m.number_string)\n\[email protected](\"number_signed\", rule=f\"[negative|minus] <number>\")\ndef number_signed(m):\n number = m[-1]\n return -number if (m[0] in [\"negative\", \"minus\"]) else number\n\[email protected](\n \"number_small\", rule=f\"({alt_digits} | {alt_teens} | {alt_tens} [{alt_digits}])\"\n)\ndef number_small(m): return int(parse_number(list(m)))\n", "path": "code/numbers.py"}]} | 2,895 | 438 |
gh_patches_debug_5419 | rasdani/github-patches | git_diff | scrapy__scrapy-475 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ability to not send specific headers in HTTP requests
Some web servers behave differently when they receive or don't receive specific headers.

For example FeedBurner (http://feeds.feedburner.com/someblog) sends out XML RSS feeds **only if you do not set the "Referer" header.**
The idea would be to use the `headers` dict with some keys with a `None` value, and skip these headers when sending the HTTP request.
Currently, for the "Referer" example:
- `headers={"Referer": None}` sends "Referer: None"
- `headers={"Referer": ""}` sends "Referer: " (which works for the FeedBurner case, but is not satisfactory)
- disable `RefererMiddleware` but that feels a bit heavy
(for this FeedBurner thing, apparently adding `?format=xml` also does the trick)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/http/headers.py`
Content:
```
1 from w3lib.http import headers_dict_to_raw
2 from scrapy.utils.datatypes import CaselessDict
3
4
5 class Headers(CaselessDict):
6 """Case insensitive http headers dictionary"""
7
8 def __init__(self, seq=None, encoding='utf-8'):
9 self.encoding = encoding
10 super(Headers, self).__init__(seq)
11
12 def normkey(self, key):
13 """Headers must not be unicode"""
14 if isinstance(key, unicode):
15 return key.title().encode(self.encoding)
16 return key.title()
17
18 def normvalue(self, value):
19 """Headers must not be unicode"""
20 if not hasattr(value, '__iter__'):
21 value = [value]
22 return [x.encode(self.encoding) if isinstance(x, unicode) else x \
23 for x in value]
24
25 def __getitem__(self, key):
26 try:
27 return super(Headers, self).__getitem__(key)[-1]
28 except IndexError:
29 return None
30
31 def get(self, key, def_val=None):
32 try:
33 return super(Headers, self).get(key, def_val)[-1]
34 except IndexError:
35 return None
36
37 def getlist(self, key, def_val=None):
38 try:
39 return super(Headers, self).__getitem__(key)
40 except KeyError:
41 if def_val is not None:
42 return self.normvalue(def_val)
43 return []
44
45 def setlist(self, key, list_):
46 self[key] = list_
47
48 def setlistdefault(self, key, default_list=()):
49 return self.setdefault(key, default_list)
50
51 def appendlist(self, key, value):
52 lst = self.getlist(key)
53 lst.extend(self.normvalue(value))
54 self[key] = lst
55
56 def items(self):
57 return list(self.iteritems())
58
59 def iteritems(self):
60 return ((k, self.getlist(k)) for k in self.keys())
61
62 def values(self):
63 return [self[k] for k in self.keys()]
64
65 def to_string(self):
66 return headers_dict_to_raw(self)
67
68 def __copy__(self):
69 return self.__class__(self)
70 copy = __copy__
71
72
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/http/headers.py b/scrapy/http/headers.py
--- a/scrapy/http/headers.py
+++ b/scrapy/http/headers.py
@@ -17,7 +17,9 @@
def normvalue(self, value):
"""Headers must not be unicode"""
- if not hasattr(value, '__iter__'):
+ if value is None:
+ value = []
+ elif not hasattr(value, '__iter__'):
value = [value]
return [x.encode(self.encoding) if isinstance(x, unicode) else x \
for x in value]
| {"golden_diff": "diff --git a/scrapy/http/headers.py b/scrapy/http/headers.py\n--- a/scrapy/http/headers.py\n+++ b/scrapy/http/headers.py\n@@ -17,7 +17,9 @@\n \n def normvalue(self, value):\n \"\"\"Headers must not be unicode\"\"\"\n- if not hasattr(value, '__iter__'):\n+ if value is None:\n+ value = []\n+ elif not hasattr(value, '__iter__'):\n value = [value]\n return [x.encode(self.encoding) if isinstance(x, unicode) else x \\\n for x in value]\n", "issue": "Ability to not send specific headers in HTTP requests\nSome web servers behave differently when they are receive or don't receive specific headers.\n\nFor example FeedBurner (http://feeds.feedburner.com/someblog) sends out XML RSS feeds **only is you do not set the \"Referer\" header.**\n\nThe idea would be to use the `headers` dict with some keys with a `None` value, and skip these headers when sending the HTTP request.\n\nCurrently, for the \"Referer\" example:\n- `headers={\"Referer\": None}` sends \"Referer: None\"\n- `headers={\"Referer\": \"\"}` sends \"Referer: \" (which works for the FeedBurner case, but is not satisfactory)\n- disable `RefererMiddleware` but that feels a bit heavy\n\n(for this FeedBurner thing, apparently adding `?format=xml` also does the trick)\n\n", "before_files": [{"content": "from w3lib.http import headers_dict_to_raw\nfrom scrapy.utils.datatypes import CaselessDict\n\n\nclass Headers(CaselessDict):\n \"\"\"Case insensitive http headers dictionary\"\"\"\n\n def __init__(self, seq=None, encoding='utf-8'):\n self.encoding = encoding\n super(Headers, self).__init__(seq)\n\n def normkey(self, key):\n \"\"\"Headers must not be unicode\"\"\"\n if isinstance(key, unicode):\n return key.title().encode(self.encoding)\n return key.title()\n\n def normvalue(self, value):\n \"\"\"Headers must not be unicode\"\"\"\n if not hasattr(value, '__iter__'):\n value = [value]\n return [x.encode(self.encoding) if isinstance(x, unicode) else x \\\n for x in value]\n\n def __getitem__(self, key):\n try:\n return super(Headers, self).__getitem__(key)[-1]\n except IndexError:\n return None\n\n def get(self, key, def_val=None):\n try:\n return super(Headers, self).get(key, def_val)[-1]\n except IndexError:\n return None\n\n def getlist(self, key, def_val=None):\n try:\n return super(Headers, self).__getitem__(key)\n except KeyError:\n if def_val is not None:\n return self.normvalue(def_val)\n return []\n\n def setlist(self, key, list_):\n self[key] = list_\n\n def setlistdefault(self, key, default_list=()):\n return self.setdefault(key, default_list)\n\n def appendlist(self, key, value):\n lst = self.getlist(key)\n lst.extend(self.normvalue(value))\n self[key] = lst\n\n def items(self):\n return list(self.iteritems())\n\n def iteritems(self):\n return ((k, self.getlist(k)) for k in self.keys())\n\n def values(self):\n return [self[k] for k in self.keys()]\n\n def to_string(self):\n return headers_dict_to_raw(self)\n\n def __copy__(self):\n return self.__class__(self)\n copy = __copy__\n\n\n", "path": "scrapy/http/headers.py"}], "after_files": [{"content": "from w3lib.http import headers_dict_to_raw\nfrom scrapy.utils.datatypes import CaselessDict\n\n\nclass Headers(CaselessDict):\n \"\"\"Case insensitive http headers dictionary\"\"\"\n\n def __init__(self, seq=None, encoding='utf-8'):\n self.encoding = encoding\n super(Headers, self).__init__(seq)\n\n def normkey(self, key):\n \"\"\"Headers must not be unicode\"\"\"\n if isinstance(key, unicode):\n return key.title().encode(self.encoding)\n return key.title()\n\n def 
normvalue(self, value):\n \"\"\"Headers must not be unicode\"\"\"\n if value is None:\n value = []\n elif not hasattr(value, '__iter__'):\n value = [value]\n return [x.encode(self.encoding) if isinstance(x, unicode) else x \\\n for x in value]\n\n def __getitem__(self, key):\n try:\n return super(Headers, self).__getitem__(key)[-1]\n except IndexError:\n return None\n\n def get(self, key, def_val=None):\n try:\n return super(Headers, self).get(key, def_val)[-1]\n except IndexError:\n return None\n\n def getlist(self, key, def_val=None):\n try:\n return super(Headers, self).__getitem__(key)\n except KeyError:\n if def_val is not None:\n return self.normvalue(def_val)\n return []\n\n def setlist(self, key, list_):\n self[key] = list_\n\n def setlistdefault(self, key, default_list=()):\n return self.setdefault(key, default_list)\n\n def appendlist(self, key, value):\n lst = self.getlist(key)\n lst.extend(self.normvalue(value))\n self[key] = lst\n\n def items(self):\n return list(self.iteritems())\n\n def iteritems(self):\n return ((k, self.getlist(k)) for k in self.keys())\n\n def values(self):\n return [self[k] for k in self.keys()]\n\n def to_string(self):\n return headers_dict_to_raw(self)\n\n def __copy__(self):\n return self.__class__(self)\n copy = __copy__\n\n\n", "path": "scrapy/http/headers.py"}]} | 1,040 | 128 |
gh_patches_debug_6813 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-436 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make master exit when there are no tasks left.
Currently, master exits when there are no tasks left AND all workers are gone. It might be left hanging if a worker got preempted.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/elasticdl/master/main.py`
Content:
```
1 import logging
2 import time
3 import argparse
4 import os
5
6 import grpc
7 import tensorflow as tf
8
9 tf.enable_eager_execution()
10
11 from concurrent import futures
12 from recordio import File
13 from elasticdl.proto import master_pb2_grpc
14 from elasticdl.master.servicer import MasterServicer
15 from elasticdl.master.task_queue import _TaskQueue
16 from elasticdl.master.k8s_worker_manager import WorkerManager
17 from elasticdl.common.model_helper import load_user_model, build_model
18
19
20 def _make_task_queue(data_dir, record_per_task, num_epoch):
21 f_records = {}
22 for f in os.listdir(data_dir):
23 p = os.path.join(data_dir, f)
24 with File(p, "r") as rio:
25 f_records[p] = rio.count()
26 return _TaskQueue(f_records, record_per_task, num_epoch)
27
28
29 def _parse_args():
30 parser = argparse.ArgumentParser(description="ElasticDL Master")
31 parser.add_argument(
32 "--model_file",
33 help="Full file path of user defined neural model",
34 required=True,
35 )
36 parser.add_argument(
37 "--train_data_dir",
38 help="Training data directory. Files should be in RecordIO format",
39 required=True,
40 )
41 parser.add_argument("--record_per_task", type=int, required=True)
42 parser.add_argument("--num_epoch", type=int, required=True)
43 parser.add_argument(
44 "--grads_to_wait",
45 type=int,
46 help="Number of gradients to wait before updating model",
47 required=True,
48 )
49 parser.add_argument(
50 "--minibatch_size",
51 type=int,
52 help="Minibatch size used by workers to compute gradients",
53 required=True,
54 )
55 parser.add_argument(
56 "--num_worker",
57 type=int,
58 help="the number of workers used in training",
59 default=0,
60 )
61 parser.add_argument(
62 "--worker_cpu_request",
63 help="the minimal cpu required by worker in training",
64 default="1000m",
65 )
66 parser.add_argument(
67 "--worker_cpu_limit",
68 help="the maximal cpu used by worker in training",
69 default="1000m",
70 )
71 parser.add_argument(
72 "--worker_memory_request",
73 help="the minimal memory required by worker in training",
74 default="4096Mi",
75 )
76 parser.add_argument(
77 "--worker_memory_limit",
78 help="the maximal memory used by worker in training",
79 default="4096Mi",
80 )
81 parser.add_argument(
82 "--worker_pod_priority",
83 help="the requested priority of worker pod")
84 parser.add_argument(
85 "--worker_image", help="docker image for worker", default=None
86 )
87 parser.add_argument("--job_name", help="job name", required=True)
88 parser.add_argument(
89 "--codec_type",
90 default="bytes",
91 choices=["tf_example", "bytes"],
92 help="Type of codec(tf_example or bytes)",
93 )
94 return parser.parse_args()
95
96
97 def main():
98 # TODO: pass port via flags.
99 PORT = 50001
100 logger = logging.getLogger("master")
101 args = _parse_args()
102 task_q = _make_task_queue(
103 args.train_data_dir, args.record_per_task, args.num_epoch
104 )
105 model_module = load_user_model(args.model_file)
106 model_inst = model_module.model
107 build_model(model_inst, model_module.feature_columns())
108 optimizer = model_module.optimizer()
109
110 server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))
111 master_pb2_grpc.add_MasterServicer_to_server(
112 MasterServicer(
113 logger,
114 args.grads_to_wait,
115 args.minibatch_size,
116 optimizer,
117 task_q,
118 init_var=model_inst.trainable_variables,
119 ),
120 server,
121 )
122 server.add_insecure_port("[::]:{}".format(PORT))
123 server.start()
124 logger.warning("Server started at port: %d", PORT)
125
126 if args.num_worker:
127 master_addr = "%s:%d" % (os.getenv("MY_POD_IP", "localhost"), PORT)
128 worker_command = ["python"]
129 worker_args = [
130 "-m",
131 "elasticdl.worker.main",
132 "--model_file",
133 args.model_file,
134 "--master_addr",
135 master_addr,
136 "--codec_type",
137 args.codec_type
138 ]
139
140 worker_manager = WorkerManager(
141 job_name=args.job_name,
142 worker_image=args.worker_image,
143 command=worker_command,
144 args=worker_args,
145 namespace="default",
146 num_worker=args.num_worker,
147 cpu_request=args.worker_cpu_request,
148 cpu_limit=args.worker_cpu_limit,
149 memory_request=args.worker_memory_request,
150 memory_limit=args.worker_memory_limit,
151 pod_priority=args.worker_pod_priority,
152 )
153 worker_manager.start_workers(restart_policy="Never")
154
155 try:
156 while True:
157 if task_q.finished():
158 break
159 time.sleep(30)
160 except KeyboardInterrupt:
161 logger.warning("Server stopping")
162
163 if args.num_worker:
164 # TODO: worker_manager.remove_workers supports synchronized call
165 worker_manager.remove_workers()
166 # wait for worker pod to be deleted
167 max_check_num = 10
168 for _ in range(max_check_num):
169 time.sleep(3)
170 counters = worker_manager.get_counters()
171 if not counters:
172 break
173 server.stop(0)
174
175
176 if __name__ == "__main__":
177 logging.basicConfig()
178 main()
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/elasticdl/master/main.py b/elasticdl/python/elasticdl/master/main.py
--- a/elasticdl/python/elasticdl/master/main.py
+++ b/elasticdl/python/elasticdl/master/main.py
@@ -163,13 +163,7 @@
if args.num_worker:
# TODO: worker_manager.remove_workers supports synchronized call
worker_manager.remove_workers()
- # wait for worker pod to be deleted
- max_check_num = 10
- for _ in range(max_check_num):
- time.sleep(3)
- counters = worker_manager.get_counters()
- if not counters:
- break
+
server.stop(0)
| {"golden_diff": "diff --git a/elasticdl/python/elasticdl/master/main.py b/elasticdl/python/elasticdl/master/main.py\n--- a/elasticdl/python/elasticdl/master/main.py\n+++ b/elasticdl/python/elasticdl/master/main.py\n@@ -163,13 +163,7 @@\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n- # wait for worker pod to be deleted\n- max_check_num = 10\n- for _ in range(max_check_num):\n- time.sleep(3)\n- counters = worker_manager.get_counters()\n- if not counters:\n- break\n+\n server.stop(0)\n", "issue": "Make master exist when there are no tasks left.\nCurrently, master exists when there are no tasks left AND all workers are gone. It might left hanging if a worker got preempted.\n", "before_files": [{"content": "import logging\nimport time\nimport argparse\nimport os\n\nimport grpc\nimport tensorflow as tf\n\ntf.enable_eager_execution()\n\nfrom concurrent import futures\nfrom recordio import File\nfrom elasticdl.proto import master_pb2_grpc\nfrom elasticdl.master.servicer import MasterServicer\nfrom elasticdl.master.task_queue import _TaskQueue\nfrom elasticdl.master.k8s_worker_manager import WorkerManager\nfrom elasticdl.common.model_helper import load_user_model, build_model\n\n\ndef _make_task_queue(data_dir, record_per_task, num_epoch):\n f_records = {}\n for f in os.listdir(data_dir):\n p = os.path.join(data_dir, f)\n with File(p, \"r\") as rio:\n f_records[p] = rio.count()\n return _TaskQueue(f_records, record_per_task, num_epoch)\n\n\ndef _parse_args():\n parser = argparse.ArgumentParser(description=\"ElasticDL Master\")\n parser.add_argument(\n \"--model_file\",\n help=\"Full file path of user defined neural model\",\n required=True,\n )\n parser.add_argument(\n \"--train_data_dir\",\n help=\"Training data directory. 
Files should be in RecordIO format\",\n required=True,\n )\n parser.add_argument(\"--record_per_task\", type=int, required=True)\n parser.add_argument(\"--num_epoch\", type=int, required=True)\n parser.add_argument(\n \"--grads_to_wait\",\n type=int,\n help=\"Number of gradients to wait before updating model\",\n required=True,\n )\n parser.add_argument(\n \"--minibatch_size\",\n type=int,\n help=\"Minibatch size used by workers to compute gradients\",\n required=True,\n )\n parser.add_argument(\n \"--num_worker\",\n type=int,\n help=\"the number of workers used in training\",\n default=0,\n )\n parser.add_argument(\n \"--worker_cpu_request\",\n help=\"the minimal cpu required by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_cpu_limit\",\n help=\"the maximal cpu used by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_memory_request\",\n help=\"the minimal memory required by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_memory_limit\",\n help=\"the maximal memory used by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_pod_priority\",\n help=\"the requested priority of worker pod\")\n parser.add_argument(\n \"--worker_image\", help=\"docker image for worker\", default=None\n )\n parser.add_argument(\"--job_name\", help=\"job name\", required=True)\n parser.add_argument(\n \"--codec_type\",\n default=\"bytes\",\n choices=[\"tf_example\", \"bytes\"],\n help=\"Type of codec(tf_example or bytes)\",\n )\n return parser.parse_args()\n\n\ndef main():\n # TODO: pass port via flags.\n PORT = 50001\n logger = logging.getLogger(\"master\")\n args = _parse_args()\n task_q = _make_task_queue(\n args.train_data_dir, args.record_per_task, args.num_epoch\n )\n model_module = load_user_model(args.model_file)\n model_inst = model_module.model\n build_model(model_inst, model_module.feature_columns())\n optimizer = model_module.optimizer()\n\n server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))\n master_pb2_grpc.add_MasterServicer_to_server(\n MasterServicer(\n logger,\n args.grads_to_wait,\n args.minibatch_size,\n optimizer,\n task_q,\n init_var=model_inst.trainable_variables,\n ),\n server,\n )\n server.add_insecure_port(\"[::]:{}\".format(PORT))\n server.start()\n logger.warning(\"Server started at port: %d\", PORT)\n\n if args.num_worker:\n master_addr = \"%s:%d\" % (os.getenv(\"MY_POD_IP\", \"localhost\"), PORT)\n worker_command = [\"python\"]\n worker_args = [\n \"-m\",\n \"elasticdl.worker.main\",\n \"--model_file\",\n args.model_file,\n \"--master_addr\",\n master_addr,\n \"--codec_type\",\n args.codec_type\n ]\n\n worker_manager = WorkerManager(\n job_name=args.job_name,\n worker_image=args.worker_image,\n command=worker_command,\n args=worker_args,\n namespace=\"default\",\n num_worker=args.num_worker,\n cpu_request=args.worker_cpu_request,\n cpu_limit=args.worker_cpu_limit,\n memory_request=args.worker_memory_request,\n memory_limit=args.worker_memory_limit,\n pod_priority=args.worker_pod_priority,\n )\n worker_manager.start_workers(restart_policy=\"Never\")\n\n try:\n while True:\n if task_q.finished():\n break\n time.sleep(30)\n except KeyboardInterrupt:\n logger.warning(\"Server stopping\")\n\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n # wait for worker pod to be deleted\n max_check_num = 10\n for _ in range(max_check_num):\n time.sleep(3)\n counters = 
worker_manager.get_counters()\n if not counters:\n break\n server.stop(0)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig()\n main()\n", "path": "elasticdl/python/elasticdl/master/main.py"}], "after_files": [{"content": "import logging\nimport time\nimport argparse\nimport os\n\nimport grpc\nimport tensorflow as tf\n\ntf.enable_eager_execution()\n\nfrom concurrent import futures\nfrom recordio import File\nfrom elasticdl.proto import master_pb2_grpc\nfrom elasticdl.master.servicer import MasterServicer\nfrom elasticdl.master.task_queue import _TaskQueue\nfrom elasticdl.master.k8s_worker_manager import WorkerManager\nfrom elasticdl.common.model_helper import load_user_model, build_model\n\n\ndef _make_task_queue(data_dir, record_per_task, num_epoch):\n f_records = {}\n for f in os.listdir(data_dir):\n p = os.path.join(data_dir, f)\n with File(p, \"r\") as rio:\n f_records[p] = rio.count()\n return _TaskQueue(f_records, record_per_task, num_epoch)\n\n\ndef _parse_args():\n parser = argparse.ArgumentParser(description=\"ElasticDL Master\")\n parser.add_argument(\n \"--model_file\",\n help=\"Full file path of user defined neural model\",\n required=True,\n )\n parser.add_argument(\n \"--train_data_dir\",\n help=\"Training data directory. Files should be in RecordIO format\",\n required=True,\n )\n parser.add_argument(\"--record_per_task\", type=int, required=True)\n parser.add_argument(\"--num_epoch\", type=int, required=True)\n parser.add_argument(\n \"--grads_to_wait\",\n type=int,\n help=\"Number of gradients to wait before updating model\",\n required=True,\n )\n parser.add_argument(\n \"--minibatch_size\",\n type=int,\n help=\"Minibatch size used by workers to compute gradients\",\n required=True,\n )\n parser.add_argument(\n \"--num_worker\",\n type=int,\n help=\"the number of workers used in training\",\n default=0,\n )\n parser.add_argument(\n \"--worker_cpu_request\",\n help=\"the minimal cpu required by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_cpu_limit\",\n help=\"the maximal cpu used by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_memory_request\",\n help=\"the minimal memory required by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_memory_limit\",\n help=\"the maximal memory used by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_pod_priority\",\n help=\"the requested priority of worker pod\")\n parser.add_argument(\n \"--worker_image\", help=\"docker image for worker\", default=None\n )\n parser.add_argument(\"--job_name\", help=\"job name\", required=True)\n parser.add_argument(\n \"--codec_type\",\n default=\"bytes\",\n choices=[\"tf_example\", \"bytes\"],\n help=\"Type of codec(tf_example or bytes)\",\n )\n return parser.parse_args()\n\n\ndef main():\n # TODO: pass port via flags.\n PORT = 50001\n logger = logging.getLogger(\"master\")\n args = _parse_args()\n task_q = _make_task_queue(\n args.train_data_dir, args.record_per_task, args.num_epoch\n )\n model_module = load_user_model(args.model_file)\n model_inst = model_module.model\n build_model(model_inst, model_module.feature_columns())\n optimizer = model_module.optimizer()\n\n server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))\n master_pb2_grpc.add_MasterServicer_to_server(\n MasterServicer(\n logger,\n args.grads_to_wait,\n args.minibatch_size,\n optimizer,\n task_q,\n init_var=model_inst.trainable_variables,\n ),\n server,\n )\n 
server.add_insecure_port(\"[::]:{}\".format(PORT))\n server.start()\n logger.warning(\"Server started at port: %d\", PORT)\n\n if args.num_worker:\n master_addr = \"%s:%d\" % (os.getenv(\"MY_POD_IP\", \"localhost\"), PORT)\n worker_command = [\"python\"]\n worker_args = [\n \"-m\",\n \"elasticdl.worker.main\",\n \"--model_file\",\n args.model_file,\n \"--master_addr\",\n master_addr,\n \"--codec_type\",\n args.codec_type\n ]\n\n worker_manager = WorkerManager(\n job_name=args.job_name,\n worker_image=args.worker_image,\n command=worker_command,\n args=worker_args,\n namespace=\"default\",\n num_worker=args.num_worker,\n cpu_request=args.worker_cpu_request,\n cpu_limit=args.worker_cpu_limit,\n memory_request=args.worker_memory_request,\n memory_limit=args.worker_memory_limit,\n pod_priority=args.worker_pod_priority,\n )\n worker_manager.start_workers(restart_policy=\"Never\")\n\n try:\n while True:\n if task_q.finished():\n break\n time.sleep(30)\n except KeyboardInterrupt:\n logger.warning(\"Server stopping\")\n\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n\n server.stop(0)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig()\n main()\n", "path": "elasticdl/python/elasticdl/master/main.py"}]} | 1,879 | 155 |
gh_patches_debug_36915 | rasdani/github-patches | git_diff | scikit-image__scikit-image-6306 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Gallery example 'Using Polar and Log-Polar Transformations for Registration' needs fixing
## Description
The gallery example for Using Polar and Log-Polar Transformations for Registration seems to be broken as of 0.19.x. It was working in 0.18.x.
## Way to reproduce
Unsure if this is specific to the production doc build or an issue in the code. I am raising this issue to get it on the board before starting my testing.
## Version information
See description. I'll report back in this thread when I know more.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/examples/registration/plot_register_rotation.py`
Content:
```
1 r"""
2 ==========================================================
3 Using Polar and Log-Polar Transformations for Registration
4 ==========================================================
5
6 Phase correlation (``registration.phase_cross_correlation``) is an efficient
7 method for determining translation offset between pairs of similar images.
8 However this approach relies on a near absence of rotation/scaling differences
9 between the images, which are typical in real-world examples.
10
11 To recover rotation and scaling differences between two images, we can take
12 advantage of two geometric properties of the log-polar transform and the
13 translation invariance of the frequency domain. First, rotation in Cartesian
14 space becomes translation along the angular coordinate (:math:`\theta`) axis
15 of log-polar space. Second, scaling in Cartesian space becomes translation
16 along the radial coordinate (:math:`\rho = \ln\sqrt{x^2 + y^2}`) of log-polar
17 space. Finally, differences in translation in the spatial domain do not impact
18 magnitude spectrum in the frequency domain.
19
20 In this series of examples, we build on these concepts to show how the
21 log-polar transform ``transform.warp_polar`` can be used in conjunction with
22 phase correlation to recover rotation and scaling differences between two
23 images that also have a translation offset.
24 """
25
26 ######################################################################
27 # Recover rotation difference with a polar transform
28 # ==================================================
29 #
30 # In this first example, we consider the simple case of two images that only
31 # differ with respect to rotation around a common center point. By remapping
32 # these images into polar space, the rotation difference becomes a simple
33 # translation difference that can be recovered by phase correlation.
34
35 import numpy as np
36 import matplotlib.pyplot as plt
37
38 from skimage import data
39 from skimage.registration import phase_cross_correlation
40 from skimage.transform import warp_polar, rotate, rescale
41 from skimage.util import img_as_float
42
43 radius = 705
44 angle = 35
45 image = data.retina()
46 image = img_as_float(image)
47 rotated = rotate(image, angle)
48 image_polar = warp_polar(image, radius=radius, channel_axis=-1)
49 rotated_polar = warp_polar(rotated, radius=radius, channel_axis=-1)
50
51 fig, axes = plt.subplots(2, 2, figsize=(8, 8))
52 ax = axes.ravel()
53 ax[0].set_title("Original")
54 ax[0].imshow(image)
55 ax[1].set_title("Rotated")
56 ax[1].imshow(rotated)
57 ax[2].set_title("Polar-Transformed Original")
58 ax[2].imshow(image_polar)
59 ax[3].set_title("Polar-Transformed Rotated")
60 ax[3].imshow(rotated_polar)
61 plt.show()
62
63 shifts, error, phasediff = phase_cross_correlation(image_polar, rotated_polar)
64 print(f'Expected value for counterclockwise rotation in degrees: '
65 f'{angle}')
66 print(f'Recovered value for counterclockwise rotation: '
67 f'{shifts[0]}')
68
69 ######################################################################
70 # Recover rotation and scaling differences with log-polar transform
71 # =================================================================
72 #
73 # In this second example, the images differ by both rotation and scaling (note
74 # the axis tick values). By remapping these images into log-polar space,
75 # we can recover rotation as before, and now also scaling, by phase
76 # correlation.
77
78 # radius must be large enough to capture useful info in larger image
79 radius = 1500
80 angle = 53.7
81 scale = 2.2
82 image = data.retina()
83 image = img_as_float(image)
84 rotated = rotate(image, angle)
85 rescaled = rescale(rotated, scale, channel_axis=-1)
86 image_polar = warp_polar(image, radius=radius,
87 scaling='log', channel_axis=-1)
88 rescaled_polar = warp_polar(rescaled, radius=radius,
89 scaling='log', channel_axis=-1)
90
91 fig, axes = plt.subplots(2, 2, figsize=(8, 8))
92 ax = axes.ravel()
93 ax[0].set_title("Original")
94 ax[0].imshow(image)
95 ax[1].set_title("Rotated and Rescaled")
96 ax[1].imshow(rescaled)
97 ax[2].set_title("Log-Polar-Transformed Original")
98 ax[2].imshow(image_polar)
99 ax[3].set_title("Log-Polar-Transformed Rotated and Rescaled")
100 ax[3].imshow(rescaled_polar)
101 plt.show()
102
103 # setting `upsample_factor` can increase precision
104 shifts, error, phasediff = phase_cross_correlation(image_polar, rescaled_polar,
105 upsample_factor=20)
106 shiftr, shiftc = shifts[:2]
107
108 # Calculate scale factor from translation
109 klog = radius / np.log(radius)
110 shift_scale = 1 / (np.exp(shiftc / klog))
111
112 print(f'Expected value for cc rotation in degrees: {angle}')
113 print(f'Recovered value for cc rotation: {shiftr}')
114 print()
115 print(f'Expected value for scaling difference: {scale}')
116 print(f'Recovered value for scaling difference: {shift_scale}')
117
118 ######################################################################
119 # Register rotation and scaling on a translated image - Part 1
120 # =================================================================
121 #
122 # The above examples only work when the images to be registered share a
123 # center. However, it is more often the case that there is also a translation
124 # component to the difference between two images to be registered. One
125 # approach to register rotation, scaling and translation is to first correct
126 # for rotation and scaling, then solve for translation. It is possible to
127 # resolve rotation and scaling differences for translated images by working on
128 # the magnitude spectra of the Fourier-transformed images.
129 #
130 # In this next example, we first show how the above approaches fail when two
131 # images differ by rotation, scaling, and translation.
132
133 from skimage.color import rgb2gray
134 from skimage.filters import window, difference_of_gaussians
135 from scipy.fft import fft2, fftshift
136
137 angle = 24
138 scale = 1.4
139 shiftr = 30
140 shiftc = 15
141
142 image = rgb2gray(data.retina())
143 translated = image[shiftr:, shiftc:]
144 rotated = rotate(translated, angle)
145 rescaled = rescale(rotated, scale)
146 sizer, sizec = image.shape
147 rts_image = rescaled[:sizer, :sizec]
148
149 # When center is not shared, log-polar transform is not helpful!
150 radius = 705
151 warped_image = warp_polar(image, radius=radius, scaling="log")
152 warped_rts = warp_polar(rts_image, radius=radius, scaling="log")
153 shifts, error, phasediff = phase_cross_correlation(warped_image, warped_rts,
154 upsample_factor=20)
155 shiftr, shiftc = shifts[:2]
156 klog = radius / np.log(radius)
157 shift_scale = 1 / (np.exp(shiftc / klog))
158
159 fig, axes = plt.subplots(2, 2, figsize=(8, 8))
160 ax = axes.ravel()
161 ax[0].set_title("Original Image")
162 ax[0].imshow(image, cmap='gray')
163 ax[1].set_title("Modified Image")
164 ax[1].imshow(rts_image, cmap='gray')
165 ax[2].set_title("Log-Polar-Transformed Original")
166 ax[2].imshow(warped_image)
167 ax[3].set_title("Log-Polar-Transformed Modified")
168 ax[3].imshow(warped_rts)
169 fig.suptitle('log-polar-based registration fails when no shared center')
170 plt.show()
171
172 print(f'Expected value for cc rotation in degrees: {angle}')
173 print(f'Recovered value for cc rotation: {shiftr}')
174 print()
175 print(f'Expected value for scaling difference: {scale}')
176 print(f'Recovered value for scaling difference: {shift_scale}')
177
178 ######################################################################
179 # Register rotation and scaling on a translated image - Part 2
180 # =================================================================
181 #
182 # We next show how rotation and scaling differences, but not translation
183 # differences, are apparent in the frequency magnitude spectra of the images.
184 # These differences can be recovered by treating the magnitude spectra as
185 # images themselves, and applying the same log-polar + phase correlation
186 # approach taken above.
187
188 # First, band-pass filter both images
189 image = difference_of_gaussians(image, 5, 20)
190 rts_image = difference_of_gaussians(rts_image, 5, 20)
191
192 # window images
193 wimage = image * window('hann', image.shape)
194 rts_wimage = rts_image * window('hann', image.shape)
195
196 # work with shifted FFT magnitudes
197 image_fs = np.abs(fftshift(fft2(wimage)))
198 rts_fs = np.abs(fftshift(fft2(rts_wimage)))
199
200 # Create log-polar transformed FFT mag images and register
201 shape = image_fs.shape
202 radius = shape[0] // 8 # only take lower frequencies
203 warped_image_fs = warp_polar(image_fs, radius=radius, output_shape=shape,
204 scaling='log', order=0)
205 warped_rts_fs = warp_polar(rts_fs, radius=radius, output_shape=shape,
206 scaling='log', order=0)
207
208 warped_image_fs = warped_image_fs[:shape[0] // 2, :] # only use half of FFT
209 warped_rts_fs = warped_rts_fs[:shape[0] // 2, :]
210 shifts, error, phasediff = phase_cross_correlation(warped_image_fs,
211 warped_rts_fs,
212 upsample_factor=10)
213
214 # Use translation parameters to calculate rotation and scaling parameters
215 shiftr, shiftc = shifts[:2]
216 recovered_angle = (360 / shape[0]) * shiftr
217 klog = shape[1] / np.log(radius)
218 shift_scale = np.exp(shiftc / klog)
219
220 fig, axes = plt.subplots(2, 2, figsize=(8, 8))
221 ax = axes.ravel()
222 ax[0].set_title("Original Image FFT\n(magnitude; zoomed)")
223 center = np.array(shape) // 2
224 ax[0].imshow(image_fs[center[0] - radius:center[0] + radius,
225 center[1] - radius:center[1] + radius],
226 cmap='magma')
227 ax[1].set_title("Modified Image FFT\n(magnitude; zoomed)")
228 ax[1].imshow(rts_fs[center[0] - radius:center[0] + radius,
229 center[1] - radius:center[1] + radius],
230 cmap='magma')
231 ax[2].set_title("Log-Polar-Transformed\nOriginal FFT")
232 ax[2].imshow(warped_image_fs, cmap='magma')
233 ax[3].set_title("Log-Polar-Transformed\nModified FFT")
234 ax[3].imshow(warped_rts_fs, cmap='magma')
235 fig.suptitle('Working in frequency domain can recover rotation and scaling')
236 plt.show()
237
238 print(f'Expected value for cc rotation in degrees: {angle}')
239 print(f'Recovered value for cc rotation: {recovered_angle}')
240 print()
241 print(f'Expected value for scaling difference: {scale}')
242 print(f'Recovered value for scaling difference: {shift_scale}')
243
244 ######################################################################
245 # Some notes on this approach
246 # =================================================================
247 #
248 # It should be noted that this approach relies on a couple of parameters
249 # that have to be chosen ahead of time, and for which there are no clearly
250 # optimal choices:
251 #
252 # 1. The images should have some degree of bandpass filtering
253 # applied, particularly to remove high frequencies, and different choices here
254 # may impact outcome. The bandpass filter also complicates matters because,
255 # since the images to be registered will differ in scale and these scale
256 # differences are unknown, any bandpass filter will necessarily attenuate
257 # different features between the images. For example, the log-polar transformed
258 # magnitude spectra don't really look "alike" in the last example here. Yet if
259 # you look closely, there are some common patterns in those spectra, and they
260 # do end up aligning well by phase correlation as demonstrated.
261 #
262 # 2. Images must be windowed using windows with circular symmetry, to remove
263 # the spectral leakage coming from image borders. There is no clearly optimal
264 # choice of window.
265 #
266 # Finally, we note that large changes in scale will dramatically alter the
267 # magnitude spectra, especially since a big change in scale will usually be
268 # accompanied by some cropping and loss of information content. The literature
269 # advises staying within 1.8-2x scale change [1]_ [2]_. This is fine for most
270 # biological imaging applications.
271 #
272 # References
273 # ----------
274 #
275 # .. [1] B.S. Reddy and B.N. Chatterji. An FFT-based technique for translation,
276 # rotation and scale- invariant image registration. IEEE Trans. Image
277 # Processing, 5(8):1266–1271, 1996. :DOI:`10.1109/83.506761`
278 #
279 # .. [2] Tzimiropoulos, Georgios, and Tania Stathaki. "Robust FFT-based
280 # scale-invariant image registration." In 4th SEAS DTC Technical
281 # Conference. 2009. :DOI:`10.1109/TPAMI.2010.107`
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doc/examples/registration/plot_register_rotation.py b/doc/examples/registration/plot_register_rotation.py
--- a/doc/examples/registration/plot_register_rotation.py
+++ b/doc/examples/registration/plot_register_rotation.py
@@ -60,7 +60,9 @@
ax[3].imshow(rotated_polar)
plt.show()
-shifts, error, phasediff = phase_cross_correlation(image_polar, rotated_polar)
+shifts, error, phasediff = phase_cross_correlation(image_polar,
+ rotated_polar,
+ normalization=None)
print(f'Expected value for counterclockwise rotation in degrees: '
f'{angle}')
print(f'Recovered value for counterclockwise rotation: '
@@ -102,7 +104,8 @@
# setting `upsample_factor` can increase precision
shifts, error, phasediff = phase_cross_correlation(image_polar, rescaled_polar,
- upsample_factor=20)
+ upsample_factor=20,
+ normalization=None)
shiftr, shiftc = shifts[:2]
# Calculate scale factor from translation
@@ -151,7 +154,8 @@
warped_image = warp_polar(image, radius=radius, scaling="log")
warped_rts = warp_polar(rts_image, radius=radius, scaling="log")
shifts, error, phasediff = phase_cross_correlation(warped_image, warped_rts,
- upsample_factor=20)
+ upsample_factor=20,
+ normalization=None)
shiftr, shiftc = shifts[:2]
klog = radius / np.log(radius)
shift_scale = 1 / (np.exp(shiftc / klog))
@@ -209,7 +213,8 @@
warped_rts_fs = warped_rts_fs[:shape[0] // 2, :]
shifts, error, phasediff = phase_cross_correlation(warped_image_fs,
warped_rts_fs,
- upsample_factor=10)
+ upsample_factor=10,
+ normalization=None)
# Use translation parameters to calculate rotation and scaling parameters
shiftr, shiftc = shifts[:2]
| {"golden_diff": "diff --git a/doc/examples/registration/plot_register_rotation.py b/doc/examples/registration/plot_register_rotation.py\n--- a/doc/examples/registration/plot_register_rotation.py\n+++ b/doc/examples/registration/plot_register_rotation.py\n@@ -60,7 +60,9 @@\n ax[3].imshow(rotated_polar)\n plt.show()\n \n-shifts, error, phasediff = phase_cross_correlation(image_polar, rotated_polar)\n+shifts, error, phasediff = phase_cross_correlation(image_polar,\n+ rotated_polar,\n+ normalization=None)\n print(f'Expected value for counterclockwise rotation in degrees: '\n f'{angle}')\n print(f'Recovered value for counterclockwise rotation: '\n@@ -102,7 +104,8 @@\n \n # setting `upsample_factor` can increase precision\n shifts, error, phasediff = phase_cross_correlation(image_polar, rescaled_polar,\n- upsample_factor=20)\n+ upsample_factor=20,\n+ normalization=None)\n shiftr, shiftc = shifts[:2]\n \n # Calculate scale factor from translation\n@@ -151,7 +154,8 @@\n warped_image = warp_polar(image, radius=radius, scaling=\"log\")\n warped_rts = warp_polar(rts_image, radius=radius, scaling=\"log\")\n shifts, error, phasediff = phase_cross_correlation(warped_image, warped_rts,\n- upsample_factor=20)\n+ upsample_factor=20,\n+ normalization=None)\n shiftr, shiftc = shifts[:2]\n klog = radius / np.log(radius)\n shift_scale = 1 / (np.exp(shiftc / klog))\n@@ -209,7 +213,8 @@\n warped_rts_fs = warped_rts_fs[:shape[0] // 2, :]\n shifts, error, phasediff = phase_cross_correlation(warped_image_fs,\n warped_rts_fs,\n- upsample_factor=10)\n+ upsample_factor=10,\n+ normalization=None)\n \n # Use translation parameters to calculate rotation and scaling parameters\n shiftr, shiftc = shifts[:2]\n", "issue": "Gallery example 'Using Polar and Log-Polar Transformations for Registration' needs fixing\n## Description\r\nThe gallery example for Using Polar and Log-Polar Transformations for Registration seems to be broken as of 0.19.x. Was working in 0.18.x.\r\n\r\n## Way to reproduce\r\nUnsure if this is specific to the production doc build or an issue in the code. I am raising this issue to get it on the board before starting my testing.\r\n\r\n\r\n## Version information\r\nSee description. I'll report back in this thread when I know more.\r\n\n", "before_files": [{"content": "r\"\"\"\n==========================================================\nUsing Polar and Log-Polar Transformations for Registration\n==========================================================\n\nPhase correlation (``registration.phase_cross_correlation``) is an efficient\nmethod for determining translation offset between pairs of similar images.\nHowever this approach relies on a near absence of rotation/scaling differences\nbetween the images, which are typical in real-world examples.\n\nTo recover rotation and scaling differences between two images, we can take\nadvantage of two geometric properties of the log-polar transform and the\ntranslation invariance of the frequency domain. First, rotation in Cartesian\nspace becomes translation along the angular coordinate (:math:`\\theta`) axis\nof log-polar space. Second, scaling in Cartesian space becomes translation\nalong the radial coordinate (:math:`\\rho = \\ln\\sqrt{x^2 + y^2}`) of log-polar\nspace. 
Finally, differences in translation in the spatial domain do not impact\nmagnitude spectrum in the frequency domain.\n\nIn this series of examples, we build on these concepts to show how the\nlog-polar transform ``transform.warp_polar`` can be used in conjunction with\nphase correlation to recover rotation and scaling differences between two\nimages that also have a translation offset.\n\"\"\"\n\n######################################################################\n# Recover rotation difference with a polar transform\n# ==================================================\n#\n# In this first example, we consider the simple case of two images that only\n# differ with respect to rotation around a common center point. By remapping\n# these images into polar space, the rotation difference becomes a simple\n# translation difference that can be recovered by phase correlation.\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage import data\nfrom skimage.registration import phase_cross_correlation\nfrom skimage.transform import warp_polar, rotate, rescale\nfrom skimage.util import img_as_float\n\nradius = 705\nangle = 35\nimage = data.retina()\nimage = img_as_float(image)\nrotated = rotate(image, angle)\nimage_polar = warp_polar(image, radius=radius, channel_axis=-1)\nrotated_polar = warp_polar(rotated, radius=radius, channel_axis=-1)\n\nfig, axes = plt.subplots(2, 2, figsize=(8, 8))\nax = axes.ravel()\nax[0].set_title(\"Original\")\nax[0].imshow(image)\nax[1].set_title(\"Rotated\")\nax[1].imshow(rotated)\nax[2].set_title(\"Polar-Transformed Original\")\nax[2].imshow(image_polar)\nax[3].set_title(\"Polar-Transformed Rotated\")\nax[3].imshow(rotated_polar)\nplt.show()\n\nshifts, error, phasediff = phase_cross_correlation(image_polar, rotated_polar)\nprint(f'Expected value for counterclockwise rotation in degrees: '\n f'{angle}')\nprint(f'Recovered value for counterclockwise rotation: '\n f'{shifts[0]}')\n\n######################################################################\n# Recover rotation and scaling differences with log-polar transform\n# =================================================================\n#\n# In this second example, the images differ by both rotation and scaling (note\n# the axis tick values). 
By remapping these images into log-polar space,\n# we can recover rotation as before, and now also scaling, by phase\n# correlation.\n\n# radius must be large enough to capture useful info in larger image\nradius = 1500\nangle = 53.7\nscale = 2.2\nimage = data.retina()\nimage = img_as_float(image)\nrotated = rotate(image, angle)\nrescaled = rescale(rotated, scale, channel_axis=-1)\nimage_polar = warp_polar(image, radius=radius,\n scaling='log', channel_axis=-1)\nrescaled_polar = warp_polar(rescaled, radius=radius,\n scaling='log', channel_axis=-1)\n\nfig, axes = plt.subplots(2, 2, figsize=(8, 8))\nax = axes.ravel()\nax[0].set_title(\"Original\")\nax[0].imshow(image)\nax[1].set_title(\"Rotated and Rescaled\")\nax[1].imshow(rescaled)\nax[2].set_title(\"Log-Polar-Transformed Original\")\nax[2].imshow(image_polar)\nax[3].set_title(\"Log-Polar-Transformed Rotated and Rescaled\")\nax[3].imshow(rescaled_polar)\nplt.show()\n\n# setting `upsample_factor` can increase precision\nshifts, error, phasediff = phase_cross_correlation(image_polar, rescaled_polar,\n upsample_factor=20)\nshiftr, shiftc = shifts[:2]\n\n# Calculate scale factor from translation\nklog = radius / np.log(radius)\nshift_scale = 1 / (np.exp(shiftc / klog))\n\nprint(f'Expected value for cc rotation in degrees: {angle}')\nprint(f'Recovered value for cc rotation: {shiftr}')\nprint()\nprint(f'Expected value for scaling difference: {scale}')\nprint(f'Recovered value for scaling difference: {shift_scale}')\n\n######################################################################\n# Register rotation and scaling on a translated image - Part 1\n# =================================================================\n#\n# The above examples only work when the images to be registered share a\n# center. However, it is more often the case that there is also a translation\n# component to the difference between two images to be registered. One\n# approach to register rotation, scaling and translation is to first correct\n# for rotation and scaling, then solve for translation. 
It is possible to\n# resolve rotation and scaling differences for translated images by working on\n# the magnitude spectra of the Fourier-transformed images.\n#\n# In this next example, we first show how the above approaches fail when two\n# images differ by rotation, scaling, and translation.\n\nfrom skimage.color import rgb2gray\nfrom skimage.filters import window, difference_of_gaussians\nfrom scipy.fft import fft2, fftshift\n\nangle = 24\nscale = 1.4\nshiftr = 30\nshiftc = 15\n\nimage = rgb2gray(data.retina())\ntranslated = image[shiftr:, shiftc:]\nrotated = rotate(translated, angle)\nrescaled = rescale(rotated, scale)\nsizer, sizec = image.shape\nrts_image = rescaled[:sizer, :sizec]\n\n# When center is not shared, log-polar transform is not helpful!\nradius = 705\nwarped_image = warp_polar(image, radius=radius, scaling=\"log\")\nwarped_rts = warp_polar(rts_image, radius=radius, scaling=\"log\")\nshifts, error, phasediff = phase_cross_correlation(warped_image, warped_rts,\n upsample_factor=20)\nshiftr, shiftc = shifts[:2]\nklog = radius / np.log(radius)\nshift_scale = 1 / (np.exp(shiftc / klog))\n\nfig, axes = plt.subplots(2, 2, figsize=(8, 8))\nax = axes.ravel()\nax[0].set_title(\"Original Image\")\nax[0].imshow(image, cmap='gray')\nax[1].set_title(\"Modified Image\")\nax[1].imshow(rts_image, cmap='gray')\nax[2].set_title(\"Log-Polar-Transformed Original\")\nax[2].imshow(warped_image)\nax[3].set_title(\"Log-Polar-Transformed Modified\")\nax[3].imshow(warped_rts)\nfig.suptitle('log-polar-based registration fails when no shared center')\nplt.show()\n\nprint(f'Expected value for cc rotation in degrees: {angle}')\nprint(f'Recovered value for cc rotation: {shiftr}')\nprint()\nprint(f'Expected value for scaling difference: {scale}')\nprint(f'Recovered value for scaling difference: {shift_scale}')\n\n######################################################################\n# Register rotation and scaling on a translated image - Part 2\n# =================================================================\n#\n# We next show how rotation and scaling differences, but not translation\n# differences, are apparent in the frequency magnitude spectra of the images.\n# These differences can be recovered by treating the magnitude spectra as\n# images themselves, and applying the same log-polar + phase correlation\n# approach taken above.\n\n# First, band-pass filter both images\nimage = difference_of_gaussians(image, 5, 20)\nrts_image = difference_of_gaussians(rts_image, 5, 20)\n\n# window images\nwimage = image * window('hann', image.shape)\nrts_wimage = rts_image * window('hann', image.shape)\n\n# work with shifted FFT magnitudes\nimage_fs = np.abs(fftshift(fft2(wimage)))\nrts_fs = np.abs(fftshift(fft2(rts_wimage)))\n\n# Create log-polar transformed FFT mag images and register\nshape = image_fs.shape\nradius = shape[0] // 8 # only take lower frequencies\nwarped_image_fs = warp_polar(image_fs, radius=radius, output_shape=shape,\n scaling='log', order=0)\nwarped_rts_fs = warp_polar(rts_fs, radius=radius, output_shape=shape,\n scaling='log', order=0)\n\nwarped_image_fs = warped_image_fs[:shape[0] // 2, :] # only use half of FFT\nwarped_rts_fs = warped_rts_fs[:shape[0] // 2, :]\nshifts, error, phasediff = phase_cross_correlation(warped_image_fs,\n warped_rts_fs,\n upsample_factor=10)\n\n# Use translation parameters to calculate rotation and scaling parameters\nshiftr, shiftc = shifts[:2]\nrecovered_angle = (360 / shape[0]) * shiftr\nklog = shape[1] / np.log(radius)\nshift_scale = np.exp(shiftc / 
klog)\n\nfig, axes = plt.subplots(2, 2, figsize=(8, 8))\nax = axes.ravel()\nax[0].set_title(\"Original Image FFT\\n(magnitude; zoomed)\")\ncenter = np.array(shape) // 2\nax[0].imshow(image_fs[center[0] - radius:center[0] + radius,\n center[1] - radius:center[1] + radius],\n cmap='magma')\nax[1].set_title(\"Modified Image FFT\\n(magnitude; zoomed)\")\nax[1].imshow(rts_fs[center[0] - radius:center[0] + radius,\n center[1] - radius:center[1] + radius],\n cmap='magma')\nax[2].set_title(\"Log-Polar-Transformed\\nOriginal FFT\")\nax[2].imshow(warped_image_fs, cmap='magma')\nax[3].set_title(\"Log-Polar-Transformed\\nModified FFT\")\nax[3].imshow(warped_rts_fs, cmap='magma')\nfig.suptitle('Working in frequency domain can recover rotation and scaling')\nplt.show()\n\nprint(f'Expected value for cc rotation in degrees: {angle}')\nprint(f'Recovered value for cc rotation: {recovered_angle}')\nprint()\nprint(f'Expected value for scaling difference: {scale}')\nprint(f'Recovered value for scaling difference: {shift_scale}')\n\n######################################################################\n# Some notes on this approach\n# =================================================================\n#\n# It should be noted that this approach relies on a couple of parameters\n# that have to be chosen ahead of time, and for which there are no clearly\n# optimal choices:\n#\n# 1. The images should have some degree of bandpass filtering\n# applied, particularly to remove high frequencies, and different choices here\n# may impact outcome. The bandpass filter also complicates matters because,\n# since the images to be registered will differ in scale and these scale\n# differences are unknown, any bandpass filter will necessarily attenuate\n# different features between the images. For example, the log-polar transformed\n# magnitude spectra don't really look \"alike\" in the last example here. Yet if\n# you look closely, there are some common patterns in those spectra, and they\n# do end up aligning well by phase correlation as demonstrated.\n#\n# 2. Images must be windowed using windows with circular symmetry, to remove\n# the spectral leakage coming from image borders. There is no clearly optimal\n# choice of window.\n#\n# Finally, we note that large changes in scale will dramatically alter the\n# magnitude spectra, especially since a big change in scale will usually be\n# accompanied by some cropping and loss of information content. The literature\n# advises staying within 1.8-2x scale change [1]_ [2]_. This is fine for most\n# biological imaging applications.\n#\n# References\n# ----------\n#\n# .. [1] B.S. Reddy and B.N. Chatterji. An FFT-based technique for translation,\n# rotation and scale- invariant image registration. IEEE Trans. Image\n# Processing, 5(8):1266\u20131271, 1996. :DOI:`10.1109/83.506761`\n#\n# .. [2] Tzimiropoulos, Georgios, and Tania Stathaki. \"Robust FFT-based\n# scale-invariant image registration.\" In 4th SEAS DTC Technical\n# Conference. 2009. 
:DOI:`10.1109/TPAMI.2010.107`\n", "path": "doc/examples/registration/plot_register_rotation.py"}], "after_files": [{"content": "r\"\"\"\n==========================================================\nUsing Polar and Log-Polar Transformations for Registration\n==========================================================\n\nPhase correlation (``registration.phase_cross_correlation``) is an efficient\nmethod for determining translation offset between pairs of similar images.\nHowever this approach relies on a near absence of rotation/scaling differences\nbetween the images, which are typical in real-world examples.\n\nTo recover rotation and scaling differences between two images, we can take\nadvantage of two geometric properties of the log-polar transform and the\ntranslation invariance of the frequency domain. First, rotation in Cartesian\nspace becomes translation along the angular coordinate (:math:`\\theta`) axis\nof log-polar space. Second, scaling in Cartesian space becomes translation\nalong the radial coordinate (:math:`\\rho = \\ln\\sqrt{x^2 + y^2}`) of log-polar\nspace. Finally, differences in translation in the spatial domain do not impact\nmagnitude spectrum in the frequency domain.\n\nIn this series of examples, we build on these concepts to show how the\nlog-polar transform ``transform.warp_polar`` can be used in conjunction with\nphase correlation to recover rotation and scaling differences between two\nimages that also have a translation offset.\n\"\"\"\n\n######################################################################\n# Recover rotation difference with a polar transform\n# ==================================================\n#\n# In this first example, we consider the simple case of two images that only\n# differ with respect to rotation around a common center point. By remapping\n# these images into polar space, the rotation difference becomes a simple\n# translation difference that can be recovered by phase correlation.\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom skimage import data\nfrom skimage.registration import phase_cross_correlation\nfrom skimage.transform import warp_polar, rotate, rescale\nfrom skimage.util import img_as_float\n\nradius = 705\nangle = 35\nimage = data.retina()\nimage = img_as_float(image)\nrotated = rotate(image, angle)\nimage_polar = warp_polar(image, radius=radius, channel_axis=-1)\nrotated_polar = warp_polar(rotated, radius=radius, channel_axis=-1)\n\nfig, axes = plt.subplots(2, 2, figsize=(8, 8))\nax = axes.ravel()\nax[0].set_title(\"Original\")\nax[0].imshow(image)\nax[1].set_title(\"Rotated\")\nax[1].imshow(rotated)\nax[2].set_title(\"Polar-Transformed Original\")\nax[2].imshow(image_polar)\nax[3].set_title(\"Polar-Transformed Rotated\")\nax[3].imshow(rotated_polar)\nplt.show()\n\nshifts, error, phasediff = phase_cross_correlation(image_polar,\n rotated_polar,\n normalization=None)\nprint(f'Expected value for counterclockwise rotation in degrees: '\n f'{angle}')\nprint(f'Recovered value for counterclockwise rotation: '\n f'{shifts[0]}')\n\n######################################################################\n# Recover rotation and scaling differences with log-polar transform\n# =================================================================\n#\n# In this second example, the images differ by both rotation and scaling (note\n# the axis tick values). 
By remapping these images into log-polar space,\n# we can recover rotation as before, and now also scaling, by phase\n# correlation.\n\n# radius must be large enough to capture useful info in larger image\nradius = 1500\nangle = 53.7\nscale = 2.2\nimage = data.retina()\nimage = img_as_float(image)\nrotated = rotate(image, angle)\nrescaled = rescale(rotated, scale, channel_axis=-1)\nimage_polar = warp_polar(image, radius=radius,\n scaling='log', channel_axis=-1)\nrescaled_polar = warp_polar(rescaled, radius=radius,\n scaling='log', channel_axis=-1)\n\nfig, axes = plt.subplots(2, 2, figsize=(8, 8))\nax = axes.ravel()\nax[0].set_title(\"Original\")\nax[0].imshow(image)\nax[1].set_title(\"Rotated and Rescaled\")\nax[1].imshow(rescaled)\nax[2].set_title(\"Log-Polar-Transformed Original\")\nax[2].imshow(image_polar)\nax[3].set_title(\"Log-Polar-Transformed Rotated and Rescaled\")\nax[3].imshow(rescaled_polar)\nplt.show()\n\n# setting `upsample_factor` can increase precision\nshifts, error, phasediff = phase_cross_correlation(image_polar, rescaled_polar,\n upsample_factor=20,\n normalization=None)\nshiftr, shiftc = shifts[:2]\n\n# Calculate scale factor from translation\nklog = radius / np.log(radius)\nshift_scale = 1 / (np.exp(shiftc / klog))\n\nprint(f'Expected value for cc rotation in degrees: {angle}')\nprint(f'Recovered value for cc rotation: {shiftr}')\nprint()\nprint(f'Expected value for scaling difference: {scale}')\nprint(f'Recovered value for scaling difference: {shift_scale}')\n\n######################################################################\n# Register rotation and scaling on a translated image - Part 1\n# =================================================================\n#\n# The above examples only work when the images to be registered share a\n# center. However, it is more often the case that there is also a translation\n# component to the difference between two images to be registered. One\n# approach to register rotation, scaling and translation is to first correct\n# for rotation and scaling, then solve for translation. 
It is possible to\n# resolve rotation and scaling differences for translated images by working on\n# the magnitude spectra of the Fourier-transformed images.\n#\n# In this next example, we first show how the above approaches fail when two\n# images differ by rotation, scaling, and translation.\n\nfrom skimage.color import rgb2gray\nfrom skimage.filters import window, difference_of_gaussians\nfrom scipy.fft import fft2, fftshift\n\nangle = 24\nscale = 1.4\nshiftr = 30\nshiftc = 15\n\nimage = rgb2gray(data.retina())\ntranslated = image[shiftr:, shiftc:]\nrotated = rotate(translated, angle)\nrescaled = rescale(rotated, scale)\nsizer, sizec = image.shape\nrts_image = rescaled[:sizer, :sizec]\n\n# When center is not shared, log-polar transform is not helpful!\nradius = 705\nwarped_image = warp_polar(image, radius=radius, scaling=\"log\")\nwarped_rts = warp_polar(rts_image, radius=radius, scaling=\"log\")\nshifts, error, phasediff = phase_cross_correlation(warped_image, warped_rts,\n upsample_factor=20,\n normalization=None)\nshiftr, shiftc = shifts[:2]\nklog = radius / np.log(radius)\nshift_scale = 1 / (np.exp(shiftc / klog))\n\nfig, axes = plt.subplots(2, 2, figsize=(8, 8))\nax = axes.ravel()\nax[0].set_title(\"Original Image\")\nax[0].imshow(image, cmap='gray')\nax[1].set_title(\"Modified Image\")\nax[1].imshow(rts_image, cmap='gray')\nax[2].set_title(\"Log-Polar-Transformed Original\")\nax[2].imshow(warped_image)\nax[3].set_title(\"Log-Polar-Transformed Modified\")\nax[3].imshow(warped_rts)\nfig.suptitle('log-polar-based registration fails when no shared center')\nplt.show()\n\nprint(f'Expected value for cc rotation in degrees: {angle}')\nprint(f'Recovered value for cc rotation: {shiftr}')\nprint()\nprint(f'Expected value for scaling difference: {scale}')\nprint(f'Recovered value for scaling difference: {shift_scale}')\n\n######################################################################\n# Register rotation and scaling on a translated image - Part 2\n# =================================================================\n#\n# We next show how rotation and scaling differences, but not translation\n# differences, are apparent in the frequency magnitude spectra of the images.\n# These differences can be recovered by treating the magnitude spectra as\n# images themselves, and applying the same log-polar + phase correlation\n# approach taken above.\n\n# First, band-pass filter both images\nimage = difference_of_gaussians(image, 5, 20)\nrts_image = difference_of_gaussians(rts_image, 5, 20)\n\n# window images\nwimage = image * window('hann', image.shape)\nrts_wimage = rts_image * window('hann', image.shape)\n\n# work with shifted FFT magnitudes\nimage_fs = np.abs(fftshift(fft2(wimage)))\nrts_fs = np.abs(fftshift(fft2(rts_wimage)))\n\n# Create log-polar transformed FFT mag images and register\nshape = image_fs.shape\nradius = shape[0] // 8 # only take lower frequencies\nwarped_image_fs = warp_polar(image_fs, radius=radius, output_shape=shape,\n scaling='log', order=0)\nwarped_rts_fs = warp_polar(rts_fs, radius=radius, output_shape=shape,\n scaling='log', order=0)\n\nwarped_image_fs = warped_image_fs[:shape[0] // 2, :] # only use half of FFT\nwarped_rts_fs = warped_rts_fs[:shape[0] // 2, :]\nshifts, error, phasediff = phase_cross_correlation(warped_image_fs,\n warped_rts_fs,\n upsample_factor=10,\n normalization=None)\n\n# Use translation parameters to calculate rotation and scaling parameters\nshiftr, shiftc = shifts[:2]\nrecovered_angle = (360 / shape[0]) * shiftr\nklog = shape[1] / 
np.log(radius)\nshift_scale = np.exp(shiftc / klog)\n\nfig, axes = plt.subplots(2, 2, figsize=(8, 8))\nax = axes.ravel()\nax[0].set_title(\"Original Image FFT\\n(magnitude; zoomed)\")\ncenter = np.array(shape) // 2\nax[0].imshow(image_fs[center[0] - radius:center[0] + radius,\n center[1] - radius:center[1] + radius],\n cmap='magma')\nax[1].set_title(\"Modified Image FFT\\n(magnitude; zoomed)\")\nax[1].imshow(rts_fs[center[0] - radius:center[0] + radius,\n center[1] - radius:center[1] + radius],\n cmap='magma')\nax[2].set_title(\"Log-Polar-Transformed\\nOriginal FFT\")\nax[2].imshow(warped_image_fs, cmap='magma')\nax[3].set_title(\"Log-Polar-Transformed\\nModified FFT\")\nax[3].imshow(warped_rts_fs, cmap='magma')\nfig.suptitle('Working in frequency domain can recover rotation and scaling')\nplt.show()\n\nprint(f'Expected value for cc rotation in degrees: {angle}')\nprint(f'Recovered value for cc rotation: {recovered_angle}')\nprint()\nprint(f'Expected value for scaling difference: {scale}')\nprint(f'Recovered value for scaling difference: {shift_scale}')\n\n######################################################################\n# Some notes on this approach\n# =================================================================\n#\n# It should be noted that this approach relies on a couple of parameters\n# that have to be chosen ahead of time, and for which there are no clearly\n# optimal choices:\n#\n# 1. The images should have some degree of bandpass filtering\n# applied, particularly to remove high frequencies, and different choices here\n# may impact outcome. The bandpass filter also complicates matters because,\n# since the images to be registered will differ in scale and these scale\n# differences are unknown, any bandpass filter will necessarily attenuate\n# different features between the images. For example, the log-polar transformed\n# magnitude spectra don't really look \"alike\" in the last example here. Yet if\n# you look closely, there are some common patterns in those spectra, and they\n# do end up aligning well by phase correlation as demonstrated.\n#\n# 2. Images must be windowed using windows with circular symmetry, to remove\n# the spectral leakage coming from image borders. There is no clearly optimal\n# choice of window.\n#\n# Finally, we note that large changes in scale will dramatically alter the\n# magnitude spectra, especially since a big change in scale will usually be\n# accompanied by some cropping and loss of information content. The literature\n# advises staying within 1.8-2x scale change [1]_ [2]_. This is fine for most\n# biological imaging applications.\n#\n# References\n# ----------\n#\n# .. [1] B.S. Reddy and B.N. Chatterji. An FFT-based technique for translation,\n# rotation and scale- invariant image registration. IEEE Trans. Image\n# Processing, 5(8):1266\u20131271, 1996. :DOI:`10.1109/83.506761`\n#\n# .. [2] Tzimiropoulos, Georgios, and Tania Stathaki. \"Robust FFT-based\n# scale-invariant image registration.\" In 4th SEAS DTC Technical\n# Conference. 2009. :DOI:`10.1109/TPAMI.2010.107`\n", "path": "doc/examples/registration/plot_register_rotation.py"}]} | 3,979 | 472 |
gh_patches_debug_38762 | rasdani/github-patches | git_diff | nilearn__nilearn-1225 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
remove examples/03_connectivity/plot_power_connectome.py ?
- Signal extraction from spheres placed on Power coordinates is already done in `examples/03_connectivity/plot_seed_based_connectome.py`
- Sparse inverse covariance estimation is already explained in `examples/03_connectivity/plot_inverse_covariance_connectome.py` for the MSDL atlas. For me, it doesn't really make a difference whether it is estimated on timeseries extracted from probabilistic maps or from spherical ROIs.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/03_connectivity/plot_power_connectome.py`
Content:
```
1 """
2 Extracting signals and plotting a connectome for the Power-264 seed-region atlas
3 ================================================================================
4
5 This example shows how to extract signals from spherical seed-regions based
6 on the Power-264 atlas (Power, 2011) and estimating a connectome using sparse
7 inverse covariance.
8
9 Power, Jonathan D., et al. "Functional network organization of the
10 human brain." Neuron 72.4 (2011): 665-678.
11
12 """
13
14 import numpy as np
15 import matplotlib.pyplot as plt
16 from nilearn import datasets, connectome, plotting, input_data
17
18
19 ###############################################################################
20 # Atlas and dataset fetching
21
22 # Fetch the coordinates of power atlas
23 power = datasets.fetch_coords_power_2011()
24 power_coords = np.vstack((
25 power.rois['x'],
26 power.rois['y'],
27 power.rois['z'],
28 )).T
29
30 # Fetch the first subject of ADHD dataset
31 adhd = datasets.fetch_adhd(n_subjects=1)
32
33
34 ###############################################################################
35 # Masking: taking the signal in a sphere of radius 5mm around Power coords
36
37 masker = input_data.NiftiSpheresMasker(seeds=power_coords,
38 smoothing_fwhm=4,
39 radius=5.,
40 standardize=True,
41 detrend=True,
42 low_pass=0.1,
43 high_pass=0.01,
44 t_r=2.5)
45
46 timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])
47
48 ###############################################################################
49 # Extract and plot correlation matrix
50
51 # calculate connectivity and plot Power-264 correlation matrix
52 connectivity = connectome.ConnectivityMeasure(kind='correlation')
53 corr_matrix = connectivity.fit_transform([timeseries])[0]
54 np.fill_diagonal(corr_matrix, 0)
55 plt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')
56 plt.colorbar()
57 plt.title('Power 264 Connectivity')
58
59 # Plot the connectome
60
61 plotting.plot_connectome(corr_matrix,
62 power_coords,
63 edge_threshold='99.8%',
64 node_size=20)
65
66
67 ###############################################################################
68 # Extract and plot covariance and sparse covariance
69
70 # Compute the sparse inverse covariance
71 from sklearn.covariance import GraphLassoCV
72
73 estimator = GraphLassoCV()
74 estimator.fit(timeseries)
75
76 # Display the covariance
77 plt.figure(figsize=(5, 5))
78 plt.imshow(estimator.covariance_, interpolation="nearest",
79 vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
80 plt.title('Covariance matrix')
81
82 # display the corresponding graph
83 plotting.plot_connectome(estimator.covariance_,
84 power_coords,
85 title='Covariance connectome',
86 edge_threshold='99.8%',
87 node_size=20)
88
89 # Display the sparse inverse covariance
90 plt.figure(figsize=(5, 5))
91 plt.imshow(estimator.precision_, interpolation="nearest",
92 vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
93 plt.title('Precision matrix')
94
95 # And now display the corresponding graph
96 plotting.plot_connectome(estimator.precision_, power_coords,
97 title='Precision connectome',
98 edge_threshold="99.8%",
99 node_size=20)
100 plotting.show()
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/03_connectivity/plot_power_connectome.py b/examples/03_connectivity/plot_power_connectome.py
deleted file mode 100644
--- a/examples/03_connectivity/plot_power_connectome.py
+++ /dev/null
@@ -1,100 +0,0 @@
-"""
-Extracting signals and plotting a connectome for the Power-264 seed-region atlas
-================================================================================
-
-This example shows how to extract signals from spherical seed-regions based
-on the Power-264 atlas (Power, 2011) and estimating a connectome using sparse
-inverse covariance.
-
-Power, Jonathan D., et al. "Functional network organization of the
-human brain." Neuron 72.4 (2011): 665-678.
-
-"""
-
-import numpy as np
-import matplotlib.pyplot as plt
-from nilearn import datasets, connectome, plotting, input_data
-
-
-###############################################################################
-# Atlas and dataset fetching
-
-# Fetch the coordinates of power atlas
-power = datasets.fetch_coords_power_2011()
-power_coords = np.vstack((
- power.rois['x'],
- power.rois['y'],
- power.rois['z'],
-)).T
-
-# Fetch the first subject of ADHD dataset
-adhd = datasets.fetch_adhd(n_subjects=1)
-
-
-###############################################################################
-# Masking: taking the signal in a sphere of radius 5mm around Power coords
-
-masker = input_data.NiftiSpheresMasker(seeds=power_coords,
- smoothing_fwhm=4,
- radius=5.,
- standardize=True,
- detrend=True,
- low_pass=0.1,
- high_pass=0.01,
- t_r=2.5)
-
-timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])
-
-###############################################################################
-# Extract and plot correlation matrix
-
-# calculate connectivity and plot Power-264 correlation matrix
-connectivity = connectome.ConnectivityMeasure(kind='correlation')
-corr_matrix = connectivity.fit_transform([timeseries])[0]
-np.fill_diagonal(corr_matrix, 0)
-plt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')
-plt.colorbar()
-plt.title('Power 264 Connectivity')
-
-# Plot the connectome
-
-plotting.plot_connectome(corr_matrix,
- power_coords,
- edge_threshold='99.8%',
- node_size=20)
-
-
-###############################################################################
-# Extract and plot covariance and sparse covariance
-
-# Compute the sparse inverse covariance
-from sklearn.covariance import GraphLassoCV
-
-estimator = GraphLassoCV()
-estimator.fit(timeseries)
-
-# Display the covariance
-plt.figure(figsize=(5, 5))
-plt.imshow(estimator.covariance_, interpolation="nearest",
- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
-plt.title('Covariance matrix')
-
-# display the corresponding graph
-plotting.plot_connectome(estimator.covariance_,
- power_coords,
- title='Covariance connectome',
- edge_threshold='99.8%',
- node_size=20)
-
-# Display the sparse inverse covariance
-plt.figure(figsize=(5, 5))
-plt.imshow(estimator.precision_, interpolation="nearest",
- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
-plt.title('Precision matrix')
-
-# And now display the corresponding graph
-plotting.plot_connectome(estimator.precision_, power_coords,
- title='Precision connectome',
- edge_threshold="99.8%",
- node_size=20)
-plotting.show()
| {"golden_diff": "diff --git a/examples/03_connectivity/plot_power_connectome.py b/examples/03_connectivity/plot_power_connectome.py\ndeleted file mode 100644\n--- a/examples/03_connectivity/plot_power_connectome.py\n+++ /dev/null\n@@ -1,100 +0,0 @@\n-\"\"\"\n-Extracting signals and plotting a connectome for the Power-264 seed-region atlas\n-================================================================================\n-\n-This example shows how to extract signals from spherical seed-regions based\n-on the Power-264 atlas (Power, 2011) and estimating a connectome using sparse\n-inverse covariance.\n-\n-Power, Jonathan D., et al. \"Functional network organization of the\n-human brain.\" Neuron 72.4 (2011): 665-678.\n-\n-\"\"\"\n-\n-import numpy as np\n-import matplotlib.pyplot as plt\n-from nilearn import datasets, connectome, plotting, input_data\n-\n-\n-###############################################################################\n-# Atlas and dataset fetching\n-\n-# Fetch the coordinates of power atlas\n-power = datasets.fetch_coords_power_2011()\n-power_coords = np.vstack((\n- power.rois['x'],\n- power.rois['y'],\n- power.rois['z'],\n-)).T\n-\n-# Fetch the first subject of ADHD dataset\n-adhd = datasets.fetch_adhd(n_subjects=1)\n-\n-\n-###############################################################################\n-# Masking: taking the signal in a sphere of radius 5mm around Power coords\n-\n-masker = input_data.NiftiSpheresMasker(seeds=power_coords,\n- smoothing_fwhm=4,\n- radius=5.,\n- standardize=True,\n- detrend=True,\n- low_pass=0.1,\n- high_pass=0.01,\n- t_r=2.5)\n-\n-timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])\n-\n-###############################################################################\n-# Extract and plot correlation matrix\n-\n-# calculate connectivity and plot Power-264 correlation matrix\n-connectivity = connectome.ConnectivityMeasure(kind='correlation')\n-corr_matrix = connectivity.fit_transform([timeseries])[0]\n-np.fill_diagonal(corr_matrix, 0)\n-plt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')\n-plt.colorbar()\n-plt.title('Power 264 Connectivity')\n-\n-# Plot the connectome\n-\n-plotting.plot_connectome(corr_matrix,\n- power_coords,\n- edge_threshold='99.8%',\n- node_size=20)\n-\n-\n-###############################################################################\n-# Extract and plot covariance and sparse covariance\n-\n-# Compute the sparse inverse covariance\n-from sklearn.covariance import GraphLassoCV\n-\n-estimator = GraphLassoCV()\n-estimator.fit(timeseries)\n-\n-# Display the covariance\n-plt.figure(figsize=(5, 5))\n-plt.imshow(estimator.covariance_, interpolation=\"nearest\",\n- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\n-plt.title('Covariance matrix')\n-\n-# display the corresponding graph\n-plotting.plot_connectome(estimator.covariance_,\n- power_coords,\n- title='Covariance connectome',\n- edge_threshold='99.8%',\n- node_size=20)\n-\n-# Display the sparse inverse covariance\n-plt.figure(figsize=(5, 5))\n-plt.imshow(estimator.precision_, interpolation=\"nearest\",\n- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\n-plt.title('Precision matrix')\n-\n-# And now display the corresponding graph\n-plotting.plot_connectome(estimator.precision_, power_coords,\n- title='Precision connectome',\n- edge_threshold=\"99.8%\",\n- node_size=20)\n-plotting.show()\n", "issue": "remove examples/03_connectivity/plot_power_connectome.py ?\n- Signal extraction from spheres placed on Power coordinates is already done in 
`examples/03_connectivity/plot_seed_based_connectome.py`\n- Sparse inverse covariance estimation is already explained in `examples/03_connectivity/plot_inverse_covariance_connectome.py` for MSDL atlas. For me, it doesn't really make a difference estimating it on timeseries extracted from probabilistic maps or spheric ROIs.\n\n", "before_files": [{"content": "\"\"\"\nExtracting signals and plotting a connectome for the Power-264 seed-region atlas\n================================================================================\n\nThis example shows how to extract signals from spherical seed-regions based\non the Power-264 atlas (Power, 2011) and estimating a connectome using sparse\ninverse covariance.\n\nPower, Jonathan D., et al. \"Functional network organization of the\nhuman brain.\" Neuron 72.4 (2011): 665-678.\n\n\"\"\"\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom nilearn import datasets, connectome, plotting, input_data\n\n\n###############################################################################\n# Atlas and dataset fetching\n\n# Fetch the coordinates of power atlas\npower = datasets.fetch_coords_power_2011()\npower_coords = np.vstack((\n power.rois['x'],\n power.rois['y'],\n power.rois['z'],\n)).T\n\n# Fetch the first subject of ADHD dataset\nadhd = datasets.fetch_adhd(n_subjects=1)\n\n\n###############################################################################\n# Masking: taking the signal in a sphere of radius 5mm around Power coords\n\nmasker = input_data.NiftiSpheresMasker(seeds=power_coords,\n smoothing_fwhm=4,\n radius=5.,\n standardize=True,\n detrend=True,\n low_pass=0.1,\n high_pass=0.01,\n t_r=2.5)\n\ntimeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])\n\n###############################################################################\n# Extract and plot correlation matrix\n\n# calculate connectivity and plot Power-264 correlation matrix\nconnectivity = connectome.ConnectivityMeasure(kind='correlation')\ncorr_matrix = connectivity.fit_transform([timeseries])[0]\nnp.fill_diagonal(corr_matrix, 0)\nplt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')\nplt.colorbar()\nplt.title('Power 264 Connectivity')\n\n# Plot the connectome\n\nplotting.plot_connectome(corr_matrix,\n power_coords,\n edge_threshold='99.8%',\n node_size=20)\n\n\n###############################################################################\n# Extract and plot covariance and sparse covariance\n\n# Compute the sparse inverse covariance\nfrom sklearn.covariance import GraphLassoCV\n\nestimator = GraphLassoCV()\nestimator.fit(timeseries)\n\n# Display the covariance\nplt.figure(figsize=(5, 5))\nplt.imshow(estimator.covariance_, interpolation=\"nearest\",\n vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\nplt.title('Covariance matrix')\n\n# display the corresponding graph\nplotting.plot_connectome(estimator.covariance_,\n power_coords,\n title='Covariance connectome',\n edge_threshold='99.8%',\n node_size=20)\n\n# Display the sparse inverse covariance\nplt.figure(figsize=(5, 5))\nplt.imshow(estimator.precision_, interpolation=\"nearest\",\n vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\nplt.title('Precision matrix')\n\n# And now display the corresponding graph\nplotting.plot_connectome(estimator.precision_, power_coords,\n title='Precision connectome',\n edge_threshold=\"99.8%\",\n node_size=20)\nplotting.show()\n", "path": "examples/03_connectivity/plot_power_connectome.py"}], "after_files": [{"content": null, "path": "examples/03_connectivity/plot_power_connectome.py"}]} | 
1,261 | 840 |
gh_patches_debug_4734 | rasdani/github-patches | git_diff | DDMAL__CantusDB-848 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Source.number_of_melodies is not updating properly
The `number_of_melodies` of a source should correspond to the number of chants in the source that contain a volpiano entry. When I check my test sources on the database that do not have any chants containing a volpiano, the `number_of_melodies` field matches the total number of chants. I suspect the `update_source_melody_count()` function in `signals.py` is not working as expected.
--- END ISSUE ---
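A plausible reading of the bug, sketched below as an illustration rather than a fix: if a chant with no melody stores an empty string in `volpiano` instead of NULL, then `volpiano__isnull=False` still matches it, and the melody count ends up equal to the chant count. The helper names and the bare `source.chant_set` usage are assumptions made for the sketch; only the filter logic is the point.
```python
# Illustrative sketch, not the project's code. Assumes Chant.volpiano is a text
# field that holds "" (empty string) rather than NULL when no melody was entered.

def melody_count_buggy(source):
    # Matches every chant whose volpiano is not NULL, including chants whose
    # volpiano is "", so the result can equal the total number of chants.
    return source.chant_set.filter(volpiano__isnull=False).count()

def melody_count_expected(source):
    # Counts only chants with a non-NULL, non-empty volpiano entry.
    return (
        source.chant_set.exclude(volpiano__isnull=True)
        .exclude(volpiano__exact="")
        .count()
    )
```
Chaining two `exclude()` calls keeps the intent explicit: both NULL and empty-string volpiano values are left out of the melody count.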
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/signals.py`
Content:
```
1 import operator
2 from functools import reduce
3
4 from django.contrib.postgres.search import SearchVector
5 from django.db import models
6 from django.db.models import Value
7 from django.db.models.signals import post_save, post_delete
8 from django.dispatch import receiver
9
10 import re
11
12 from main_app.models import Chant
13 from main_app.models import Sequence
14 from main_app.models import Feast
15 from main_app.models import Source
16
17
18 @receiver(post_save, sender=Chant)
19 def on_chant_save(instance, **kwargs):
20 update_source_chant_count(instance)
21 update_source_melody_count(instance)
22
23 update_chant_search_vector(instance)
24 update_volpiano_fields(instance)
25
26
27 @receiver(post_delete, sender=Chant)
28 def on_chant_delete(instance, **kwargs):
29 update_source_chant_count(instance)
30 update_source_melody_count(instance)
31
32
33 @receiver(post_save, sender=Sequence)
34 def on_sequence_save(instance, **kwargs):
35 update_source_chant_count(instance)
36
37
38 @receiver(post_delete, sender=Sequence)
39 def on_sequence_delete(instance, **kwargs):
40 update_source_chant_count(instance)
41
42
43 @receiver(post_save, sender=Feast)
44 def on_feast_save(instance, **kwargs):
45 update_prefix_field(instance)
46
47
48 def update_chant_search_vector(instance):
49 """When saving an instance of Chant, update its search vector field.
50
51 Called in on_chant_save()
52 """
53 index_components = instance.index_components()
54 pk = instance.pk
55 search_vectors = []
56
57 for weight, data in index_components.items():
58 search_vectors.append(
59 SearchVector(Value(data, output_field=models.TextField()), weight=weight)
60 )
61 instance.__class__.objects.filter(pk=pk).update(
62 search_vector=reduce(operator.add, search_vectors)
63 )
64
65
66 def update_source_chant_count(instance):
67 """When saving or deleting a Chant or Sequence, update its Source's number_of_chants field
68
69 Called in on_chant_save(), on_chant_delete(), on_sequence_save() and on_sequence_delete()
70 """
71
72 # When a source is deleted (which in turn calls on_chant_delete() on all of its chants) instance.source does not exist
73 try:
74 source = instance.source
75 except Source.DoesNotExist:
76 source = None
77 if source is not None:
78 source.number_of_chants = source.chant_set.count() + source.sequence_set.count()
79 source.save()
80
81
82 def update_source_melody_count(instance):
83 """When saving or deleting a Chant, update its Source's number_of_melodies field
84
85 Called in on_chant_save() and on_chant_delete()
86 """
87
88 # When a source is deleted (which in turn calls on_chant_delete() on all of its chants) instance.source does not exist
89 try:
90 source = instance.source
91 except Source.DoesNotExist:
92 source = None
93 if source is not None:
94 source.number_of_melodies = source.chant_set.filter(
95 volpiano__isnull=False
96 ).count()
97 source.save()
98
99
100 def update_volpiano_fields(instance):
101 """When saving a Chant, make sure the chant's volpiano_notes and volpiano_intervals are up-to-date
102
103 Called in on_chant_save()
104 """
105
106 def generate_volpiano_notes(volpiano):
107 """
108 Populate the ``volpiano_notes`` field of the ``Chant`` model
109
110 This field is used for melody search
111
112 Args:
113 volpiano (str): The content of ``chant.volpiano``
114
115 Returns:
116 str: Volpiano str with non-note chars and duplicate consecutive notes removed
117 """
118 # unwanted_chars are non-note chars, including the clefs, barlines, and accidentals etc.
119 # the `searchMelody.js` on old cantus makes no reference to the b-flat accidentals ("y", "i", "z")
120 # so put them in unwanted chars for now
121 unwanted_chars = [
122 "-",
123 "1",
124 "2",
125 "3",
126 "4",
127 "5",
128 "6",
129 "7",
130 "?",
131 ".",
132 " ",
133 "y",
134 "i",
135 "z",
136 ]
137 # convert all charactors to lower-case, upper-case letters stand for liquescent of the same pitch
138 volpiano_lower = volpiano.lower()
139 # `)` stands for the lowest `g` note liquescent in volpiano, its 'lower case' is `9`
140 volpiano_notes = volpiano_lower.replace(")", "9")
141 # remove none-note charactors
142 for unwanted_char in unwanted_chars:
143 volpiano_notes = volpiano_notes.replace(unwanted_char, "")
144 # remove duplicate consecutive chars
145 volpiano_notes = re.sub(r"(.)\1+", r"\1", volpiano_notes)
146 return volpiano_notes
147
148 def generate_volpiano_intervals(volpiano_notes):
149 """
150 Populate the ``volpiano_intervals`` field of the ``Chant`` model
151
152 This field is used for melody search when searching for transpositions
153
154 Args:
155 volpiano_notes (str): The content of ``chant.volpiano_notes``,
156 populated by the ``generate_volpiano_notes`` function
157
158 Returns:
159 str: A str of digits, recording the intervals between adjacent notes
160 """
161 # replace '9' (the note G) with the char corresponding to (ASCII(a) - 1), because 'a' denotes the note A
162 volpiano_notes = volpiano_notes.replace("9", chr(ord("a") - 1))
163 # we model the interval between notes using the difference between the ASCII codes of corresponding letters
164 # the letter for the note B is "j" (106), note A is "h" (104), the letter "i" (105) is skipped
165 # move all notes above A down by one letter
166 volpiano_notes = list(volpiano_notes)
167 for j, note in enumerate(volpiano_notes):
168 if ord(note) >= 106:
169 volpiano_notes[j] = chr(ord(note) - 1)
170
171 # `intervals` records the difference between two adjacent notes.
172 # Note that intervals are encoded by counting the number of scale
173 # steps between adjacent notes: an ascending second is thus encoded
174 # as "1"; a descending third is encoded "-2", and so on.
175 intervals = []
176 for j in range(1, len(volpiano_notes)):
177 intervals.append(ord(volpiano_notes[j]) - ord(volpiano_notes[j - 1]))
178 # convert `intervals` to str
179 volpiano_intervals = "".join([str(interval) for interval in intervals])
180 return volpiano_intervals
181
182 if instance.volpiano is None:
183 return
184
185 volpiano_notes = generate_volpiano_notes(instance.volpiano)
186 volpiano_intervals = generate_volpiano_intervals(volpiano_notes)
187
188 Chant.objects.filter(id=instance.id).update(
189 volpiano_notes=volpiano_notes,
190 volpiano_intervals=volpiano_intervals,
191 )
192
193
194 def update_prefix_field(instance):
195 pk = instance.pk
196
197 if instance.feast_code:
198 prefix = str(instance.feast_code)[0:2]
199 instance.__class__.objects.filter(pk=pk).update(prefix=prefix)
200 else: # feast_code is None, ""
201 instance.__class__.objects.filter(pk=pk).update(prefix="")
202
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/signals.py b/django/cantusdb_project/main_app/signals.py
--- a/django/cantusdb_project/main_app/signals.py
+++ b/django/cantusdb_project/main_app/signals.py
@@ -91,9 +91,11 @@
except Source.DoesNotExist:
source = None
if source is not None:
- source.number_of_melodies = source.chant_set.filter(
- volpiano__isnull=False
- ).count()
+ source.number_of_melodies = (
+ source.chant_set.exclude(volpiano__isnull=True)
+ .exclude(volpiano__exact="")
+ .count()
+ )
source.save()
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/signals.py b/django/cantusdb_project/main_app/signals.py\n--- a/django/cantusdb_project/main_app/signals.py\n+++ b/django/cantusdb_project/main_app/signals.py\n@@ -91,9 +91,11 @@\n except Source.DoesNotExist:\n source = None\n if source is not None:\n- source.number_of_melodies = source.chant_set.filter(\n- volpiano__isnull=False\n- ).count()\n+ source.number_of_melodies = (\n+ source.chant_set.exclude(volpiano__isnull=True)\n+ .exclude(volpiano__exact=\"\")\n+ .count()\n+ )\n source.save()\n", "issue": "Source.number_of_melodies is not updating properly\nThe `number_of_melodies` of a source should correspond to the number of chants in the source that contain a volpiano entry. When I check my test sources on the database that do not have any chants that contain a volpaino, the `number_of_melodies` field matches the total number of chants. I suspect the `update_source_melody_count()` function in `signals.py` is not working as expected.\n", "before_files": [{"content": "import operator\nfrom functools import reduce\n\nfrom django.contrib.postgres.search import SearchVector\nfrom django.db import models\nfrom django.db.models import Value\nfrom django.db.models.signals import post_save, post_delete\nfrom django.dispatch import receiver\n\nimport re\n\nfrom main_app.models import Chant\nfrom main_app.models import Sequence\nfrom main_app.models import Feast\nfrom main_app.models import Source\n\n\n@receiver(post_save, sender=Chant)\ndef on_chant_save(instance, **kwargs):\n update_source_chant_count(instance)\n update_source_melody_count(instance)\n\n update_chant_search_vector(instance)\n update_volpiano_fields(instance)\n\n\n@receiver(post_delete, sender=Chant)\ndef on_chant_delete(instance, **kwargs):\n update_source_chant_count(instance)\n update_source_melody_count(instance)\n\n\n@receiver(post_save, sender=Sequence)\ndef on_sequence_save(instance, **kwargs):\n update_source_chant_count(instance)\n\n\n@receiver(post_delete, sender=Sequence)\ndef on_sequence_delete(instance, **kwargs):\n update_source_chant_count(instance)\n\n\n@receiver(post_save, sender=Feast)\ndef on_feast_save(instance, **kwargs):\n update_prefix_field(instance)\n\n\ndef update_chant_search_vector(instance):\n \"\"\"When saving an instance of Chant, update its search vector field.\n\n Called in on_chant_save()\n \"\"\"\n index_components = instance.index_components()\n pk = instance.pk\n search_vectors = []\n\n for weight, data in index_components.items():\n search_vectors.append(\n SearchVector(Value(data, output_field=models.TextField()), weight=weight)\n )\n instance.__class__.objects.filter(pk=pk).update(\n search_vector=reduce(operator.add, search_vectors)\n )\n\n\ndef update_source_chant_count(instance):\n \"\"\"When saving or deleting a Chant or Sequence, update its Source's number_of_chants field\n\n Called in on_chant_save(), on_chant_delete(), on_sequence_save() and on_sequence_delete()\n \"\"\"\n\n # When a source is deleted (which in turn calls on_chant_delete() on all of its chants) instance.source does not exist\n try:\n source = instance.source\n except Source.DoesNotExist:\n source = None\n if source is not None:\n source.number_of_chants = source.chant_set.count() + source.sequence_set.count()\n source.save()\n\n\ndef update_source_melody_count(instance):\n \"\"\"When saving or deleting a Chant, update its Source's number_of_melodies field\n\n Called in on_chant_save() and on_chant_delete()\n \"\"\"\n\n # When a source is deleted (which in 
turn calls on_chant_delete() on all of its chants) instance.source does not exist\n try:\n source = instance.source\n except Source.DoesNotExist:\n source = None\n if source is not None:\n source.number_of_melodies = source.chant_set.filter(\n volpiano__isnull=False\n ).count()\n source.save()\n\n\ndef update_volpiano_fields(instance):\n \"\"\"When saving a Chant, make sure the chant's volpiano_notes and volpiano_intervals are up-to-date\n\n Called in on_chant_save()\n \"\"\"\n\n def generate_volpiano_notes(volpiano):\n \"\"\"\n Populate the ``volpiano_notes`` field of the ``Chant`` model\n\n This field is used for melody search\n\n Args:\n volpiano (str): The content of ``chant.volpiano``\n\n Returns:\n str: Volpiano str with non-note chars and duplicate consecutive notes removed\n \"\"\"\n # unwanted_chars are non-note chars, including the clefs, barlines, and accidentals etc.\n # the `searchMelody.js` on old cantus makes no reference to the b-flat accidentals (\"y\", \"i\", \"z\")\n # so put them in unwanted chars for now\n unwanted_chars = [\n \"-\",\n \"1\",\n \"2\",\n \"3\",\n \"4\",\n \"5\",\n \"6\",\n \"7\",\n \"?\",\n \".\",\n \" \",\n \"y\",\n \"i\",\n \"z\",\n ]\n # convert all charactors to lower-case, upper-case letters stand for liquescent of the same pitch\n volpiano_lower = volpiano.lower()\n # `)` stands for the lowest `g` note liquescent in volpiano, its 'lower case' is `9`\n volpiano_notes = volpiano_lower.replace(\")\", \"9\")\n # remove none-note charactors\n for unwanted_char in unwanted_chars:\n volpiano_notes = volpiano_notes.replace(unwanted_char, \"\")\n # remove duplicate consecutive chars\n volpiano_notes = re.sub(r\"(.)\\1+\", r\"\\1\", volpiano_notes)\n return volpiano_notes\n\n def generate_volpiano_intervals(volpiano_notes):\n \"\"\"\n Populate the ``volpiano_intervals`` field of the ``Chant`` model\n\n This field is used for melody search when searching for transpositions\n\n Args:\n volpiano_notes (str): The content of ``chant.volpiano_notes``,\n populated by the ``generate_volpiano_notes`` function\n\n Returns:\n str: A str of digits, recording the intervals between adjacent notes\n \"\"\"\n # replace '9' (the note G) with the char corresponding to (ASCII(a) - 1), because 'a' denotes the note A\n volpiano_notes = volpiano_notes.replace(\"9\", chr(ord(\"a\") - 1))\n # we model the interval between notes using the difference between the ASCII codes of corresponding letters\n # the letter for the note B is \"j\" (106), note A is \"h\" (104), the letter \"i\" (105) is skipped\n # move all notes above A down by one letter\n volpiano_notes = list(volpiano_notes)\n for j, note in enumerate(volpiano_notes):\n if ord(note) >= 106:\n volpiano_notes[j] = chr(ord(note) - 1)\n\n # `intervals` records the difference between two adjacent notes.\n # Note that intervals are encoded by counting the number of scale\n # steps between adjacent notes: an ascending second is thus encoded\n # as \"1\"; a descending third is encoded \"-2\", and so on.\n intervals = []\n for j in range(1, len(volpiano_notes)):\n intervals.append(ord(volpiano_notes[j]) - ord(volpiano_notes[j - 1]))\n # convert `intervals` to str\n volpiano_intervals = \"\".join([str(interval) for interval in intervals])\n return volpiano_intervals\n\n if instance.volpiano is None:\n return\n\n volpiano_notes = generate_volpiano_notes(instance.volpiano)\n volpiano_intervals = generate_volpiano_intervals(volpiano_notes)\n\n Chant.objects.filter(id=instance.id).update(\n volpiano_notes=volpiano_notes,\n 
volpiano_intervals=volpiano_intervals,\n )\n\n\ndef update_prefix_field(instance):\n pk = instance.pk\n\n if instance.feast_code:\n prefix = str(instance.feast_code)[0:2]\n instance.__class__.objects.filter(pk=pk).update(prefix=prefix)\n else: # feast_code is None, \"\"\n instance.__class__.objects.filter(pk=pk).update(prefix=\"\")\n", "path": "django/cantusdb_project/main_app/signals.py"}], "after_files": [{"content": "import operator\nfrom functools import reduce\n\nfrom django.contrib.postgres.search import SearchVector\nfrom django.db import models\nfrom django.db.models import Value\nfrom django.db.models.signals import post_save, post_delete\nfrom django.dispatch import receiver\n\nimport re\n\nfrom main_app.models import Chant\nfrom main_app.models import Sequence\nfrom main_app.models import Feast\nfrom main_app.models import Source\n\n\n@receiver(post_save, sender=Chant)\ndef on_chant_save(instance, **kwargs):\n update_source_chant_count(instance)\n update_source_melody_count(instance)\n\n update_chant_search_vector(instance)\n update_volpiano_fields(instance)\n\n\n@receiver(post_delete, sender=Chant)\ndef on_chant_delete(instance, **kwargs):\n update_source_chant_count(instance)\n update_source_melody_count(instance)\n\n\n@receiver(post_save, sender=Sequence)\ndef on_sequence_save(instance, **kwargs):\n update_source_chant_count(instance)\n\n\n@receiver(post_delete, sender=Sequence)\ndef on_sequence_delete(instance, **kwargs):\n update_source_chant_count(instance)\n\n\n@receiver(post_save, sender=Feast)\ndef on_feast_save(instance, **kwargs):\n update_prefix_field(instance)\n\n\ndef update_chant_search_vector(instance):\n \"\"\"When saving an instance of Chant, update its search vector field.\n\n Called in on_chant_save()\n \"\"\"\n index_components = instance.index_components()\n pk = instance.pk\n search_vectors = []\n\n for weight, data in index_components.items():\n search_vectors.append(\n SearchVector(Value(data, output_field=models.TextField()), weight=weight)\n )\n instance.__class__.objects.filter(pk=pk).update(\n search_vector=reduce(operator.add, search_vectors)\n )\n\n\ndef update_source_chant_count(instance):\n \"\"\"When saving or deleting a Chant or Sequence, update its Source's number_of_chants field\n\n Called in on_chant_save(), on_chant_delete(), on_sequence_save() and on_sequence_delete()\n \"\"\"\n\n # When a source is deleted (which in turn calls on_chant_delete() on all of its chants) instance.source does not exist\n try:\n source = instance.source\n except Source.DoesNotExist:\n source = None\n if source is not None:\n source.number_of_chants = source.chant_set.count() + source.sequence_set.count()\n source.save()\n\n\ndef update_source_melody_count(instance):\n \"\"\"When saving or deleting a Chant, update its Source's number_of_melodies field\n\n Called in on_chant_save() and on_chant_delete()\n \"\"\"\n\n # When a source is deleted (which in turn calls on_chant_delete() on all of its chants) instance.source does not exist\n try:\n source = instance.source\n except Source.DoesNotExist:\n source = None\n if source is not None:\n source.number_of_melodies = (\n source.chant_set.exclude(volpiano__isnull=True)\n .exclude(volpiano__exact=\"\")\n .count()\n )\n source.save()\n\n\ndef update_volpiano_fields(instance):\n \"\"\"When saving a Chant, make sure the chant's volpiano_notes and volpiano_intervals are up-to-date\n\n Called in on_chant_save()\n \"\"\"\n\n def generate_volpiano_notes(volpiano):\n \"\"\"\n Populate the ``volpiano_notes`` field of the 
``Chant`` model\n\n This field is used for melody search\n\n Args:\n volpiano (str): The content of ``chant.volpiano``\n\n Returns:\n str: Volpiano str with non-note chars and duplicate consecutive notes removed\n \"\"\"\n # unwanted_chars are non-note chars, including the clefs, barlines, and accidentals etc.\n # the `searchMelody.js` on old cantus makes no reference to the b-flat accidentals (\"y\", \"i\", \"z\")\n # so put them in unwanted chars for now\n unwanted_chars = [\n \"-\",\n \"1\",\n \"2\",\n \"3\",\n \"4\",\n \"5\",\n \"6\",\n \"7\",\n \"?\",\n \".\",\n \" \",\n \"y\",\n \"i\",\n \"z\",\n ]\n # convert all charactors to lower-case, upper-case letters stand for liquescent of the same pitch\n volpiano_lower = volpiano.lower()\n # `)` stands for the lowest `g` note liquescent in volpiano, its 'lower case' is `9`\n volpiano_notes = volpiano_lower.replace(\")\", \"9\")\n # remove none-note charactors\n for unwanted_char in unwanted_chars:\n volpiano_notes = volpiano_notes.replace(unwanted_char, \"\")\n # remove duplicate consecutive chars\n volpiano_notes = re.sub(r\"(.)\\1+\", r\"\\1\", volpiano_notes)\n return volpiano_notes\n\n def generate_volpiano_intervals(volpiano_notes):\n \"\"\"\n Populate the ``volpiano_intervals`` field of the ``Chant`` model\n\n This field is used for melody search when searching for transpositions\n\n Args:\n volpiano_notes (str): The content of ``chant.volpiano_notes``,\n populated by the ``generate_volpiano_notes`` function\n\n Returns:\n str: A str of digits, recording the intervals between adjacent notes\n \"\"\"\n # replace '9' (the note G) with the char corresponding to (ASCII(a) - 1), because 'a' denotes the note A\n volpiano_notes = volpiano_notes.replace(\"9\", chr(ord(\"a\") - 1))\n # we model the interval between notes using the difference between the ASCII codes of corresponding letters\n # the letter for the note B is \"j\" (106), note A is \"h\" (104), the letter \"i\" (105) is skipped\n # move all notes above A down by one letter\n volpiano_notes = list(volpiano_notes)\n for j, note in enumerate(volpiano_notes):\n if ord(note) >= 106:\n volpiano_notes[j] = chr(ord(note) - 1)\n\n # `intervals` records the difference between two adjacent notes.\n # Note that intervals are encoded by counting the number of scale\n # steps between adjacent notes: an ascending second is thus encoded\n # as \"1\"; a descending third is encoded \"-2\", and so on.\n intervals = []\n for j in range(1, len(volpiano_notes)):\n intervals.append(ord(volpiano_notes[j]) - ord(volpiano_notes[j - 1]))\n # convert `intervals` to str\n volpiano_intervals = \"\".join([str(interval) for interval in intervals])\n return volpiano_intervals\n\n if instance.volpiano is None:\n return\n\n volpiano_notes = generate_volpiano_notes(instance.volpiano)\n volpiano_intervals = generate_volpiano_intervals(volpiano_notes)\n\n Chant.objects.filter(id=instance.id).update(\n volpiano_notes=volpiano_notes,\n volpiano_intervals=volpiano_intervals,\n )\n\n\ndef update_prefix_field(instance):\n pk = instance.pk\n\n if instance.feast_code:\n prefix = str(instance.feast_code)[0:2]\n instance.__class__.objects.filter(pk=pk).update(prefix=prefix)\n else: # feast_code is None, \"\"\n instance.__class__.objects.filter(pk=pk).update(prefix=\"\")\n", "path": "django/cantusdb_project/main_app/signals.py"}]} | 2,492 | 167 |
gh_patches_debug_30188 | rasdani/github-patches | git_diff | internetarchive__openlibrary-8966 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support different seeds for random.hourly sort
These carousels are all sorted by random.hourly, but we want them to have a different random subset!

### Proposal & Constraints
Expand `random.hourly` sorting to support a custom seed like `random`
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
### Stakeholders
@RayBB
--- END ISSUE ---
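One way to read the proposal, sketched here under stated assumptions rather than as a final implementation: accept sorts of the form `random.hourly_<seed>`, resolve the base `random.hourly` entry to its solr sort as usual, then append the seed to the solr field name so two carousels sharing books within the same hour still get different orderings. The `resolve_random_sort` helper and the literal `random_105124` value are hypothetical; in the scheme itself the `sorts` mapping resolves through callables.
```python
# Hypothetical sketch of seed-suffixed time-based random sorts. The dict below
# stands in for the scheme's `sorts` mapping, whose hourly entry would normally
# be a callable deriving the numeric part of the field name from the current hour.

def resolve_random_sort(user_sort: str, sorts: dict[str, str]) -> str:
    sort, _, order = user_sort.partition(" ")   # optional explicit "asc"/"desc"
    base, _, seed = sort.partition("_")         # assumes a seed suffix is present
    solr_field, default_order = sorts[base].split(" ", 1)
    return f"{solr_field}_{seed} {order or default_order}"

# resolve_random_sort("random.hourly_trending", {"random.hourly": "random_105124 asc"})
# returns "random_105124_trending asc"
```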
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openlibrary/plugins/worksearch/schemes/__init__.py`
Content:
```
1 import logging
2 from collections.abc import Callable
3
4 import luqum.tree
5 from luqum.exceptions import ParseError
6 from openlibrary.solr.query_utils import (
7 escape_unknown_fields,
8 fully_escape_query,
9 luqum_parser,
10 )
11
12 logger = logging.getLogger("openlibrary.worksearch")
13
14
15 class SearchScheme:
16 # Set of queries that define the universe of this scheme
17 universe: list[str]
18 # All actual solr fields that can be in a user query
19 all_fields: set[str]
20 # These fields are fetched for facets and can also be url params
21 facet_fields: set[str]
22 # Mapping of user-only fields to solr fields
23 field_name_map: dict[str, str]
24 # Mapping of user sort to solr sort
25 sorts: dict[str, str | Callable[[], str]]
26 # Default
27 default_fetched_fields: set[str]
28 # Fields that should be rewritten
29 facet_rewrites: dict[tuple[str, str], str | Callable[[], str]]
30
31 def is_search_field(self, field: str):
32 return field in self.all_fields or field in self.field_name_map
33
34 def process_user_sort(self, user_sort: str) -> str:
35 """
36 Convert a user-provided sort to a solr sort
37
38 >>> from openlibrary.plugins.worksearch.schemes.works import WorkSearchScheme
39 >>> scheme = WorkSearchScheme()
40 >>> scheme.process_user_sort('editions')
41 'edition_count desc'
42 >>> scheme.process_user_sort('editions, new')
43 'edition_count desc,first_publish_year desc'
44 >>> scheme.process_user_sort('random')
45 'random_1 asc'
46 >>> scheme.process_user_sort('random_custom_seed')
47 'random_custom_seed asc'
48 >>> scheme.process_user_sort('random_custom_seed desc')
49 'random_custom_seed desc'
50 >>> scheme.process_user_sort('random_custom_seed asc')
51 'random_custom_seed asc'
52 """
53
54 def process_individual_sort(sort: str):
55 if sort.startswith('random_'):
56 # Allow custom randoms; so anything random_* is allowed
57 return sort if ' ' in sort else f'{sort} asc'
58 else:
59 solr_sort = self.sorts[sort]
60 return solr_sort() if callable(solr_sort) else solr_sort
61
62 return ','.join(
63 process_individual_sort(s.strip()) for s in user_sort.split(',')
64 )
65
66 def process_user_query(self, q_param: str) -> str:
67 if q_param == '*:*':
68 # This is a special solr syntax; don't process
69 return q_param
70
71 try:
72 q_param = escape_unknown_fields(
73 (
74 # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now,
75 # let's not expose that and escape all '/'. Otherwise
76 # `key:/works/OL1W` is interpreted as a regex.
77 q_param.strip()
78 .replace('/', '\\/')
79 # Also escape unexposed lucene features
80 .replace('?', '\\?')
81 .replace('~', '\\~')
82 ),
83 self.is_search_field,
84 lower=True,
85 )
86 q_tree = luqum_parser(q_param)
87 except ParseError:
88 # This isn't a syntactically valid lucene query
89 logger.warning("Invalid lucene query", exc_info=True)
90 # Escape everything we can
91 q_tree = luqum_parser(fully_escape_query(q_param))
92
93 q_tree = self.transform_user_query(q_param, q_tree)
94 return str(q_tree)
95
96 def transform_user_query(
97 self,
98 user_query: str,
99 q_tree: luqum.tree.Item,
100 ) -> luqum.tree.Item:
101 return q_tree
102
103 def build_q_from_params(self, params: dict) -> str | None:
104 return None
105
106 def q_to_solr_params(
107 self,
108 q: str,
109 solr_fields: set[str],
110 cur_solr_params: list[tuple[str, str]],
111 ) -> list[tuple[str, str]]:
112 return [('q', q)]
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openlibrary/plugins/worksearch/schemes/__init__.py b/openlibrary/plugins/worksearch/schemes/__init__.py
--- a/openlibrary/plugins/worksearch/schemes/__init__.py
+++ b/openlibrary/plugins/worksearch/schemes/__init__.py
@@ -44,17 +44,27 @@
>>> scheme.process_user_sort('random')
'random_1 asc'
>>> scheme.process_user_sort('random_custom_seed')
- 'random_custom_seed asc'
+ 'random_1_custom_seed asc'
>>> scheme.process_user_sort('random_custom_seed desc')
- 'random_custom_seed desc'
+ 'random_1_custom_seed desc'
>>> scheme.process_user_sort('random_custom_seed asc')
- 'random_custom_seed asc'
+ 'random_1_custom_seed asc'
"""
- def process_individual_sort(sort: str):
- if sort.startswith('random_'):
+ def process_individual_sort(sort: str) -> str:
+ if sort.startswith(('random_', 'random.hourly_', 'random.daily_')):
# Allow custom randoms; so anything random_* is allowed
- return sort if ' ' in sort else f'{sort} asc'
+ # Also Allow custom time randoms to allow carousels with overlapping
+ # books to have a fresh ordering when on the same collection
+ sort_order: str | None = None
+ if ' ' in sort:
+ sort, sort_order = sort.split(' ', 1)
+ random_type, random_seed = sort.split('_', 1)
+ solr_sort = self.sorts[random_type]
+ solr_sort_str = solr_sort() if callable(solr_sort) else solr_sort
+ solr_sort_field, solr_sort_order = solr_sort_str.split(' ', 1)
+ sort_order = sort_order or solr_sort_order
+ return f'{solr_sort_field}_{random_seed} {sort_order}'
else:
solr_sort = self.sorts[sort]
return solr_sort() if callable(solr_sort) else solr_sort
| {"golden_diff": "diff --git a/openlibrary/plugins/worksearch/schemes/__init__.py b/openlibrary/plugins/worksearch/schemes/__init__.py\n--- a/openlibrary/plugins/worksearch/schemes/__init__.py\n+++ b/openlibrary/plugins/worksearch/schemes/__init__.py\n@@ -44,17 +44,27 @@\n >>> scheme.process_user_sort('random')\n 'random_1 asc'\n >>> scheme.process_user_sort('random_custom_seed')\n- 'random_custom_seed asc'\n+ 'random_1_custom_seed asc'\n >>> scheme.process_user_sort('random_custom_seed desc')\n- 'random_custom_seed desc'\n+ 'random_1_custom_seed desc'\n >>> scheme.process_user_sort('random_custom_seed asc')\n- 'random_custom_seed asc'\n+ 'random_1_custom_seed asc'\n \"\"\"\n \n- def process_individual_sort(sort: str):\n- if sort.startswith('random_'):\n+ def process_individual_sort(sort: str) -> str:\n+ if sort.startswith(('random_', 'random.hourly_', 'random.daily_')):\n # Allow custom randoms; so anything random_* is allowed\n- return sort if ' ' in sort else f'{sort} asc'\n+ # Also Allow custom time randoms to allow carousels with overlapping\n+ # books to have a fresh ordering when on the same collection\n+ sort_order: str | None = None\n+ if ' ' in sort:\n+ sort, sort_order = sort.split(' ', 1)\n+ random_type, random_seed = sort.split('_', 1)\n+ solr_sort = self.sorts[random_type]\n+ solr_sort_str = solr_sort() if callable(solr_sort) else solr_sort\n+ solr_sort_field, solr_sort_order = solr_sort_str.split(' ', 1)\n+ sort_order = sort_order or solr_sort_order\n+ return f'{solr_sort_field}_{random_seed} {sort_order}'\n else:\n solr_sort = self.sorts[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n", "issue": "Support different seeds for random.hourly sort\nThese carousels are all sorted by random.hourly, but we want them to have a different random subset!\r\n\r\n\r\n\r\n\r\n### Proposal & Constraints\r\nExpand `random.hourly` sorting to support a custom seed like `random`\r\n\r\n### Additional context\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\n\r\n### Stakeholders\r\n@RayBB \n", "before_files": [{"content": "import logging\nfrom collections.abc import Callable\n\nimport luqum.tree\nfrom luqum.exceptions import ParseError\nfrom openlibrary.solr.query_utils import (\n escape_unknown_fields,\n fully_escape_query,\n luqum_parser,\n)\n\nlogger = logging.getLogger(\"openlibrary.worksearch\")\n\n\nclass SearchScheme:\n # Set of queries that define the universe of this scheme\n universe: list[str]\n # All actual solr fields that can be in a user query\n all_fields: set[str]\n # These fields are fetched for facets and can also be url params\n facet_fields: set[str]\n # Mapping of user-only fields to solr fields\n field_name_map: dict[str, str]\n # Mapping of user sort to solr sort\n sorts: dict[str, str | Callable[[], str]]\n # Default\n default_fetched_fields: set[str]\n # Fields that should be rewritten\n facet_rewrites: dict[tuple[str, str], str | Callable[[], str]]\n\n def is_search_field(self, field: str):\n return field in self.all_fields or field in self.field_name_map\n\n def process_user_sort(self, user_sort: str) -> str:\n \"\"\"\n Convert a user-provided sort to a solr sort\n\n >>> from openlibrary.plugins.worksearch.schemes.works import WorkSearchScheme\n >>> scheme = WorkSearchScheme()\n >>> scheme.process_user_sort('editions')\n 'edition_count desc'\n >>> scheme.process_user_sort('editions, new')\n 'edition_count desc,first_publish_year desc'\n >>> scheme.process_user_sort('random')\n 'random_1 asc'\n >>> scheme.process_user_sort('random_custom_seed')\n 'random_custom_seed asc'\n >>> scheme.process_user_sort('random_custom_seed desc')\n 'random_custom_seed desc'\n >>> scheme.process_user_sort('random_custom_seed asc')\n 'random_custom_seed asc'\n \"\"\"\n\n def process_individual_sort(sort: str):\n if sort.startswith('random_'):\n # Allow custom randoms; so anything random_* is allowed\n return sort if ' ' in sort else f'{sort} asc'\n else:\n solr_sort = self.sorts[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n\n return ','.join(\n process_individual_sort(s.strip()) for s in user_sort.split(',')\n )\n\n def process_user_query(self, q_param: str) -> str:\n if q_param == '*:*':\n # This is a special solr syntax; don't process\n return q_param\n\n try:\n q_param = escape_unknown_fields(\n (\n # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now,\n # let's not expose that and escape all '/'. 
Otherwise\n # `key:/works/OL1W` is interpreted as a regex.\n q_param.strip()\n .replace('/', '\\\\/')\n # Also escape unexposed lucene features\n .replace('?', '\\\\?')\n .replace('~', '\\\\~')\n ),\n self.is_search_field,\n lower=True,\n )\n q_tree = luqum_parser(q_param)\n except ParseError:\n # This isn't a syntactically valid lucene query\n logger.warning(\"Invalid lucene query\", exc_info=True)\n # Escape everything we can\n q_tree = luqum_parser(fully_escape_query(q_param))\n\n q_tree = self.transform_user_query(q_param, q_tree)\n return str(q_tree)\n\n def transform_user_query(\n self,\n user_query: str,\n q_tree: luqum.tree.Item,\n ) -> luqum.tree.Item:\n return q_tree\n\n def build_q_from_params(self, params: dict) -> str | None:\n return None\n\n def q_to_solr_params(\n self,\n q: str,\n solr_fields: set[str],\n cur_solr_params: list[tuple[str, str]],\n ) -> list[tuple[str, str]]:\n return [('q', q)]\n", "path": "openlibrary/plugins/worksearch/schemes/__init__.py"}], "after_files": [{"content": "import logging\nfrom collections.abc import Callable\n\nimport luqum.tree\nfrom luqum.exceptions import ParseError\nfrom openlibrary.solr.query_utils import (\n escape_unknown_fields,\n fully_escape_query,\n luqum_parser,\n)\n\nlogger = logging.getLogger(\"openlibrary.worksearch\")\n\n\nclass SearchScheme:\n # Set of queries that define the universe of this scheme\n universe: list[str]\n # All actual solr fields that can be in a user query\n all_fields: set[str]\n # These fields are fetched for facets and can also be url params\n facet_fields: set[str]\n # Mapping of user-only fields to solr fields\n field_name_map: dict[str, str]\n # Mapping of user sort to solr sort\n sorts: dict[str, str | Callable[[], str]]\n # Default\n default_fetched_fields: set[str]\n # Fields that should be rewritten\n facet_rewrites: dict[tuple[str, str], str | Callable[[], str]]\n\n def is_search_field(self, field: str):\n return field in self.all_fields or field in self.field_name_map\n\n def process_user_sort(self, user_sort: str) -> str:\n \"\"\"\n Convert a user-provided sort to a solr sort\n\n >>> from openlibrary.plugins.worksearch.schemes.works import WorkSearchScheme\n >>> scheme = WorkSearchScheme()\n >>> scheme.process_user_sort('editions')\n 'edition_count desc'\n >>> scheme.process_user_sort('editions, new')\n 'edition_count desc,first_publish_year desc'\n >>> scheme.process_user_sort('random')\n 'random_1 asc'\n >>> scheme.process_user_sort('random_custom_seed')\n 'random_1_custom_seed asc'\n >>> scheme.process_user_sort('random_custom_seed desc')\n 'random_1_custom_seed desc'\n >>> scheme.process_user_sort('random_custom_seed asc')\n 'random_1_custom_seed asc'\n \"\"\"\n\n def process_individual_sort(sort: str) -> str:\n if sort.startswith(('random_', 'random.hourly_', 'random.daily_')):\n # Allow custom randoms; so anything random_* is allowed\n # Also Allow custom time randoms to allow carousels with overlapping\n # books to have a fresh ordering when on the same collection\n sort_order: str | None = None\n if ' ' in sort:\n sort, sort_order = sort.split(' ', 1)\n random_type, random_seed = sort.split('_', 1)\n solr_sort = self.sorts[random_type]\n solr_sort_str = solr_sort() if callable(solr_sort) else solr_sort\n solr_sort_field, solr_sort_order = solr_sort_str.split(' ', 1)\n sort_order = sort_order or solr_sort_order\n return f'{solr_sort_field}_{random_seed} {sort_order}'\n else:\n solr_sort = self.sorts[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n\n return ','.join(\n 
process_individual_sort(s.strip()) for s in user_sort.split(',')\n )\n\n def process_user_query(self, q_param: str) -> str:\n if q_param == '*:*':\n # This is a special solr syntax; don't process\n return q_param\n\n try:\n q_param = escape_unknown_fields(\n (\n # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now,\n # let's not expose that and escape all '/'. Otherwise\n # `key:/works/OL1W` is interpreted as a regex.\n q_param.strip()\n .replace('/', '\\\\/')\n # Also escape unexposed lucene features\n .replace('?', '\\\\?')\n .replace('~', '\\\\~')\n ),\n self.is_search_field,\n lower=True,\n )\n q_tree = luqum_parser(q_param)\n except ParseError:\n # This isn't a syntactically valid lucene query\n logger.warning(\"Invalid lucene query\", exc_info=True)\n # Escape everything we can\n q_tree = luqum_parser(fully_escape_query(q_param))\n\n q_tree = self.transform_user_query(q_param, q_tree)\n return str(q_tree)\n\n def transform_user_query(\n self,\n user_query: str,\n q_tree: luqum.tree.Item,\n ) -> luqum.tree.Item:\n return q_tree\n\n def build_q_from_params(self, params: dict) -> str | None:\n return None\n\n def q_to_solr_params(\n self,\n q: str,\n solr_fields: set[str],\n cur_solr_params: list[tuple[str, str]],\n ) -> list[tuple[str, str]]:\n return [('q', q)]\n", "path": "openlibrary/plugins/worksearch/schemes/__init__.py"}]} | 1,515 | 454 |
gh_patches_debug_31570 | rasdani/github-patches | git_diff | bridgecrewio__checkov-854 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature Request: another flag to display failed checks
**Is your feature request related to a problem? Please describe.**
It seems that -o github_failed_only only returns failed checks, but as plain text. If I use -o json, then I get all checks (failed and successful).
**Describe the solution you'd like**
Apart from the json and github_failed_only output parameters, it might be good to have another flag that displays only the failed checks. It could be combined with the json output, something like the following to see failed checks in json format:
```
$ checkov --display-failed-checks -o json -d .
```
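
Until such a flag exists, a rough post-processing workaround is possible. The sketch below is an editor's illustration, assuming checkov is on PATH; it trims the existing -o json report (a single-framework run prints one report dict, multi-framework runs print a list) down to its failed_checks:

```python
import json
import subprocess

# Run checkov and capture the existing full JSON report.
raw = subprocess.run(
    ["checkov", "-o", "json", "-d", "."],
    capture_output=True,
    text=True,
).stdout
data = json.loads(raw)

# Normalise: one dict for a single framework, a list for several frameworks.
reports = data if isinstance(data, list) else [data]
failed_only = {r["check_type"]: r["results"]["failed_checks"] for r in reports}
print(json.dumps(failed_only, indent=4))
```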
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/common/output/report.py`
Content:
```
1 import json
2 from collections import defaultdict
3
4 from colorama import init
5 from junit_xml import TestCase, TestSuite
6 from termcolor import colored
7
8 from checkov.common.models.enums import CheckResult
9 from checkov.version import version
10 from tabulate import tabulate
11
12 init(autoreset=True)
13
14
15 class Report:
16
17 def __init__(self, check_type):
18 self.check_type = check_type
19 self.passed_checks = []
20 self.failed_checks = []
21 self.skipped_checks = []
22 self.parsing_errors = []
23
24 def add_parsing_errors(self, errors):
25 for file in errors:
26 self.add_parsing_error(file)
27
28 def add_parsing_error(self, file):
29 if file:
30 self.parsing_errors.append(file)
31
32 def add_record(self, record):
33 if record.check_result['result'] == CheckResult.PASSED:
34 self.passed_checks.append(record)
35 if record.check_result['result'] == CheckResult.FAILED:
36 self.failed_checks.append(record)
37 if record.check_result['result'] == CheckResult.SKIPPED:
38 self.skipped_checks.append(record)
39
40 def get_summary(self):
41 return {
42 "passed": len(self.passed_checks),
43 "failed": len(self.failed_checks),
44 "skipped": len(self.skipped_checks),
45 "parsing_errors": len(self.parsing_errors),
46 "checkov_version": version
47 }
48
49 def get_json(self):
50 return json.dumps(self.get_dict(), indent=4)
51
52 def get_dict(self):
53 return {
54 "check_type": self.check_type,
55 "results": {
56 "passed_checks": [check.__dict__ for check in self.passed_checks],
57 "failed_checks": [check.__dict__ for check in self.failed_checks],
58 "skipped_checks": [check.__dict__ for check in self.skipped_checks],
59 "parsing_errors": list(self.parsing_errors)
60 },
61 "summary": self.get_summary()
62 }
63
64 def get_exit_code(self, soft_fail):
65 if soft_fail:
66 return 0
67 elif len(self.failed_checks) > 0:
68 return 1
69 return 0
70
71 def is_empty(self):
72 return len(self.passed_checks) + len(self.failed_checks) + len(self.skipped_checks) + len(self.parsing_errors) == 0
73
74 def print_console(self, is_quiet=False, is_compact=False):
75 summary = self.get_summary()
76 print(colored(f"{self.check_type} scan results:", "blue"))
77 if self.parsing_errors:
78 message = "\nPassed checks: {}, Failed checks: {}, Skipped checks: {}, Parsing errors: {}\n".format(
79 summary["passed"], summary["failed"], summary["skipped"], summary["parsing_errors"])
80 else:
81 message = "\nPassed checks: {}, Failed checks: {}, Skipped checks: {}\n".format(
82 summary["passed"], summary["failed"], summary["skipped"])
83 print(colored(message, "cyan"))
84 if not is_quiet:
85 for record in self.passed_checks:
86 print(record.to_string(compact=is_compact))
87 for record in self.failed_checks:
88 print(record.to_string(compact=is_compact))
89 if not is_quiet:
90 for record in self.skipped_checks:
91 print(record.to_string(compact=is_compact))
92
93 if not is_quiet:
94 for file in self.parsing_errors:
95 Report._print_parsing_error_console(file)
96
97 @staticmethod
98 def _print_parsing_error_console(file):
99 print(colored(f'Error parsing file {file}', 'red'))
100
101 def print_junit_xml(self):
102 ts = self.get_test_suites()
103 print(TestSuite.to_xml_string(ts))
104
105 def print_failed_github_md(self):
106 result = []
107 for record in self.failed_checks:
108 result.append([record.check_id, record.file_path ,record.resource, record.check_name, record.guideline])
109 print(tabulate(result, headers=["check_id", "file" ,"resource", "check_name", "guideline"], tablefmt="github", showindex=True))
110 print("\n\n---\n\n")
111
112 def get_test_suites(self):
113 test_cases = defaultdict(list)
114 test_suites = []
115 records = self.passed_checks + self.failed_checks + self.skipped_checks
116 for record in records:
117 check_name = record.check_name
118
119 test_name = "{} {} {}".format(self.check_type, check_name, record.resource)
120 test_case = TestCase(name=test_name, file=record.file_path, classname=record.check_class)
121 if record.check_result['result'] == CheckResult.FAILED:
122 test_case.add_failure_info(
123 "Resource \"{}\" failed in check \"{}\"".format(record.resource, check_name))
124 if record.check_result['result'] == CheckResult.SKIPPED:
125 test_case.add_skipped_info(
126 "Resource \"{}\" skipped in check \"{}\"\n Suppress comment: {}".format(record.resource, check_name,
127 record.check_result[
128 'suppress_comment']))
129 test_cases[check_name].append(test_case)
130 for key in test_cases.keys():
131 test_suites.append(
132 TestSuite(name=key, test_cases=test_cases[key], package=test_cases[key][0].classname))
133 return test_suites
134
135 def print_json(self):
136 print(self.get_json())
137
138
```
Path: `checkov/common/runners/runner_registry.py`
Content:
```
1 import json
2 import logging
3 from abc import abstractmethod
4
5 from checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry
6 from checkov.common.output.report import Report
7
8 OUTPUT_CHOICES = ['cli', 'json', 'junitxml', 'github_failed_only']
9
10 from checkov.common.bridgecrew.platform_integration import BcPlatformIntegration
11
12
13 class RunnerRegistry(object):
14 runners = []
15 scan_reports = []
16 banner = ""
17
18 def __init__(self, banner, runner_filter, *runners):
19 self.logger = logging.getLogger(__name__)
20 self.runner_filter = runner_filter
21 self.runners = runners
22 self.banner = banner
23 self.scan_reports = []
24 self.filter_runner_framework()
25 self.bc_platform = BcPlatformIntegration()
26
27 @abstractmethod
28 def extract_entity_details(self, entity):
29 raise NotImplementedError()
30
31 def run(self, root_folder=None, external_checks_dir=None, files=None, guidelines=None, collect_skip_comments=True, bc_integration=None):
32 for runner in self.runners:
33 integration_feature_registry.run_pre_scan()
34 scan_report = runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,
35 runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments)
36 integration_feature_registry.run_post_scan(scan_report)
37 if guidelines:
38 RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)
39 self.scan_reports.append(scan_report)
40 return self.scan_reports
41
42 def print_reports(self, scan_reports, args, url=None):
43 if args.output == 'cli':
44 print(f"{self.banner}\n")
45 exit_codes = []
46 report_jsons = []
47 junit_reports = []
48 for report in scan_reports:
49 if not report.is_empty():
50 if args.output == "json":
51 report_jsons.append(report.get_dict())
52 elif args.output == "junitxml":
53 junit_reports.append(report)
54 # report.print_junit_xml()
55 elif args.output == 'github_failed_only':
56 report.print_failed_github_md()
57 else:
58 report.print_console(is_quiet=args.quiet, is_compact=args.compact)
59 if url:
60 print("More details: {}".format(url))
61 exit_codes.append(report.get_exit_code(args.soft_fail))
62 if args.output == "junitxml":
63 if len(junit_reports) == 1:
64 junit_reports[0].print_junit_xml()
65 else:
66 master_report = Report(None)
67 for report in junit_reports:
68 master_report.skipped_checks += report.skipped_checks
69 master_report.passed_checks += report.passed_checks
70 master_report.failed_checks += report.failed_checks
71 master_report.print_junit_xml()
72 if args.output == "json":
73 if len(report_jsons) == 1:
74 print(json.dumps(report_jsons[0], indent=4))
75 else:
76 print(json.dumps(report_jsons, indent=4))
77 if args.output == "cli":
78 self.bc_platform.get_report_to_platform(args,scan_reports)
79
80 exit_code = 1 if 1 in exit_codes else 0
81 exit(exit_code)
82
83 def filter_runner_framework(self):
84 if not self.runner_filter:
85 return
86 if self.runner_filter.framework == 'all':
87 return
88 for runner in self.runners:
89 if runner.check_type == self.runner_filter.framework:
90 self.runners = [runner]
91 return
92
93 @staticmethod
94 def enrich_report_with_guidelines(scan_report, guidelines):
95 for record in scan_report.failed_checks + scan_report.passed_checks + scan_report.skipped_checks:
96 if record.check_id in guidelines:
97 record.set_guideline(guidelines[record.check_id])
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/common/output/report.py b/checkov/common/output/report.py
--- a/checkov/common/output/report.py
+++ b/checkov/common/output/report.py
@@ -49,17 +49,26 @@
def get_json(self):
return json.dumps(self.get_dict(), indent=4)
- def get_dict(self):
- return {
- "check_type": self.check_type,
- "results": {
- "passed_checks": [check.__dict__ for check in self.passed_checks],
- "failed_checks": [check.__dict__ for check in self.failed_checks],
- "skipped_checks": [check.__dict__ for check in self.skipped_checks],
- "parsing_errors": list(self.parsing_errors)
- },
- "summary": self.get_summary()
+ def get_dict(self, is_quiet=False):
+ if is_quiet:
+ return {
+ "check_type": self.check_type,
+ "results": {
+ "failed_checks": [check.__dict__ for check in self.failed_checks]
+ },
+ "summary": self.get_summary()
}
+ else:
+ return {
+ "check_type": self.check_type,
+ "results": {
+ "passed_checks": [check.__dict__ for check in self.passed_checks],
+ "failed_checks": [check.__dict__ for check in self.failed_checks],
+ "skipped_checks": [check.__dict__ for check in self.skipped_checks],
+ "parsing_errors": list(self.parsing_errors)
+ },
+ "summary": self.get_summary()
+ }
def get_exit_code(self, soft_fail):
if soft_fail:
diff --git a/checkov/common/runners/runner_registry.py b/checkov/common/runners/runner_registry.py
--- a/checkov/common/runners/runner_registry.py
+++ b/checkov/common/runners/runner_registry.py
@@ -48,7 +48,7 @@
for report in scan_reports:
if not report.is_empty():
if args.output == "json":
- report_jsons.append(report.get_dict())
+ report_jsons.append(report.get_dict(is_quiet=args.quiet))
elif args.output == "junitxml":
junit_reports.append(report)
# report.print_junit_xml()
| {"golden_diff": "diff --git a/checkov/common/output/report.py b/checkov/common/output/report.py\n--- a/checkov/common/output/report.py\n+++ b/checkov/common/output/report.py\n@@ -49,17 +49,26 @@\n def get_json(self):\n return json.dumps(self.get_dict(), indent=4)\n \n- def get_dict(self):\n- return {\n- \"check_type\": self.check_type,\n- \"results\": {\n- \"passed_checks\": [check.__dict__ for check in self.passed_checks],\n- \"failed_checks\": [check.__dict__ for check in self.failed_checks],\n- \"skipped_checks\": [check.__dict__ for check in self.skipped_checks],\n- \"parsing_errors\": list(self.parsing_errors)\n- },\n- \"summary\": self.get_summary()\n+ def get_dict(self, is_quiet=False):\n+ if is_quiet:\n+ return {\n+ \"check_type\": self.check_type,\n+ \"results\": {\n+ \"failed_checks\": [check.__dict__ for check in self.failed_checks]\n+ },\n+ \"summary\": self.get_summary()\n }\n+ else: \n+ return {\n+ \"check_type\": self.check_type,\n+ \"results\": {\n+ \"passed_checks\": [check.__dict__ for check in self.passed_checks],\n+ \"failed_checks\": [check.__dict__ for check in self.failed_checks],\n+ \"skipped_checks\": [check.__dict__ for check in self.skipped_checks],\n+ \"parsing_errors\": list(self.parsing_errors)\n+ },\n+ \"summary\": self.get_summary()\n+ }\n \n def get_exit_code(self, soft_fail):\n if soft_fail:\ndiff --git a/checkov/common/runners/runner_registry.py b/checkov/common/runners/runner_registry.py\n--- a/checkov/common/runners/runner_registry.py\n+++ b/checkov/common/runners/runner_registry.py\n@@ -48,7 +48,7 @@\n for report in scan_reports:\n if not report.is_empty():\n if args.output == \"json\":\n- report_jsons.append(report.get_dict())\n+ report_jsons.append(report.get_dict(is_quiet=args.quiet))\n elif args.output == \"junitxml\":\n junit_reports.append(report)\n # report.print_junit_xml()\n", "issue": "Feature Request: another flag to display failed checks\n**Is your feature request related to a problem? Please describe.**\r\n\r\nit seems that -o github_failed_only only returns failed but with plain text. if I use -o json then I get all checks(failed and success). \r\n**Describe the solution you'd like**\r\n\r\nApart from json and github_failed_only parameters. It might be good to have to another flag to display failed only reports. It can be used with json output. 
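A minimal sketch of the patched behaviour, assuming a checkov build with this change installed; the Report below is empty and only illustrates which keys get_dict() keeps:

```python
from checkov.common.output.report import Report

report = Report("terraform")  # empty report, purely for illustration

quiet_dict = report.get_dict(is_quiet=True)
print(sorted(quiet_dict["results"]))   # ['failed_checks']

full_dict = report.get_dict()          # default behaviour is unchanged
print(sorted(full_dict["results"]))
# ['failed_checks', 'parsing_errors', 'passed_checks', 'skipped_checks']
```

On the command line the same trimming is reached by combining the existing flags, e.g. `checkov -d . -o json --quiet`.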
Something like to see failed checks in json format.\r\n```\r\n$ checkov --display-failed-checks -o json -d .\r\n```\r\n\r\n\n", "before_files": [{"content": "import json\nfrom collections import defaultdict\n\nfrom colorama import init\nfrom junit_xml import TestCase, TestSuite\nfrom termcolor import colored\n\nfrom checkov.common.models.enums import CheckResult\nfrom checkov.version import version\nfrom tabulate import tabulate\n\ninit(autoreset=True)\n\n\nclass Report:\n\n def __init__(self, check_type):\n self.check_type = check_type\n self.passed_checks = []\n self.failed_checks = []\n self.skipped_checks = []\n self.parsing_errors = []\n\n def add_parsing_errors(self, errors):\n for file in errors:\n self.add_parsing_error(file)\n\n def add_parsing_error(self, file):\n if file:\n self.parsing_errors.append(file)\n\n def add_record(self, record):\n if record.check_result['result'] == CheckResult.PASSED:\n self.passed_checks.append(record)\n if record.check_result['result'] == CheckResult.FAILED:\n self.failed_checks.append(record)\n if record.check_result['result'] == CheckResult.SKIPPED:\n self.skipped_checks.append(record)\n\n def get_summary(self):\n return {\n \"passed\": len(self.passed_checks),\n \"failed\": len(self.failed_checks),\n \"skipped\": len(self.skipped_checks),\n \"parsing_errors\": len(self.parsing_errors),\n \"checkov_version\": version\n }\n\n def get_json(self):\n return json.dumps(self.get_dict(), indent=4)\n\n def get_dict(self):\n return {\n \"check_type\": self.check_type,\n \"results\": {\n \"passed_checks\": [check.__dict__ for check in self.passed_checks],\n \"failed_checks\": [check.__dict__ for check in self.failed_checks],\n \"skipped_checks\": [check.__dict__ for check in self.skipped_checks],\n \"parsing_errors\": list(self.parsing_errors)\n },\n \"summary\": self.get_summary()\n }\n\n def get_exit_code(self, soft_fail):\n if soft_fail:\n return 0\n elif len(self.failed_checks) > 0:\n return 1\n return 0\n\n def is_empty(self):\n return len(self.passed_checks) + len(self.failed_checks) + len(self.skipped_checks) + len(self.parsing_errors) == 0\n\n def print_console(self, is_quiet=False, is_compact=False):\n summary = self.get_summary()\n print(colored(f\"{self.check_type} scan results:\", \"blue\"))\n if self.parsing_errors:\n message = \"\\nPassed checks: {}, Failed checks: {}, Skipped checks: {}, Parsing errors: {}\\n\".format(\n summary[\"passed\"], summary[\"failed\"], summary[\"skipped\"], summary[\"parsing_errors\"])\n else:\n message = \"\\nPassed checks: {}, Failed checks: {}, Skipped checks: {}\\n\".format(\n summary[\"passed\"], summary[\"failed\"], summary[\"skipped\"])\n print(colored(message, \"cyan\"))\n if not is_quiet:\n for record in self.passed_checks:\n print(record.to_string(compact=is_compact))\n for record in self.failed_checks:\n print(record.to_string(compact=is_compact))\n if not is_quiet:\n for record in self.skipped_checks:\n print(record.to_string(compact=is_compact))\n\n if not is_quiet:\n for file in self.parsing_errors:\n Report._print_parsing_error_console(file)\n\n @staticmethod\n def _print_parsing_error_console(file):\n print(colored(f'Error parsing file {file}', 'red'))\n\n def print_junit_xml(self):\n ts = self.get_test_suites()\n print(TestSuite.to_xml_string(ts))\n\n def print_failed_github_md(self):\n result = []\n for record in self.failed_checks:\n result.append([record.check_id, record.file_path ,record.resource, record.check_name, record.guideline])\n print(tabulate(result, headers=[\"check_id\", \"file\" 
,\"resource\", \"check_name\", \"guideline\"], tablefmt=\"github\", showindex=True))\n print(\"\\n\\n---\\n\\n\")\n\n def get_test_suites(self):\n test_cases = defaultdict(list)\n test_suites = []\n records = self.passed_checks + self.failed_checks + self.skipped_checks\n for record in records:\n check_name = record.check_name\n\n test_name = \"{} {} {}\".format(self.check_type, check_name, record.resource)\n test_case = TestCase(name=test_name, file=record.file_path, classname=record.check_class)\n if record.check_result['result'] == CheckResult.FAILED:\n test_case.add_failure_info(\n \"Resource \\\"{}\\\" failed in check \\\"{}\\\"\".format(record.resource, check_name))\n if record.check_result['result'] == CheckResult.SKIPPED:\n test_case.add_skipped_info(\n \"Resource \\\"{}\\\" skipped in check \\\"{}\\\"\\n Suppress comment: {}\".format(record.resource, check_name,\n record.check_result[\n 'suppress_comment']))\n test_cases[check_name].append(test_case)\n for key in test_cases.keys():\n test_suites.append(\n TestSuite(name=key, test_cases=test_cases[key], package=test_cases[key][0].classname))\n return test_suites\n\n def print_json(self):\n print(self.get_json())\n\n", "path": "checkov/common/output/report.py"}, {"content": "import json\nimport logging\nfrom abc import abstractmethod\n\nfrom checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry\nfrom checkov.common.output.report import Report\n\nOUTPUT_CHOICES = ['cli', 'json', 'junitxml', 'github_failed_only']\n\nfrom checkov.common.bridgecrew.platform_integration import BcPlatformIntegration\n\n\nclass RunnerRegistry(object):\n runners = []\n scan_reports = []\n banner = \"\"\n\n def __init__(self, banner, runner_filter, *runners):\n self.logger = logging.getLogger(__name__)\n self.runner_filter = runner_filter\n self.runners = runners\n self.banner = banner\n self.scan_reports = []\n self.filter_runner_framework()\n self.bc_platform = BcPlatformIntegration()\n\n @abstractmethod\n def extract_entity_details(self, entity):\n raise NotImplementedError()\n\n def run(self, root_folder=None, external_checks_dir=None, files=None, guidelines=None, collect_skip_comments=True, bc_integration=None):\n for runner in self.runners:\n integration_feature_registry.run_pre_scan()\n scan_report = runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,\n runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments)\n integration_feature_registry.run_post_scan(scan_report)\n if guidelines:\n RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)\n self.scan_reports.append(scan_report)\n return self.scan_reports\n\n def print_reports(self, scan_reports, args, url=None):\n if args.output == 'cli':\n print(f\"{self.banner}\\n\")\n exit_codes = []\n report_jsons = []\n junit_reports = []\n for report in scan_reports:\n if not report.is_empty():\n if args.output == \"json\":\n report_jsons.append(report.get_dict())\n elif args.output == \"junitxml\":\n junit_reports.append(report)\n # report.print_junit_xml()\n elif args.output == 'github_failed_only':\n report.print_failed_github_md()\n else:\n report.print_console(is_quiet=args.quiet, is_compact=args.compact)\n if url:\n print(\"More details: {}\".format(url))\n exit_codes.append(report.get_exit_code(args.soft_fail))\n if args.output == \"junitxml\":\n if len(junit_reports) == 1:\n junit_reports[0].print_junit_xml()\n else:\n master_report = Report(None)\n for report in junit_reports:\n 
master_report.skipped_checks += report.skipped_checks\n master_report.passed_checks += report.passed_checks\n master_report.failed_checks += report.failed_checks\n master_report.print_junit_xml()\n if args.output == \"json\":\n if len(report_jsons) == 1:\n print(json.dumps(report_jsons[0], indent=4))\n else:\n print(json.dumps(report_jsons, indent=4))\n if args.output == \"cli\":\n self.bc_platform.get_report_to_platform(args,scan_reports)\n\n exit_code = 1 if 1 in exit_codes else 0\n exit(exit_code)\n\n def filter_runner_framework(self):\n if not self.runner_filter:\n return\n if self.runner_filter.framework == 'all':\n return\n for runner in self.runners:\n if runner.check_type == self.runner_filter.framework:\n self.runners = [runner]\n return\n\n @staticmethod\n def enrich_report_with_guidelines(scan_report, guidelines):\n for record in scan_report.failed_checks + scan_report.passed_checks + scan_report.skipped_checks:\n if record.check_id in guidelines:\n record.set_guideline(guidelines[record.check_id])\n", "path": "checkov/common/runners/runner_registry.py"}], "after_files": [{"content": "import json\nfrom collections import defaultdict\n\nfrom colorama import init\nfrom junit_xml import TestCase, TestSuite\nfrom termcolor import colored\n\nfrom checkov.common.models.enums import CheckResult\nfrom checkov.version import version\nfrom tabulate import tabulate\n\ninit(autoreset=True)\n\n\nclass Report:\n\n def __init__(self, check_type):\n self.check_type = check_type\n self.passed_checks = []\n self.failed_checks = []\n self.skipped_checks = []\n self.parsing_errors = []\n\n def add_parsing_errors(self, errors):\n for file in errors:\n self.add_parsing_error(file)\n\n def add_parsing_error(self, file):\n if file:\n self.parsing_errors.append(file)\n\n def add_record(self, record):\n if record.check_result['result'] == CheckResult.PASSED:\n self.passed_checks.append(record)\n if record.check_result['result'] == CheckResult.FAILED:\n self.failed_checks.append(record)\n if record.check_result['result'] == CheckResult.SKIPPED:\n self.skipped_checks.append(record)\n\n def get_summary(self):\n return {\n \"passed\": len(self.passed_checks),\n \"failed\": len(self.failed_checks),\n \"skipped\": len(self.skipped_checks),\n \"parsing_errors\": len(self.parsing_errors),\n \"checkov_version\": version\n }\n\n def get_json(self):\n return json.dumps(self.get_dict(), indent=4)\n\n def get_dict(self, is_quiet=False):\n if is_quiet:\n return {\n \"check_type\": self.check_type,\n \"results\": {\n \"failed_checks\": [check.__dict__ for check in self.failed_checks]\n },\n \"summary\": self.get_summary()\n }\n else: \n return {\n \"check_type\": self.check_type,\n \"results\": {\n \"passed_checks\": [check.__dict__ for check in self.passed_checks],\n \"failed_checks\": [check.__dict__ for check in self.failed_checks],\n \"skipped_checks\": [check.__dict__ for check in self.skipped_checks],\n \"parsing_errors\": list(self.parsing_errors)\n },\n \"summary\": self.get_summary()\n }\n\n def get_exit_code(self, soft_fail):\n if soft_fail:\n return 0\n elif len(self.failed_checks) > 0:\n return 1\n return 0\n\n def is_empty(self):\n return len(self.passed_checks) + len(self.failed_checks) + len(self.skipped_checks) + len(self.parsing_errors) == 0\n\n def print_console(self, is_quiet=False, is_compact=False):\n summary = self.get_summary()\n print(colored(f\"{self.check_type} scan results:\", \"blue\"))\n if self.parsing_errors:\n message = \"\\nPassed checks: {}, Failed checks: {}, Skipped checks: {}, 
Parsing errors: {}\\n\".format(\n summary[\"passed\"], summary[\"failed\"], summary[\"skipped\"], summary[\"parsing_errors\"])\n else:\n message = \"\\nPassed checks: {}, Failed checks: {}, Skipped checks: {}\\n\".format(\n summary[\"passed\"], summary[\"failed\"], summary[\"skipped\"])\n print(colored(message, \"cyan\"))\n if not is_quiet:\n for record in self.passed_checks:\n print(record.to_string(compact=is_compact))\n for record in self.failed_checks:\n print(record.to_string(compact=is_compact))\n if not is_quiet:\n for record in self.skipped_checks:\n print(record.to_string(compact=is_compact))\n\n if not is_quiet:\n for file in self.parsing_errors:\n Report._print_parsing_error_console(file)\n\n @staticmethod\n def _print_parsing_error_console(file):\n print(colored(f'Error parsing file {file}', 'red'))\n\n def print_junit_xml(self):\n ts = self.get_test_suites()\n print(TestSuite.to_xml_string(ts))\n\n def print_failed_github_md(self):\n result = []\n for record in self.failed_checks:\n result.append([record.check_id, record.file_path ,record.resource, record.check_name, record.guideline])\n print(tabulate(result, headers=[\"check_id\", \"file\" ,\"resource\", \"check_name\", \"guideline\"], tablefmt=\"github\", showindex=True))\n print(\"\\n\\n---\\n\\n\")\n\n def get_test_suites(self):\n test_cases = defaultdict(list)\n test_suites = []\n records = self.passed_checks + self.failed_checks + self.skipped_checks\n for record in records:\n check_name = record.check_name\n\n test_name = \"{} {} {}\".format(self.check_type, check_name, record.resource)\n test_case = TestCase(name=test_name, file=record.file_path, classname=record.check_class)\n if record.check_result['result'] == CheckResult.FAILED:\n test_case.add_failure_info(\n \"Resource \\\"{}\\\" failed in check \\\"{}\\\"\".format(record.resource, check_name))\n if record.check_result['result'] == CheckResult.SKIPPED:\n test_case.add_skipped_info(\n \"Resource \\\"{}\\\" skipped in check \\\"{}\\\"\\n Suppress comment: {}\".format(record.resource, check_name,\n record.check_result[\n 'suppress_comment']))\n test_cases[check_name].append(test_case)\n for key in test_cases.keys():\n test_suites.append(\n TestSuite(name=key, test_cases=test_cases[key], package=test_cases[key][0].classname))\n return test_suites\n\n def print_json(self):\n print(self.get_json())\n\n", "path": "checkov/common/output/report.py"}, {"content": "import json\nimport logging\nfrom abc import abstractmethod\n\nfrom checkov.common.bridgecrew.integration_features.integration_feature_registry import integration_feature_registry\nfrom checkov.common.output.report import Report\n\nOUTPUT_CHOICES = ['cli', 'json', 'junitxml', 'github_failed_only']\n\nfrom checkov.common.bridgecrew.platform_integration import BcPlatformIntegration\n\n\nclass RunnerRegistry(object):\n runners = []\n scan_reports = []\n banner = \"\"\n\n def __init__(self, banner, runner_filter, *runners):\n self.logger = logging.getLogger(__name__)\n self.runner_filter = runner_filter\n self.runners = runners\n self.banner = banner\n self.scan_reports = []\n self.filter_runner_framework()\n self.bc_platform = BcPlatformIntegration()\n\n @abstractmethod\n def extract_entity_details(self, entity):\n raise NotImplementedError()\n\n def run(self, root_folder=None, external_checks_dir=None, files=None, guidelines=None, collect_skip_comments=True, bc_integration=None):\n for runner in self.runners:\n integration_feature_registry.run_pre_scan()\n scan_report = runner.run(root_folder, 
external_checks_dir=external_checks_dir, files=files,\n runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments)\n integration_feature_registry.run_post_scan(scan_report)\n if guidelines:\n RunnerRegistry.enrich_report_with_guidelines(scan_report, guidelines)\n self.scan_reports.append(scan_report)\n return self.scan_reports\n\n def print_reports(self, scan_reports, args, url=None):\n if args.output == 'cli':\n print(f\"{self.banner}\\n\")\n exit_codes = []\n report_jsons = []\n junit_reports = []\n for report in scan_reports:\n if not report.is_empty():\n if args.output == \"json\":\n report_jsons.append(report.get_dict(is_quiet=args.quiet))\n elif args.output == \"junitxml\":\n junit_reports.append(report)\n # report.print_junit_xml()\n elif args.output == 'github_failed_only':\n report.print_failed_github_md()\n else:\n report.print_console(is_quiet=args.quiet, is_compact=args.compact)\n if url:\n print(\"More details: {}\".format(url))\n exit_codes.append(report.get_exit_code(args.soft_fail))\n if args.output == \"junitxml\":\n if len(junit_reports) == 1:\n junit_reports[0].print_junit_xml()\n else:\n master_report = Report(None)\n for report in junit_reports:\n master_report.skipped_checks += report.skipped_checks\n master_report.passed_checks += report.passed_checks\n master_report.failed_checks += report.failed_checks\n master_report.print_junit_xml()\n if args.output == \"json\":\n if len(report_jsons) == 1:\n print(json.dumps(report_jsons[0], indent=4))\n else:\n print(json.dumps(report_jsons, indent=4))\n if args.output == \"cli\":\n self.bc_platform.get_report_to_platform(args,scan_reports)\n\n exit_code = 1 if 1 in exit_codes else 0\n exit(exit_code)\n\n def filter_runner_framework(self):\n if not self.runner_filter:\n return\n if self.runner_filter.framework == 'all':\n return\n for runner in self.runners:\n if runner.check_type == self.runner_filter.framework:\n self.runners = [runner]\n return\n\n @staticmethod\n def enrich_report_with_guidelines(scan_report, guidelines):\n for record in scan_report.failed_checks + scan_report.passed_checks + scan_report.skipped_checks:\n if record.check_id in guidelines:\n record.set_guideline(guidelines[record.check_id])\n", "path": "checkov/common/runners/runner_registry.py"}]} | 2,828 | 507 |
gh_patches_debug_11300 | rasdani/github-patches | git_diff | pypa__setuptools-1986 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecated distutils bdist_wininst is going to be removed
I proposed to remove the bdist_wininst command from distutils in Python 3.9:
* https://bugs.python.org/issue39541
* https://discuss.python.org/t/remove-distutils-bdist-wininst-command/3115
* https://github.com/python/cpython/pull/18329
Problem: setuptools always uses it on all platforms, at setuptools/command/install_scripts.py line 35:
```
bw_cmd = self.get_finalized_command("bdist_wininst")
```
See #857, which is a closed duplicate that proposed different options to fix the issue.
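
One possible mitigation, sketched here as a standalone helper and under the assumption that looking up the removed command raises ImportError, is to tolerate its absence instead of failing the install:

```python
def is_wininst_running(cmd) -> bool:
    """Best-effort probe; `cmd` is assumed to be a setuptools command instance.

    When the bdist_wininst command can no longer be imported (removed from
    distutils), report it as not running instead of crashing the install.
    """
    try:
        bw_cmd = cmd.get_finalized_command("bdist_wininst")
    except ImportError:
        return False
    return getattr(bw_cmd, "_is_running", False)
```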
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/command/install_scripts.py`
Content:
```
1 from distutils import log
2 import distutils.command.install_scripts as orig
3 import os
4 import sys
5
6 from pkg_resources import Distribution, PathMetadata, ensure_directory
7
8
9 class install_scripts(orig.install_scripts):
10 """Do normal script install, plus any egg_info wrapper scripts"""
11
12 def initialize_options(self):
13 orig.install_scripts.initialize_options(self)
14 self.no_ep = False
15
16 def run(self):
17 import setuptools.command.easy_install as ei
18
19 self.run_command("egg_info")
20 if self.distribution.scripts:
21 orig.install_scripts.run(self) # run first to set up self.outfiles
22 else:
23 self.outfiles = []
24 if self.no_ep:
25 # don't install entry point scripts into .egg file!
26 return
27
28 ei_cmd = self.get_finalized_command("egg_info")
29 dist = Distribution(
30 ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info),
31 ei_cmd.egg_name, ei_cmd.egg_version,
32 )
33 bs_cmd = self.get_finalized_command('build_scripts')
34 exec_param = getattr(bs_cmd, 'executable', None)
35 bw_cmd = self.get_finalized_command("bdist_wininst")
36 is_wininst = getattr(bw_cmd, '_is_running', False)
37 writer = ei.ScriptWriter
38 if is_wininst:
39 exec_param = "python.exe"
40 writer = ei.WindowsScriptWriter
41 if exec_param == sys.executable:
42 # In case the path to the Python executable contains a space, wrap
43 # it so it's not split up.
44 exec_param = [exec_param]
45 # resolve the writer to the environment
46 writer = writer.best()
47 cmd = writer.command_spec_class.best().from_param(exec_param)
48 for args in writer.get_args(dist, cmd.as_header()):
49 self.write_script(*args)
50
51 def write_script(self, script_name, contents, mode="t", *ignored):
52 """Write an executable file to the scripts directory"""
53 from setuptools.command.easy_install import chmod, current_umask
54
55 log.info("Installing %s script to %s", script_name, self.install_dir)
56 target = os.path.join(self.install_dir, script_name)
57 self.outfiles.append(target)
58
59 mask = current_umask()
60 if not self.dry_run:
61 ensure_directory(target)
62 f = open(target, "w" + mode)
63 f.write(contents)
64 f.close()
65 chmod(target, 0o777 - mask)
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setuptools/command/install_scripts.py b/setuptools/command/install_scripts.py
--- a/setuptools/command/install_scripts.py
+++ b/setuptools/command/install_scripts.py
@@ -32,8 +32,11 @@
)
bs_cmd = self.get_finalized_command('build_scripts')
exec_param = getattr(bs_cmd, 'executable', None)
- bw_cmd = self.get_finalized_command("bdist_wininst")
- is_wininst = getattr(bw_cmd, '_is_running', False)
+ try:
+ bw_cmd = self.get_finalized_command("bdist_wininst")
+ is_wininst = getattr(bw_cmd, '_is_running', False)
+ except ImportError:
+ is_wininst = False
writer = ei.ScriptWriter
if is_wininst:
exec_param = "python.exe"
| {"golden_diff": "diff --git a/setuptools/command/install_scripts.py b/setuptools/command/install_scripts.py\n--- a/setuptools/command/install_scripts.py\n+++ b/setuptools/command/install_scripts.py\n@@ -32,8 +32,11 @@\n )\n bs_cmd = self.get_finalized_command('build_scripts')\n exec_param = getattr(bs_cmd, 'executable', None)\n- bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n- is_wininst = getattr(bw_cmd, '_is_running', False)\n+ try:\n+ bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n+ is_wininst = getattr(bw_cmd, '_is_running', False)\n+ except ImportError:\n+ is_wininst = False\n writer = ei.ScriptWriter\n if is_wininst:\n exec_param = \"python.exe\"\n", "issue": "Deprecated distutils bdist_wininst is going to be removed\nI proposed to remove the bdist_winstinst command from distutils in Python 3.9:\r\n\r\n* https://bugs.python.org/issue39541\r\n* https://discuss.python.org/t/remove-distutils-bdist-wininst-command/3115\r\n* https://github.com/python/cpython/pull/18329\r\n\r\nProblem: setuptools always uses it on all platforms at: setuptools/command/install_scripts.py, line 35:\r\n\r\n```\r\n bw_cmd = self.get_finalized_command(\"bdist_wininst\")\r\n```\r\n\r\nSee #857 which is a closed duplicated which proposed different options to fix the issue.\n", "before_files": [{"content": "from distutils import log\nimport distutils.command.install_scripts as orig\nimport os\nimport sys\n\nfrom pkg_resources import Distribution, PathMetadata, ensure_directory\n\n\nclass install_scripts(orig.install_scripts):\n \"\"\"Do normal script install, plus any egg_info wrapper scripts\"\"\"\n\n def initialize_options(self):\n orig.install_scripts.initialize_options(self)\n self.no_ep = False\n\n def run(self):\n import setuptools.command.easy_install as ei\n\n self.run_command(\"egg_info\")\n if self.distribution.scripts:\n orig.install_scripts.run(self) # run first to set up self.outfiles\n else:\n self.outfiles = []\n if self.no_ep:\n # don't install entry point scripts into .egg file!\n return\n\n ei_cmd = self.get_finalized_command(\"egg_info\")\n dist = Distribution(\n ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info),\n ei_cmd.egg_name, ei_cmd.egg_version,\n )\n bs_cmd = self.get_finalized_command('build_scripts')\n exec_param = getattr(bs_cmd, 'executable', None)\n bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n is_wininst = getattr(bw_cmd, '_is_running', False)\n writer = ei.ScriptWriter\n if is_wininst:\n exec_param = \"python.exe\"\n writer = ei.WindowsScriptWriter\n if exec_param == sys.executable:\n # In case the path to the Python executable contains a space, wrap\n # it so it's not split up.\n exec_param = [exec_param]\n # resolve the writer to the environment\n writer = writer.best()\n cmd = writer.command_spec_class.best().from_param(exec_param)\n for args in writer.get_args(dist, cmd.as_header()):\n self.write_script(*args)\n\n def write_script(self, script_name, contents, mode=\"t\", *ignored):\n \"\"\"Write an executable file to the scripts directory\"\"\"\n from setuptools.command.easy_install import chmod, current_umask\n\n log.info(\"Installing %s script to %s\", script_name, self.install_dir)\n target = os.path.join(self.install_dir, script_name)\n self.outfiles.append(target)\n\n mask = current_umask()\n if not self.dry_run:\n ensure_directory(target)\n f = open(target, \"w\" + mode)\n f.write(contents)\n f.close()\n chmod(target, 0o777 - mask)\n", "path": "setuptools/command/install_scripts.py"}], "after_files": [{"content": "from distutils 
import log\nimport distutils.command.install_scripts as orig\nimport os\nimport sys\n\nfrom pkg_resources import Distribution, PathMetadata, ensure_directory\n\n\nclass install_scripts(orig.install_scripts):\n \"\"\"Do normal script install, plus any egg_info wrapper scripts\"\"\"\n\n def initialize_options(self):\n orig.install_scripts.initialize_options(self)\n self.no_ep = False\n\n def run(self):\n import setuptools.command.easy_install as ei\n\n self.run_command(\"egg_info\")\n if self.distribution.scripts:\n orig.install_scripts.run(self) # run first to set up self.outfiles\n else:\n self.outfiles = []\n if self.no_ep:\n # don't install entry point scripts into .egg file!\n return\n\n ei_cmd = self.get_finalized_command(\"egg_info\")\n dist = Distribution(\n ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info),\n ei_cmd.egg_name, ei_cmd.egg_version,\n )\n bs_cmd = self.get_finalized_command('build_scripts')\n exec_param = getattr(bs_cmd, 'executable', None)\n try:\n bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n is_wininst = getattr(bw_cmd, '_is_running', False)\n except ImportError:\n is_wininst = False\n writer = ei.ScriptWriter\n if is_wininst:\n exec_param = \"python.exe\"\n writer = ei.WindowsScriptWriter\n if exec_param == sys.executable:\n # In case the path to the Python executable contains a space, wrap\n # it so it's not split up.\n exec_param = [exec_param]\n # resolve the writer to the environment\n writer = writer.best()\n cmd = writer.command_spec_class.best().from_param(exec_param)\n for args in writer.get_args(dist, cmd.as_header()):\n self.write_script(*args)\n\n def write_script(self, script_name, contents, mode=\"t\", *ignored):\n \"\"\"Write an executable file to the scripts directory\"\"\"\n from setuptools.command.easy_install import chmod, current_umask\n\n log.info(\"Installing %s script to %s\", script_name, self.install_dir)\n target = os.path.join(self.install_dir, script_name)\n self.outfiles.append(target)\n\n mask = current_umask()\n if not self.dry_run:\n ensure_directory(target)\n f = open(target, \"w\" + mode)\n f.write(contents)\n f.close()\n chmod(target, 0o777 - mask)\n", "path": "setuptools/command/install_scripts.py"}]} | 1,062 | 181 |
gh_patches_debug_39724 | rasdani/github-patches | git_diff | ephios-dev__ephios-178 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Event creation mails do not include event description
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/event_management/mail.py`
Content:
```
1 from django.core import mail
2 from django.core.mail import EmailMultiAlternatives
3 from django.template.loader import render_to_string
4 from django.utils.translation import gettext as _
5 from guardian.shortcuts import get_users_with_perms
6
7 from ephios.event_management.models import AbstractParticipation
8 from ephios.extra.permissions import get_groups_with_perms
9 from ephios.settings import SITE_URL
10 from ephios.user_management.models import UserProfile
11
12
13 def new_event(event):
14 messages = []
15 users = UserProfile.objects.filter(
16 groups__in=get_groups_with_perms(event, only_with_perms_in=["view_event"]), is_active=True
17 ).distinct()
18 responsible_users = get_users_with_perms(event, only_with_perms_in=["change_event"]).distinct()
19 responsible_persons_mails = list(responsible_users.values_list("email", flat=True))
20
21 subject = _("New {type}: {title}").format(type=event.type, title=event.title)
22 text_content = _(
23 "A new {type} ({title}) has been added. \n You can view it here: {link}"
24 ).format(type=event.type, title=event.title, link=event.get_absolute_url())
25 html_content = render_to_string(
26 "event_management/mails/new_event.html", {"event": event, "site_url": SITE_URL}
27 )
28
29 for user in users:
30 message = EmailMultiAlternatives(
31 to=[user.email], subject=subject, body=text_content, reply_to=responsible_persons_mails
32 )
33 message.attach_alternative(html_content, "text/html")
34 messages.append(message)
35 mail.get_connection().send_messages(messages)
36
37
38 def participation_state_changed(participation: AbstractParticipation):
39 if participation.state != AbstractParticipation.States.USER_DECLINED:
40 messages = []
41
42 # send mail to the participant whose participation has been changed
43 if participation.participant.email is not None:
44 text_content = _(
45 "The status for your participation for {shift} has changed. It is now {status}."
46 ).format(shift=participation.shift, status=participation.get_state_display())
47 html_content = render_to_string("email_base.html", {"message_text": text_content})
48 message = EmailMultiAlternatives(
49 to=[participation.participant.email],
50 subject=_("Your participation state changed"),
51 body=text_content,
52 )
53 message.attach_alternative(html_content, "text/html")
54 messages.append(message)
55
56 # send mail to responsible users
57 responsible_users = get_users_with_perms(
58 participation.shift.event, only_with_perms_in=["change_event"]
59 ).distinct()
60 subject = _("Participation was changed for your event")
61 text_content = _(
62 "The participation of {participant} for {shift} was changed. The status is now {status}"
63 ).format(
64 participant=participation.participant,
65 shift=participation.shift,
66 status=participation.get_state_display(),
67 )
68 html_content = render_to_string("email_base.html", {"message_text": text_content})
69 for user in responsible_users:
70 message = EmailMultiAlternatives(to=[user.email], subject=subject, body=text_content)
71 message.attach_alternative(html_content, "text/html")
72 messages.append(message)
73
74 mail.get_connection().send_messages(messages)
75
```
Path: `ephios/user_management/mail.py`
Content:
```
1 from django.contrib.auth.tokens import default_token_generator
2 from django.core.mail import EmailMultiAlternatives
3 from django.template.loader import render_to_string
4 from django.urls import reverse
5 from django.utils.encoding import force_bytes
6 from django.utils.http import urlsafe_base64_encode
7 from django.utils.translation import gettext as _
8
9 from ephios.settings import SITE_URL
10
11
12 def send_account_creation_info(userprofile):
13 subject = _("Welcome to ephios!")
14 uid = urlsafe_base64_encode(force_bytes(userprofile.id))
15 token = default_token_generator.make_token(userprofile)
16 reset_link = reverse("password_reset_confirm", kwargs={"uidb64": uid, "token": token})
17 text_content = _(
18 "You're receiving this email because a new account has been created for you at ephios.\n"
19 "Please go to the following page and choose a password: {url}{reset_link}\n"
20 "Your username is your email address: {email}\n"
21 ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)
22
23 html_content = render_to_string(
24 "user_management/new_account_email.html",
25 {"uid": uid, "token": token, "site_url": SITE_URL, "email": userprofile.email},
26 )
27 message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
28 message.attach_alternative(html_content, "text/html")
29 message.send()
30
31
32 def send_account_update_info(userprofile):
33 subject = _("ephios account updated")
34 url = reverse("user_management:profile")
35 text_content = _(
36 "You're receiving this email because your account at ephios has been updated.\n"
37 "You can see the changes in your profile: {site_url}{url}\n"
38 "Your username is your email address: {email}\n"
39 ).format(site_url=SITE_URL, url=url, email=userprofile.email)
40
41 html_content = render_to_string(
42 "user_management/account_updated_email.html",
43 {"site_url": SITE_URL, "url": url, "email": userprofile.email},
44 )
45 message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
46 message.attach_alternative(html_content, "text/html")
47 message.send()
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ephios/event_management/mail.py b/ephios/event_management/mail.py
--- a/ephios/event_management/mail.py
+++ b/ephios/event_management/mail.py
@@ -1,3 +1,5 @@
+from urllib.parse import urljoin
+
from django.core import mail
from django.core.mail import EmailMultiAlternatives
from django.template.loader import render_to_string
@@ -20,8 +22,16 @@
subject = _("New {type}: {title}").format(type=event.type, title=event.title)
text_content = _(
- "A new {type} ({title}) has been added. \n You can view it here: {link}"
- ).format(type=event.type, title=event.title, link=event.get_absolute_url())
+ "A new {type} ({title}, {location}) has been added.\n"
+ "Further information: {description}\n"
+ "You can view the event here: {url}"
+ ).format(
+ type=event.type,
+ title=event.title,
+ location=event.location,
+ description=event.description,
+ url=urljoin(SITE_URL, event.get_absolute_url()),
+ )
html_content = render_to_string(
"event_management/mails/new_event.html", {"event": event, "site_url": SITE_URL}
)
diff --git a/ephios/user_management/mail.py b/ephios/user_management/mail.py
--- a/ephios/user_management/mail.py
+++ b/ephios/user_management/mail.py
@@ -1,3 +1,5 @@
+from urllib.parse import urljoin
+
from django.contrib.auth.tokens import default_token_generator
from django.core.mail import EmailMultiAlternatives
from django.template.loader import render_to_string
@@ -16,9 +18,9 @@
reset_link = reverse("password_reset_confirm", kwargs={"uidb64": uid, "token": token})
text_content = _(
"You're receiving this email because a new account has been created for you at ephios.\n"
- "Please go to the following page and choose a password: {url}{reset_link}\n"
+ "Please go to the following page and choose a password: {url}\n"
"Your username is your email address: {email}\n"
- ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)
+ ).format(url=urljoin(SITE_URL, reset_link), email=userprofile.email)
html_content = render_to_string(
"user_management/new_account_email.html",
@@ -34,9 +36,9 @@
url = reverse("user_management:profile")
text_content = _(
"You're receiving this email because your account at ephios has been updated.\n"
- "You can see the changes in your profile: {site_url}{url}\n"
+ "You can see the changes in your profile: {url}\n"
"Your username is your email address: {email}\n"
- ).format(site_url=SITE_URL, url=url, email=userprofile.email)
+ ).format(url=urljoin(SITE_URL, url), email=userprofile.email)
html_content = render_to_string(
"user_management/account_updated_email.html",
| {"golden_diff": "diff --git a/ephios/event_management/mail.py b/ephios/event_management/mail.py\n--- a/ephios/event_management/mail.py\n+++ b/ephios/event_management/mail.py\n@@ -1,3 +1,5 @@\n+from urllib.parse import urljoin\n+\n from django.core import mail\n from django.core.mail import EmailMultiAlternatives\n from django.template.loader import render_to_string\n@@ -20,8 +22,16 @@\n \n subject = _(\"New {type}: {title}\").format(type=event.type, title=event.title)\n text_content = _(\n- \"A new {type} ({title}) has been added. \\n You can view it here: {link}\"\n- ).format(type=event.type, title=event.title, link=event.get_absolute_url())\n+ \"A new {type} ({title}, {location}) has been added.\\n\"\n+ \"Further information: {description}\\n\"\n+ \"You can view the event here: {url}\"\n+ ).format(\n+ type=event.type,\n+ title=event.title,\n+ location=event.location,\n+ description=event.description,\n+ url=urljoin(SITE_URL, event.get_absolute_url()),\n+ )\n html_content = render_to_string(\n \"event_management/mails/new_event.html\", {\"event\": event, \"site_url\": SITE_URL}\n )\ndiff --git a/ephios/user_management/mail.py b/ephios/user_management/mail.py\n--- a/ephios/user_management/mail.py\n+++ b/ephios/user_management/mail.py\n@@ -1,3 +1,5 @@\n+from urllib.parse import urljoin\n+\n from django.contrib.auth.tokens import default_token_generator\n from django.core.mail import EmailMultiAlternatives\n from django.template.loader import render_to_string\n@@ -16,9 +18,9 @@\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at ephios.\\n\"\n- \"Please go to the following page and choose a password: {url}{reset_link}\\n\"\n+ \"Please go to the following page and choose a password: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n- ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)\n+ ).format(url=urljoin(SITE_URL, reset_link), email=userprofile.email)\n \n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n@@ -34,9 +36,9 @@\n url = reverse(\"user_management:profile\")\n text_content = _(\n \"You're receiving this email because your account at ephios has been updated.\\n\"\n- \"You can see the changes in your profile: {site_url}{url}\\n\"\n+ \"You can see the changes in your profile: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n- ).format(site_url=SITE_URL, url=url, email=userprofile.email)\n+ ).format(url=urljoin(SITE_URL, url), email=userprofile.email)\n \n html_content = render_to_string(\n \"user_management/account_updated_email.html\",\n", "issue": "Event creation mails do not include event description\n\n", "before_files": [{"content": "from django.core import mail\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import gettext as _\nfrom guardian.shortcuts import get_users_with_perms\n\nfrom ephios.event_management.models import AbstractParticipation\nfrom ephios.extra.permissions import get_groups_with_perms\nfrom ephios.settings import SITE_URL\nfrom ephios.user_management.models import UserProfile\n\n\ndef new_event(event):\n messages = []\n users = UserProfile.objects.filter(\n groups__in=get_groups_with_perms(event, only_with_perms_in=[\"view_event\"]), is_active=True\n ).distinct()\n responsible_users = get_users_with_perms(event, 
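A small sketch of the plain-text body the patched new_event() would produce; the site URL, event fields and path below are made-up values for illustration, not taken from the project:

```python
from urllib.parse import urljoin

SITE_URL = "https://ephios.example.org"   # assumption: configured site URL
event_type, title, location = "Training", "First aid refresher", "Station 3"
description = "Bring your own gloves."
event_path = "/events/42/"                # assumption: event.get_absolute_url()

text_content = (
    "A new {type} ({title}, {location}) has been added.\n"
    "Further information: {description}\n"
    "You can view the event here: {url}"
).format(
    type=event_type,
    title=title,
    location=location,
    description=description,
    url=urljoin(SITE_URL, event_path),
)
print(text_content)
```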
only_with_perms_in=[\"change_event\"]).distinct()\n responsible_persons_mails = list(responsible_users.values_list(\"email\", flat=True))\n\n subject = _(\"New {type}: {title}\").format(type=event.type, title=event.title)\n text_content = _(\n \"A new {type} ({title}) has been added. \\n You can view it here: {link}\"\n ).format(type=event.type, title=event.title, link=event.get_absolute_url())\n html_content = render_to_string(\n \"event_management/mails/new_event.html\", {\"event\": event, \"site_url\": SITE_URL}\n )\n\n for user in users:\n message = EmailMultiAlternatives(\n to=[user.email], subject=subject, body=text_content, reply_to=responsible_persons_mails\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n mail.get_connection().send_messages(messages)\n\n\ndef participation_state_changed(participation: AbstractParticipation):\n if participation.state != AbstractParticipation.States.USER_DECLINED:\n messages = []\n\n # send mail to the participant whose participation has been changed\n if participation.participant.email is not None:\n text_content = _(\n \"The status for your participation for {shift} has changed. It is now {status}.\"\n ).format(shift=participation.shift, status=participation.get_state_display())\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n message = EmailMultiAlternatives(\n to=[participation.participant.email],\n subject=_(\"Your participation state changed\"),\n body=text_content,\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n # send mail to responsible users\n responsible_users = get_users_with_perms(\n participation.shift.event, only_with_perms_in=[\"change_event\"]\n ).distinct()\n subject = _(\"Participation was changed for your event\")\n text_content = _(\n \"The participation of {participant} for {shift} was changed. 
The status is now {status}\"\n ).format(\n participant=participation.participant,\n shift=participation.shift,\n status=participation.get_state_display(),\n )\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n for user in responsible_users:\n message = EmailMultiAlternatives(to=[user.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n mail.get_connection().send_messages(messages)\n", "path": "ephios/event_management/mail.py"}, {"content": "from django.contrib.auth.tokens import default_token_generator\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\nfrom django.utils.encoding import force_bytes\nfrom django.utils.http import urlsafe_base64_encode\nfrom django.utils.translation import gettext as _\n\nfrom ephios.settings import SITE_URL\n\n\ndef send_account_creation_info(userprofile):\n subject = _(\"Welcome to ephios!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at ephios.\\n\"\n \"Please go to the following page and choose a password: {url}{reset_link}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n\n\ndef send_account_update_info(userprofile):\n subject = _(\"ephios account updated\")\n url = reverse(\"user_management:profile\")\n text_content = _(\n \"You're receiving this email because your account at ephios has been updated.\\n\"\n \"You can see the changes in your profile: {site_url}{url}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(site_url=SITE_URL, url=url, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/account_updated_email.html\",\n {\"site_url\": SITE_URL, \"url\": url, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n", "path": "ephios/user_management/mail.py"}], "after_files": [{"content": "from urllib.parse import urljoin\n\nfrom django.core import mail\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import gettext as _\nfrom guardian.shortcuts import get_users_with_perms\n\nfrom ephios.event_management.models import AbstractParticipation\nfrom ephios.extra.permissions import get_groups_with_perms\nfrom ephios.settings import SITE_URL\nfrom ephios.user_management.models import UserProfile\n\n\ndef new_event(event):\n messages = []\n users = UserProfile.objects.filter(\n groups__in=get_groups_with_perms(event, only_with_perms_in=[\"view_event\"]), is_active=True\n ).distinct()\n responsible_users = get_users_with_perms(event, only_with_perms_in=[\"change_event\"]).distinct()\n 
responsible_persons_mails = list(responsible_users.values_list(\"email\", flat=True))\n\n subject = _(\"New {type}: {title}\").format(type=event.type, title=event.title)\n text_content = _(\n \"A new {type} ({title}, {location}) has been added.\\n\"\n \"Further information: {description}\\n\"\n \"You can view the event here: {url}\"\n ).format(\n type=event.type,\n title=event.title,\n location=event.location,\n description=event.description,\n url=urljoin(SITE_URL, event.get_absolute_url()),\n )\n html_content = render_to_string(\n \"event_management/mails/new_event.html\", {\"event\": event, \"site_url\": SITE_URL}\n )\n\n for user in users:\n message = EmailMultiAlternatives(\n to=[user.email], subject=subject, body=text_content, reply_to=responsible_persons_mails\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n mail.get_connection().send_messages(messages)\n\n\ndef participation_state_changed(participation: AbstractParticipation):\n if participation.state != AbstractParticipation.States.USER_DECLINED:\n messages = []\n\n # send mail to the participant whose participation has been changed\n if participation.participant.email is not None:\n text_content = _(\n \"The status for your participation for {shift} has changed. It is now {status}.\"\n ).format(shift=participation.shift, status=participation.get_state_display())\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n message = EmailMultiAlternatives(\n to=[participation.participant.email],\n subject=_(\"Your participation state changed\"),\n body=text_content,\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n # send mail to responsible users\n responsible_users = get_users_with_perms(\n participation.shift.event, only_with_perms_in=[\"change_event\"]\n ).distinct()\n subject = _(\"Participation was changed for your event\")\n text_content = _(\n \"The participation of {participant} for {shift} was changed. 
The status is now {status}\"\n ).format(\n participant=participation.participant,\n shift=participation.shift,\n status=participation.get_state_display(),\n )\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n for user in responsible_users:\n message = EmailMultiAlternatives(to=[user.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n mail.get_connection().send_messages(messages)\n", "path": "ephios/event_management/mail.py"}, {"content": "from urllib.parse import urljoin\n\nfrom django.contrib.auth.tokens import default_token_generator\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\nfrom django.utils.encoding import force_bytes\nfrom django.utils.http import urlsafe_base64_encode\nfrom django.utils.translation import gettext as _\n\nfrom ephios.settings import SITE_URL\n\n\ndef send_account_creation_info(userprofile):\n subject = _(\"Welcome to ephios!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at ephios.\\n\"\n \"Please go to the following page and choose a password: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=urljoin(SITE_URL, reset_link), email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n\n\ndef send_account_update_info(userprofile):\n subject = _(\"ephios account updated\")\n url = reverse(\"user_management:profile\")\n text_content = _(\n \"You're receiving this email because your account at ephios has been updated.\\n\"\n \"You can see the changes in your profile: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=urljoin(SITE_URL, url), email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/account_updated_email.html\",\n {\"site_url\": SITE_URL, \"url\": url, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n", "path": "ephios/user_management/mail.py"}]} | 1,686 | 709 |
gh_patches_debug_33891 | rasdani/github-patches | git_diff | learningequality__kolibri-2704 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'Database is Locked' error when channel importing is running
We get errors in `jonboiser`'s `available-channels` branch. The failure is not reliably reproducible (!), but the general observations are:
- It happens during `startremotechannelimport` or `diskchannelimport`
- The actual error is raised when another API or session call happens
My sense here is that it's caused by the long content import and annotation phase of channel importing.
Possible solutions:
- shorten the length of transactions
- do the expensive work outside of transactions, before committing them
--- END ISSUE ---
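For context, SQLite reports "database is locked" when one connection holds the write lock for longer than other connections are willing to wait, which is what a single long import-and-annotation transaction does. A minimal sketch of the chunked-commit idea from the issue above — the helper name is hypothetical, and it assumes a SQLAlchemy `session` plus the `LocalFile` model shown in the files below:

```python
# Illustrative sketch only (not Kolibri code): commit in small batches so that
# no single transaction holds SQLite's write lock for the whole import.
CHUNK = 1000

def mark_available_in_chunks(session, LocalFile, checksums):
    for start in range(0, len(checksums), CHUNK):
        batch = checksums[start:start + CHUNK]
        session.bulk_update_mappings(
            LocalFile,
            [{"id": checksum, "available": True} for checksum in batch],
        )
        session.commit()  # release the write lock between batches
```

Committing per batch keeps each write transaction short, so concurrent API requests can acquire the lock in between instead of timing out with "database is locked".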
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/content/utils/annotation.py`
Content:
```
1 import logging as logger
2 import os
3
4 from django.conf import settings
5 from kolibri.content.apps import KolibriContentConfig
6 from kolibri.content.models import ChannelMetadata, ContentNode, File, LocalFile
7 from le_utils.constants import content_kinds
8 from sqlalchemy import and_, exists, func, select
9
10 from .channels import get_channel_ids_for_content_database_dir
11 from .paths import get_content_file_name, get_content_storage_file_path
12 from .sqlalchemybridge import Bridge
13
14 logging = logger.getLogger(__name__)
15
16 CONTENT_APP_NAME = KolibriContentConfig.label
17
18 CHUNKSIZE = 10000
19
20 def update_channel_metadata():
21 """
22 If we are potentially moving from a version of Kolibri that did not import its content data,
23 scan through the settings.CONTENT_DATABASE_DIR folder for all channel content databases,
24 and pull the data from each database if we have not already imported it.
25 """
26 from .channel_import import import_channel_from_local_db
27 channel_ids = get_channel_ids_for_content_database_dir(settings.CONTENT_DATABASE_DIR)
28 imported = False
29 for channel_id in channel_ids:
30 if not ChannelMetadata.objects.filter(id=channel_id).exists():
31 import_channel_from_local_db(channel_id)
32 imported = True
33 if imported:
34 set_availability()
35
36
37 def set_leaf_node_availability_from_local_file_availability():
38 bridge = Bridge(app_name=CONTENT_APP_NAME)
39
40 ContentNodeTable = bridge.get_table(ContentNode)
41 FileTable = bridge.get_table(File)
42 LocalFileTable = bridge.get_table(LocalFile)
43
44 connection = bridge.get_connection()
45
46 file_statement = select([LocalFileTable.c.available]).where(
47 FileTable.c.local_file_id == LocalFileTable.c.id,
48 ).limit(1)
49
50 logging.info('Setting availability of File objects based on LocalFile availability')
51
52 connection.execute(FileTable.update().values(available=file_statement).execution_options(autocommit=True))
53
54 contentnode_statement = select([FileTable.c.contentnode_id]).where(
55 and_(
56 FileTable.c.available == True, # noqa
57 FileTable.c.supplementary == False
58 )
59 ).where(ContentNodeTable.c.id == FileTable.c.contentnode_id)
60
61 logging.info('Setting availability of non-topic ContentNode objects based on File availability')
62
63 connection.execute(ContentNodeTable.update().where(
64 ContentNodeTable.c.kind != content_kinds.TOPIC).values(available=exists(contentnode_statement)).execution_options(autocommit=True))
65
66 bridge.end()
67
68 def mark_local_files_as_available(checksums):
69 """
70 Shortcut method to update database if we are sure that the files are available.
71 Can be used after successful downloads to flag availability without having to do expensive disk reads.
72 """
73 bridge = Bridge(app_name=CONTENT_APP_NAME)
74
75 LocalFileClass = bridge.get_class(LocalFile)
76
77 logging.info('Setting availability of {number} LocalFile objects based on passed in checksums'.format(number=len(checksums)))
78
79 for i in range(0, len(checksums), CHUNKSIZE):
80 bridge.session.bulk_update_mappings(LocalFileClass, ({
81 'id': checksum,
82 'available': True
83 } for checksum in checksums[i:i+CHUNKSIZE]))
84 bridge.session.flush()
85
86 bridge.session.commit()
87
88 bridge.end()
89
90 def set_local_file_availability_from_disk(checksums=None):
91 bridge = Bridge(app_name=CONTENT_APP_NAME)
92
93 LocalFileClass = bridge.get_class(LocalFile)
94
95 if checksums is None:
96 logging.info('Setting availability of LocalFile objects based on disk availability')
97 files = bridge.session.query(LocalFileClass).all()
98 elif type(checksums) == list:
99 logging.info('Setting availability of {number} LocalFile objects based on disk availability'.format(number=len(checksums)))
100 files = bridge.session.query(LocalFileClass).filter(LocalFileClass.id.in_(checksums)).all()
101 else:
102 logging.info('Setting availability of LocalFile object with checksum {checksum} based on disk availability'.format(checksum=checksums))
103 files = [bridge.session.query(LocalFileClass).get(checksums)]
104
105 checksums_to_update = [
106 file.id for file in files if os.path.exists(get_content_storage_file_path(get_content_file_name(file)))
107 ]
108
109 bridge.end()
110
111 mark_local_files_as_available(checksums_to_update)
112
113 def recurse_availability_up_tree():
114 bridge = Bridge(app_name=CONTENT_APP_NAME)
115
116 ContentNodeClass = bridge.get_class(ContentNode)
117
118 ContentNodeTable = bridge.get_table(ContentNode)
119
120 connection = bridge.get_connection()
121
122 node_depth = bridge.session.query(func.max(ContentNodeClass.level)).scalar()
123
124 logging.info('Setting availability of ContentNode objects with children for {levels} levels'.format(levels=node_depth))
125
126 child = ContentNodeTable.alias()
127
128 # Go from the deepest level to the shallowest
129 for level in range(node_depth, 0, -1):
130
131 available_nodes = select([child.c.available]).where(
132 and_(
133 child.c.available == True, # noqa
134 child.c.level == level,
135 )
136 ).where(ContentNodeTable.c.id == child.c.parent_id)
137
138 logging.info('Setting availability of ContentNode objects with children for level {level}'.format(level=level))
139 # Only modify topic availability here
140 connection.execute(ContentNodeTable.update().where(
141 ContentNodeTable.c.level == level - 1).where(
142 ContentNodeTable.c.kind == content_kinds.TOPIC).values(available=exists(available_nodes)).execution_options(autocommit=True))
143
144 bridge.end()
145
146 def set_availability(checksums=None):
147 if checksums is None:
148 set_local_file_availability_from_disk()
149 else:
150 mark_local_files_as_available(checksums)
151
152 set_leaf_node_availability_from_local_file_availability()
153 recurse_availability_up_tree()
154
```
Path: `kolibri/content/management/commands/importcontent.py`
Content:
```
1 import os
2
3 from django.conf import settings
4 from django.core.management.base import CommandError
5 from django.db.models import Sum
6 from kolibri.tasks.management.commands.base import AsyncCommand
7 from requests.exceptions import HTTPError
8
9 from ...models import LocalFile
10 from ...utils import annotation, paths, transfer
11
12 # constants to specify the transfer method to be used
13 DOWNLOAD_METHOD = "download"
14 COPY_METHOD = "copy"
15
16
17 class Command(AsyncCommand):
18
19 def add_arguments(self, parser):
20 # let's save the parser in case we need to print a help statement
21 self._parser = parser
22
23 # we want two groups of arguments. One group is when the
24 # 'importcontent disk' command is given, where we'll expect a file
25 # directory to be given. Another is the 'importcontent network'
26 # command to be given, where we'll expect a channel.
27
28 # However, some optional arguments apply to both groups. Add them here!
29 node_ids_help_text = """
30 Specify one or more node IDs to import. Only the files associated to those node IDs will be imported.
31
32 e.g.
33
34 kolibri manage importcontent --node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>
35 """
36 parser.add_argument(
37 "--node_ids", "-n",
38 # Split the comma separated string we get, into a list of strings
39 type=lambda x: x.split(","),
40 default=[],
41 required=False,
42 dest="node_ids",
43 help=node_ids_help_text,
44 )
45
46 exclude_node_ids_help_text = """
47 Specify one or more node IDs to exclude. Files associated to those node IDs will be not be imported.
48
49 e.g.
50
51 kolibri manage importcontent --exclude_node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>
52 """
53 parser.add_argument(
54 "--exclude_node_ids",
55 # Split the comma separated string we get, into a list of string
56 type=lambda x: x.split(","),
57 default=[],
58 required=False,
59 dest="exclude_node_ids",
60 help=exclude_node_ids_help_text
61 )
62
63 # to implement these two groups of commands and their corresponding
64 # arguments, we'll need argparse.subparsers.
65 subparsers = parser.add_subparsers(dest='command', help="The following subcommands are available.")
66
67 # the network command has a channel id required positional argument,
68 # and some optional content_id arguments.
69
70 # TODO: implement a --content-domain parameter, for optionally
71 # specifying the domain for the curation server.
72
73 # Note: cmd should be the management command instance, as though the
74 # interface for adding arguments is argparse, Django overrides the
75 # parser object with its own thing, hence why we need to add cmd. See
76 # http://stackoverflow.com/questions/36706220/is-it-possible-to-create-subparsers-in-a-django-management-command
77 network_subparser = subparsers.add_parser(
78 name='network',
79 cmd=self,
80 help="Download the given channel through the network.",
81 )
82 network_subparser.add_argument('channel_id', type=str)
83
84 default_studio_url = settings.CENTRAL_CONTENT_DOWNLOAD_BASE_URL
85 network_subparser.add_argument(
86 "--baseurl",
87 type=str,
88 default=default_studio_url,
89 dest="baseurl",
90 )
91
92 disk_subparser = subparsers.add_parser(
93 name='disk',
94 cmd=self,
95 help='Copy the content from the given folder.'
96 )
97 disk_subparser.add_argument('channel_id', type=str)
98 disk_subparser.add_argument('directory', type=str)
99
100 def download_content(self, channel_id, node_ids=None, exclude_node_ids=None, baseurl=None):
101 self._transfer(DOWNLOAD_METHOD, channel_id, node_ids=node_ids, exclude_node_ids=exclude_node_ids, baseurl=baseurl)
102
103 def copy_content(self, channel_id, path, node_ids=None, exclude_node_ids=None):
104 self._transfer(COPY_METHOD, channel_id, path=path, node_ids=node_ids, exclude_node_ids=exclude_node_ids)
105
106 def _transfer(self, method, channel_id, path=None, node_ids=None, exclude_node_ids=None, baseurl=None): # noqa: max-complexity=16
107
108 files_to_download = LocalFile.objects.filter(files__contentnode__channel_id=channel_id, available=False)
109
110 if node_ids:
111 files_to_download = files_to_download.filter(files__contentnode__in=node_ids)
112
113 if exclude_node_ids:
114 files_to_download = files_to_download.exclude(files__contentnode__in=exclude_node_ids)
115
116 # Make sure the files are unique, to avoid duplicating downloads
117 files_to_download = files_to_download.distinct()
118
119 total_bytes_to_transfer = files_to_download.aggregate(Sum('file_size'))['file_size__sum'] or 0
120
121 downloaded_files = []
122 file_checksums_to_annotate = []
123
124 with self.start_progress(total=total_bytes_to_transfer) as overall_progress_update:
125
126 for f in files_to_download:
127
128 if self.is_cancelled():
129 break
130
131 filename = f.get_filename()
132 dest = paths.get_content_storage_file_path(filename)
133
134 # if the file already exists, add its size to our overall progress, and skip
135 if os.path.isfile(dest) and os.path.getsize(dest) == f.file_size:
136 overall_progress_update(f.file_size)
137 file_checksums_to_annotate.append(f.id)
138 continue
139
140 # determine where we're downloading/copying from, and create appropriate transfer object
141 if method == DOWNLOAD_METHOD:
142 url = paths.get_content_storage_remote_url(filename, baseurl=baseurl)
143 filetransfer = transfer.FileDownload(url, dest)
144 elif method == COPY_METHOD:
145 srcpath = paths.get_content_storage_file_path(filename, datafolder=path)
146 filetransfer = transfer.FileCopy(srcpath, dest)
147
148 try:
149
150 with filetransfer:
151
152 with self.start_progress(total=filetransfer.total_size) as file_dl_progress_update:
153
154 for chunk in filetransfer:
155 if self.is_cancelled():
156 filetransfer.cancel()
157 break
158 length = len(chunk)
159 overall_progress_update(length)
160 file_dl_progress_update(length)
161 else:
162 # If the for loop didn't break, add this to downloaded files.
163 downloaded_files.append(dest)
164
165 file_checksums_to_annotate.append(f.id)
166
167 except HTTPError:
168 overall_progress_update(f.file_size)
169
170 if self.is_cancelled():
171 # Cancelled, clean up any already downloading files.
172 for dest in downloaded_files:
173 os.remove(dest)
174 self.cancel()
175 else:
176 annotation.set_availability(file_checksums_to_annotate)
177
178 def handle_async(self, *args, **options):
179 if options['command'] == 'network':
180 self.download_content(options["channel_id"],
181 node_ids=options["node_ids"],
182 exclude_node_ids=options['exclude_node_ids'],
183 baseurl=options["baseurl"])
184 elif options['command'] == 'disk':
185 self.copy_content(options["channel_id"],
186 options["directory"],
187 node_ids=options["node_ids"],
188 exclude_node_ids=options["exclude_node_ids"])
189 else:
190 self._parser.print_help()
191 raise CommandError("Please give a valid subcommand. You gave: {}".format(options["command"]))
192
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kolibri/content/management/commands/importcontent.py b/kolibri/content/management/commands/importcontent.py
--- a/kolibri/content/management/commands/importcontent.py
+++ b/kolibri/content/management/commands/importcontent.py
@@ -173,7 +173,7 @@
os.remove(dest)
self.cancel()
else:
- annotation.set_availability(file_checksums_to_annotate)
+ annotation.set_availability(channel_id, file_checksums_to_annotate)
def handle_async(self, *args, **options):
if options['command'] == 'network':
diff --git a/kolibri/content/utils/annotation.py b/kolibri/content/utils/annotation.py
--- a/kolibri/content/utils/annotation.py
+++ b/kolibri/content/utils/annotation.py
@@ -110,7 +110,7 @@
mark_local_files_as_available(checksums_to_update)
-def recurse_availability_up_tree():
+def recurse_availability_up_tree(channel_id):
bridge = Bridge(app_name=CONTENT_APP_NAME)
ContentNodeClass = bridge.get_class(ContentNode)
@@ -138,16 +138,18 @@
logging.info('Setting availability of ContentNode objects with children for level {level}'.format(level=level))
# Only modify topic availability here
connection.execute(ContentNodeTable.update().where(
- ContentNodeTable.c.level == level - 1).where(
- ContentNodeTable.c.kind == content_kinds.TOPIC).values(available=exists(available_nodes)).execution_options(autocommit=True))
+ and_(
+ ContentNodeTable.c.level == level - 1,
+ ContentNodeTable.c.channel_id == channel_id,
+ ContentNodeTable.c.kind == content_kinds.TOPIC)).values(available=exists(available_nodes)).execution_options(autocommit=True))
bridge.end()
-def set_availability(checksums=None):
+def set_availability(channel_id, checksums=None):
if checksums is None:
set_local_file_availability_from_disk()
else:
mark_local_files_as_available(checksums)
set_leaf_node_availability_from_local_file_availability()
- recurse_availability_up_tree()
+ recurse_availability_up_tree(channel_id)
| {"golden_diff": "diff --git a/kolibri/content/management/commands/importcontent.py b/kolibri/content/management/commands/importcontent.py\n--- a/kolibri/content/management/commands/importcontent.py\n+++ b/kolibri/content/management/commands/importcontent.py\n@@ -173,7 +173,7 @@\n os.remove(dest)\n self.cancel()\n else:\n- annotation.set_availability(file_checksums_to_annotate)\n+ annotation.set_availability(channel_id, file_checksums_to_annotate)\n \n def handle_async(self, *args, **options):\n if options['command'] == 'network':\ndiff --git a/kolibri/content/utils/annotation.py b/kolibri/content/utils/annotation.py\n--- a/kolibri/content/utils/annotation.py\n+++ b/kolibri/content/utils/annotation.py\n@@ -110,7 +110,7 @@\n \n mark_local_files_as_available(checksums_to_update)\n \n-def recurse_availability_up_tree():\n+def recurse_availability_up_tree(channel_id):\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n \n ContentNodeClass = bridge.get_class(ContentNode)\n@@ -138,16 +138,18 @@\n logging.info('Setting availability of ContentNode objects with children for level {level}'.format(level=level))\n # Only modify topic availability here\n connection.execute(ContentNodeTable.update().where(\n- ContentNodeTable.c.level == level - 1).where(\n- ContentNodeTable.c.kind == content_kinds.TOPIC).values(available=exists(available_nodes)).execution_options(autocommit=True))\n+ and_(\n+ ContentNodeTable.c.level == level - 1,\n+ ContentNodeTable.c.channel_id == channel_id,\n+ ContentNodeTable.c.kind == content_kinds.TOPIC)).values(available=exists(available_nodes)).execution_options(autocommit=True))\n \n bridge.end()\n \n-def set_availability(checksums=None):\n+def set_availability(channel_id, checksums=None):\n if checksums is None:\n set_local_file_availability_from_disk()\n else:\n mark_local_files_as_available(checksums)\n \n set_leaf_node_availability_from_local_file_availability()\n- recurse_availability_up_tree()\n+ recurse_availability_up_tree(channel_id)\n", "issue": "'Database is Locked' error when channel importing is running\nWe get errors in `jonboiser`'s `available-channels` branch. 
It's not reliably replicating (!), but the general observations are:\r\n\r\n- It happens during `startremotechannelimport` or `diskchannelimport`\r\n- The actual error is raised when another API or session call happens\r\n\r\nMy sense here is that it's caused by the long content import and annotation phase of channel importing.\r\n\r\nPossible solutions:\r\n- shorten length of transactions\r\n- do stuff outside of transactions, before committing them\n", "before_files": [{"content": "import logging as logger\nimport os\n\nfrom django.conf import settings\nfrom kolibri.content.apps import KolibriContentConfig\nfrom kolibri.content.models import ChannelMetadata, ContentNode, File, LocalFile\nfrom le_utils.constants import content_kinds\nfrom sqlalchemy import and_, exists, func, select\n\nfrom .channels import get_channel_ids_for_content_database_dir\nfrom .paths import get_content_file_name, get_content_storage_file_path\nfrom .sqlalchemybridge import Bridge\n\nlogging = logger.getLogger(__name__)\n\nCONTENT_APP_NAME = KolibriContentConfig.label\n\nCHUNKSIZE = 10000\n\ndef update_channel_metadata():\n \"\"\"\n If we are potentially moving from a version of Kolibri that did not import its content data,\n scan through the settings.CONTENT_DATABASE_DIR folder for all channel content databases,\n and pull the data from each database if we have not already imported it.\n \"\"\"\n from .channel_import import import_channel_from_local_db\n channel_ids = get_channel_ids_for_content_database_dir(settings.CONTENT_DATABASE_DIR)\n imported = False\n for channel_id in channel_ids:\n if not ChannelMetadata.objects.filter(id=channel_id).exists():\n import_channel_from_local_db(channel_id)\n imported = True\n if imported:\n set_availability()\n\n\ndef set_leaf_node_availability_from_local_file_availability():\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n ContentNodeTable = bridge.get_table(ContentNode)\n FileTable = bridge.get_table(File)\n LocalFileTable = bridge.get_table(LocalFile)\n\n connection = bridge.get_connection()\n\n file_statement = select([LocalFileTable.c.available]).where(\n FileTable.c.local_file_id == LocalFileTable.c.id,\n ).limit(1)\n\n logging.info('Setting availability of File objects based on LocalFile availability')\n\n connection.execute(FileTable.update().values(available=file_statement).execution_options(autocommit=True))\n\n contentnode_statement = select([FileTable.c.contentnode_id]).where(\n and_(\n FileTable.c.available == True, # noqa\n FileTable.c.supplementary == False\n )\n ).where(ContentNodeTable.c.id == FileTable.c.contentnode_id)\n\n logging.info('Setting availability of non-topic ContentNode objects based on File availability')\n\n connection.execute(ContentNodeTable.update().where(\n ContentNodeTable.c.kind != content_kinds.TOPIC).values(available=exists(contentnode_statement)).execution_options(autocommit=True))\n\n bridge.end()\n\ndef mark_local_files_as_available(checksums):\n \"\"\"\n Shortcut method to update database if we are sure that the files are available.\n Can be used after successful downloads to flag availability without having to do expensive disk reads.\n \"\"\"\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n LocalFileClass = bridge.get_class(LocalFile)\n\n logging.info('Setting availability of {number} LocalFile objects based on passed in checksums'.format(number=len(checksums)))\n\n for i in range(0, len(checksums), CHUNKSIZE):\n bridge.session.bulk_update_mappings(LocalFileClass, ({\n 'id': checksum,\n 'available': True\n } for checksum in 
checksums[i:i+CHUNKSIZE]))\n bridge.session.flush()\n\n bridge.session.commit()\n\n bridge.end()\n\ndef set_local_file_availability_from_disk(checksums=None):\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n LocalFileClass = bridge.get_class(LocalFile)\n\n if checksums is None:\n logging.info('Setting availability of LocalFile objects based on disk availability')\n files = bridge.session.query(LocalFileClass).all()\n elif type(checksums) == list:\n logging.info('Setting availability of {number} LocalFile objects based on disk availability'.format(number=len(checksums)))\n files = bridge.session.query(LocalFileClass).filter(LocalFileClass.id.in_(checksums)).all()\n else:\n logging.info('Setting availability of LocalFile object with checksum {checksum} based on disk availability'.format(checksum=checksums))\n files = [bridge.session.query(LocalFileClass).get(checksums)]\n\n checksums_to_update = [\n file.id for file in files if os.path.exists(get_content_storage_file_path(get_content_file_name(file)))\n ]\n\n bridge.end()\n\n mark_local_files_as_available(checksums_to_update)\n\ndef recurse_availability_up_tree():\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n ContentNodeClass = bridge.get_class(ContentNode)\n\n ContentNodeTable = bridge.get_table(ContentNode)\n\n connection = bridge.get_connection()\n\n node_depth = bridge.session.query(func.max(ContentNodeClass.level)).scalar()\n\n logging.info('Setting availability of ContentNode objects with children for {levels} levels'.format(levels=node_depth))\n\n child = ContentNodeTable.alias()\n\n # Go from the deepest level to the shallowest\n for level in range(node_depth, 0, -1):\n\n available_nodes = select([child.c.available]).where(\n and_(\n child.c.available == True, # noqa\n child.c.level == level,\n )\n ).where(ContentNodeTable.c.id == child.c.parent_id)\n\n logging.info('Setting availability of ContentNode objects with children for level {level}'.format(level=level))\n # Only modify topic availability here\n connection.execute(ContentNodeTable.update().where(\n ContentNodeTable.c.level == level - 1).where(\n ContentNodeTable.c.kind == content_kinds.TOPIC).values(available=exists(available_nodes)).execution_options(autocommit=True))\n\n bridge.end()\n\ndef set_availability(checksums=None):\n if checksums is None:\n set_local_file_availability_from_disk()\n else:\n mark_local_files_as_available(checksums)\n\n set_leaf_node_availability_from_local_file_availability()\n recurse_availability_up_tree()\n", "path": "kolibri/content/utils/annotation.py"}, {"content": "import os\n\nfrom django.conf import settings\nfrom django.core.management.base import CommandError\nfrom django.db.models import Sum\nfrom kolibri.tasks.management.commands.base import AsyncCommand\nfrom requests.exceptions import HTTPError\n\nfrom ...models import LocalFile\nfrom ...utils import annotation, paths, transfer\n\n# constants to specify the transfer method to be used\nDOWNLOAD_METHOD = \"download\"\nCOPY_METHOD = \"copy\"\n\n\nclass Command(AsyncCommand):\n\n def add_arguments(self, parser):\n # let's save the parser in case we need to print a help statement\n self._parser = parser\n\n # we want two groups of arguments. One group is when the\n # 'importcontent disk' command is given, where we'll expect a file\n # directory to be given. Another is the 'importcontent network'\n # command to be given, where we'll expect a channel.\n\n # However, some optional arguments apply to both groups. 
Add them here!\n node_ids_help_text = \"\"\"\n Specify one or more node IDs to import. Only the files associated to those node IDs will be imported.\n\n e.g.\n\n kolibri manage importcontent --node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>\n \"\"\"\n parser.add_argument(\n \"--node_ids\", \"-n\",\n # Split the comma separated string we get, into a list of strings\n type=lambda x: x.split(\",\"),\n default=[],\n required=False,\n dest=\"node_ids\",\n help=node_ids_help_text,\n )\n\n exclude_node_ids_help_text = \"\"\"\n Specify one or more node IDs to exclude. Files associated to those node IDs will be not be imported.\n\n e.g.\n\n kolibri manage importcontent --exclude_node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>\n \"\"\"\n parser.add_argument(\n \"--exclude_node_ids\",\n # Split the comma separated string we get, into a list of string\n type=lambda x: x.split(\",\"),\n default=[],\n required=False,\n dest=\"exclude_node_ids\",\n help=exclude_node_ids_help_text\n )\n\n # to implement these two groups of commands and their corresponding\n # arguments, we'll need argparse.subparsers.\n subparsers = parser.add_subparsers(dest='command', help=\"The following subcommands are available.\")\n\n # the network command has a channel id required positional argument,\n # and some optional content_id arguments.\n\n # TODO: implement a --content-domain parameter, for optionally\n # specifying the domain for the curation server.\n\n # Note: cmd should be the management command instance, as though the\n # interface for adding arguments is argparse, Django overrides the\n # parser object with its own thing, hence why we need to add cmd. See\n # http://stackoverflow.com/questions/36706220/is-it-possible-to-create-subparsers-in-a-django-management-command\n network_subparser = subparsers.add_parser(\n name='network',\n cmd=self,\n help=\"Download the given channel through the network.\",\n )\n network_subparser.add_argument('channel_id', type=str)\n\n default_studio_url = settings.CENTRAL_CONTENT_DOWNLOAD_BASE_URL\n network_subparser.add_argument(\n \"--baseurl\",\n type=str,\n default=default_studio_url,\n dest=\"baseurl\",\n )\n\n disk_subparser = subparsers.add_parser(\n name='disk',\n cmd=self,\n help='Copy the content from the given folder.'\n )\n disk_subparser.add_argument('channel_id', type=str)\n disk_subparser.add_argument('directory', type=str)\n\n def download_content(self, channel_id, node_ids=None, exclude_node_ids=None, baseurl=None):\n self._transfer(DOWNLOAD_METHOD, channel_id, node_ids=node_ids, exclude_node_ids=exclude_node_ids, baseurl=baseurl)\n\n def copy_content(self, channel_id, path, node_ids=None, exclude_node_ids=None):\n self._transfer(COPY_METHOD, channel_id, path=path, node_ids=node_ids, exclude_node_ids=exclude_node_ids)\n\n def _transfer(self, method, channel_id, path=None, node_ids=None, exclude_node_ids=None, baseurl=None): # noqa: max-complexity=16\n\n files_to_download = LocalFile.objects.filter(files__contentnode__channel_id=channel_id, available=False)\n\n if node_ids:\n files_to_download = files_to_download.filter(files__contentnode__in=node_ids)\n\n if exclude_node_ids:\n files_to_download = files_to_download.exclude(files__contentnode__in=exclude_node_ids)\n\n # Make sure the files are unique, to avoid duplicating downloads\n files_to_download = files_to_download.distinct()\n\n total_bytes_to_transfer = files_to_download.aggregate(Sum('file_size'))['file_size__sum'] or 0\n\n downloaded_files = []\n file_checksums_to_annotate = []\n\n 
with self.start_progress(total=total_bytes_to_transfer) as overall_progress_update:\n\n for f in files_to_download:\n\n if self.is_cancelled():\n break\n\n filename = f.get_filename()\n dest = paths.get_content_storage_file_path(filename)\n\n # if the file already exists, add its size to our overall progress, and skip\n if os.path.isfile(dest) and os.path.getsize(dest) == f.file_size:\n overall_progress_update(f.file_size)\n file_checksums_to_annotate.append(f.id)\n continue\n\n # determine where we're downloading/copying from, and create appropriate transfer object\n if method == DOWNLOAD_METHOD:\n url = paths.get_content_storage_remote_url(filename, baseurl=baseurl)\n filetransfer = transfer.FileDownload(url, dest)\n elif method == COPY_METHOD:\n srcpath = paths.get_content_storage_file_path(filename, datafolder=path)\n filetransfer = transfer.FileCopy(srcpath, dest)\n\n try:\n\n with filetransfer:\n\n with self.start_progress(total=filetransfer.total_size) as file_dl_progress_update:\n\n for chunk in filetransfer:\n if self.is_cancelled():\n filetransfer.cancel()\n break\n length = len(chunk)\n overall_progress_update(length)\n file_dl_progress_update(length)\n else:\n # If the for loop didn't break, add this to downloaded files.\n downloaded_files.append(dest)\n\n file_checksums_to_annotate.append(f.id)\n\n except HTTPError:\n overall_progress_update(f.file_size)\n\n if self.is_cancelled():\n # Cancelled, clean up any already downloading files.\n for dest in downloaded_files:\n os.remove(dest)\n self.cancel()\n else:\n annotation.set_availability(file_checksums_to_annotate)\n\n def handle_async(self, *args, **options):\n if options['command'] == 'network':\n self.download_content(options[\"channel_id\"],\n node_ids=options[\"node_ids\"],\n exclude_node_ids=options['exclude_node_ids'],\n baseurl=options[\"baseurl\"])\n elif options['command'] == 'disk':\n self.copy_content(options[\"channel_id\"],\n options[\"directory\"],\n node_ids=options[\"node_ids\"],\n exclude_node_ids=options[\"exclude_node_ids\"])\n else:\n self._parser.print_help()\n raise CommandError(\"Please give a valid subcommand. 
You gave: {}\".format(options[\"command\"]))\n", "path": "kolibri/content/management/commands/importcontent.py"}], "after_files": [{"content": "import logging as logger\nimport os\n\nfrom django.conf import settings\nfrom kolibri.content.apps import KolibriContentConfig\nfrom kolibri.content.models import ChannelMetadata, ContentNode, File, LocalFile\nfrom le_utils.constants import content_kinds\nfrom sqlalchemy import and_, exists, func, select\n\nfrom .channels import get_channel_ids_for_content_database_dir\nfrom .paths import get_content_file_name, get_content_storage_file_path\nfrom .sqlalchemybridge import Bridge\n\nlogging = logger.getLogger(__name__)\n\nCONTENT_APP_NAME = KolibriContentConfig.label\n\nCHUNKSIZE = 10000\n\ndef update_channel_metadata():\n \"\"\"\n If we are potentially moving from a version of Kolibri that did not import its content data,\n scan through the settings.CONTENT_DATABASE_DIR folder for all channel content databases,\n and pull the data from each database if we have not already imported it.\n \"\"\"\n from .channel_import import import_channel_from_local_db\n channel_ids = get_channel_ids_for_content_database_dir(settings.CONTENT_DATABASE_DIR)\n imported = False\n for channel_id in channel_ids:\n if not ChannelMetadata.objects.filter(id=channel_id).exists():\n import_channel_from_local_db(channel_id)\n imported = True\n if imported:\n set_availability()\n\n\ndef set_leaf_node_availability_from_local_file_availability():\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n ContentNodeTable = bridge.get_table(ContentNode)\n FileTable = bridge.get_table(File)\n LocalFileTable = bridge.get_table(LocalFile)\n\n connection = bridge.get_connection()\n\n file_statement = select([LocalFileTable.c.available]).where(\n FileTable.c.local_file_id == LocalFileTable.c.id,\n ).limit(1)\n\n logging.info('Setting availability of File objects based on LocalFile availability')\n\n connection.execute(FileTable.update().values(available=file_statement).execution_options(autocommit=True))\n\n contentnode_statement = select([FileTable.c.contentnode_id]).where(\n and_(\n FileTable.c.available == True, # noqa\n FileTable.c.supplementary == False\n )\n ).where(ContentNodeTable.c.id == FileTable.c.contentnode_id)\n\n logging.info('Setting availability of non-topic ContentNode objects based on File availability')\n\n connection.execute(ContentNodeTable.update().where(\n ContentNodeTable.c.kind != content_kinds.TOPIC).values(available=exists(contentnode_statement)).execution_options(autocommit=True))\n\n bridge.end()\n\ndef mark_local_files_as_available(checksums):\n \"\"\"\n Shortcut method to update database if we are sure that the files are available.\n Can be used after successful downloads to flag availability without having to do expensive disk reads.\n \"\"\"\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n LocalFileClass = bridge.get_class(LocalFile)\n\n logging.info('Setting availability of {number} LocalFile objects based on passed in checksums'.format(number=len(checksums)))\n\n for i in range(0, len(checksums), CHUNKSIZE):\n bridge.session.bulk_update_mappings(LocalFileClass, ({\n 'id': checksum,\n 'available': True\n } for checksum in checksums[i:i+CHUNKSIZE]))\n bridge.session.flush()\n\n bridge.session.commit()\n\n bridge.end()\n\ndef set_local_file_availability_from_disk(checksums=None):\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n LocalFileClass = bridge.get_class(LocalFile)\n\n if checksums is None:\n logging.info('Setting availability of LocalFile objects based on 
disk availability')\n files = bridge.session.query(LocalFileClass).all()\n elif type(checksums) == list:\n logging.info('Setting availability of {number} LocalFile objects based on disk availability'.format(number=len(checksums)))\n files = bridge.session.query(LocalFileClass).filter(LocalFileClass.id.in_(checksums)).all()\n else:\n logging.info('Setting availability of LocalFile object with checksum {checksum} based on disk availability'.format(checksum=checksums))\n files = [bridge.session.query(LocalFileClass).get(checksums)]\n\n checksums_to_update = [\n file.id for file in files if os.path.exists(get_content_storage_file_path(get_content_file_name(file)))\n ]\n\n bridge.end()\n\n mark_local_files_as_available(checksums_to_update)\n\ndef recurse_availability_up_tree(channel_id):\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n ContentNodeClass = bridge.get_class(ContentNode)\n\n ContentNodeTable = bridge.get_table(ContentNode)\n\n connection = bridge.get_connection()\n\n node_depth = bridge.session.query(func.max(ContentNodeClass.level)).scalar()\n\n logging.info('Setting availability of ContentNode objects with children for {levels} levels'.format(levels=node_depth))\n\n child = ContentNodeTable.alias()\n\n # Go from the deepest level to the shallowest\n for level in range(node_depth, 0, -1):\n\n available_nodes = select([child.c.available]).where(\n and_(\n child.c.available == True, # noqa\n child.c.level == level,\n )\n ).where(ContentNodeTable.c.id == child.c.parent_id)\n\n logging.info('Setting availability of ContentNode objects with children for level {level}'.format(level=level))\n # Only modify topic availability here\n connection.execute(ContentNodeTable.update().where(\n and_(\n ContentNodeTable.c.level == level - 1,\n ContentNodeTable.c.channel_id == channel_id,\n ContentNodeTable.c.kind == content_kinds.TOPIC)).values(available=exists(available_nodes)).execution_options(autocommit=True))\n\n bridge.end()\n\ndef set_availability(channel_id, checksums=None):\n if checksums is None:\n set_local_file_availability_from_disk()\n else:\n mark_local_files_as_available(checksums)\n\n set_leaf_node_availability_from_local_file_availability()\n recurse_availability_up_tree(channel_id)\n", "path": "kolibri/content/utils/annotation.py"}, {"content": "import os\n\nfrom django.conf import settings\nfrom django.core.management.base import CommandError\nfrom django.db.models import Sum\nfrom kolibri.tasks.management.commands.base import AsyncCommand\nfrom requests.exceptions import HTTPError\n\nfrom ...models import LocalFile\nfrom ...utils import annotation, paths, transfer\n\n# constants to specify the transfer method to be used\nDOWNLOAD_METHOD = \"download\"\nCOPY_METHOD = \"copy\"\n\n\nclass Command(AsyncCommand):\n\n def add_arguments(self, parser):\n # let's save the parser in case we need to print a help statement\n self._parser = parser\n\n # we want two groups of arguments. One group is when the\n # 'importcontent disk' command is given, where we'll expect a file\n # directory to be given. Another is the 'importcontent network'\n # command to be given, where we'll expect a channel.\n\n # However, some optional arguments apply to both groups. Add them here!\n node_ids_help_text = \"\"\"\n Specify one or more node IDs to import. Only the files associated to those node IDs will be imported.\n\n e.g.\n\n kolibri manage importcontent --node_ids <id1>,<id2>, [<ids>,...] 
{network, disk} <channel id>\n \"\"\"\n parser.add_argument(\n \"--node_ids\", \"-n\",\n # Split the comma separated string we get, into a list of strings\n type=lambda x: x.split(\",\"),\n default=[],\n required=False,\n dest=\"node_ids\",\n help=node_ids_help_text,\n )\n\n exclude_node_ids_help_text = \"\"\"\n Specify one or more node IDs to exclude. Files associated to those node IDs will be not be imported.\n\n e.g.\n\n kolibri manage importcontent --exclude_node_ids <id1>,<id2>, [<ids>,...] {network, disk} <channel id>\n \"\"\"\n parser.add_argument(\n \"--exclude_node_ids\",\n # Split the comma separated string we get, into a list of string\n type=lambda x: x.split(\",\"),\n default=[],\n required=False,\n dest=\"exclude_node_ids\",\n help=exclude_node_ids_help_text\n )\n\n # to implement these two groups of commands and their corresponding\n # arguments, we'll need argparse.subparsers.\n subparsers = parser.add_subparsers(dest='command', help=\"The following subcommands are available.\")\n\n # the network command has a channel id required positional argument,\n # and some optional content_id arguments.\n\n # TODO: implement a --content-domain parameter, for optionally\n # specifying the domain for the curation server.\n\n # Note: cmd should be the management command instance, as though the\n # interface for adding arguments is argparse, Django overrides the\n # parser object with its own thing, hence why we need to add cmd. See\n # http://stackoverflow.com/questions/36706220/is-it-possible-to-create-subparsers-in-a-django-management-command\n network_subparser = subparsers.add_parser(\n name='network',\n cmd=self,\n help=\"Download the given channel through the network.\",\n )\n network_subparser.add_argument('channel_id', type=str)\n\n default_studio_url = settings.CENTRAL_CONTENT_DOWNLOAD_BASE_URL\n network_subparser.add_argument(\n \"--baseurl\",\n type=str,\n default=default_studio_url,\n dest=\"baseurl\",\n )\n\n disk_subparser = subparsers.add_parser(\n name='disk',\n cmd=self,\n help='Copy the content from the given folder.'\n )\n disk_subparser.add_argument('channel_id', type=str)\n disk_subparser.add_argument('directory', type=str)\n\n def download_content(self, channel_id, node_ids=None, exclude_node_ids=None, baseurl=None):\n self._transfer(DOWNLOAD_METHOD, channel_id, node_ids=node_ids, exclude_node_ids=exclude_node_ids, baseurl=baseurl)\n\n def copy_content(self, channel_id, path, node_ids=None, exclude_node_ids=None):\n self._transfer(COPY_METHOD, channel_id, path=path, node_ids=node_ids, exclude_node_ids=exclude_node_ids)\n\n def _transfer(self, method, channel_id, path=None, node_ids=None, exclude_node_ids=None, baseurl=None): # noqa: max-complexity=16\n\n files_to_download = LocalFile.objects.filter(files__contentnode__channel_id=channel_id, available=False)\n\n if node_ids:\n files_to_download = files_to_download.filter(files__contentnode__in=node_ids)\n\n if exclude_node_ids:\n files_to_download = files_to_download.exclude(files__contentnode__in=exclude_node_ids)\n\n # Make sure the files are unique, to avoid duplicating downloads\n files_to_download = files_to_download.distinct()\n\n total_bytes_to_transfer = files_to_download.aggregate(Sum('file_size'))['file_size__sum'] or 0\n\n downloaded_files = []\n file_checksums_to_annotate = []\n\n with self.start_progress(total=total_bytes_to_transfer) as overall_progress_update:\n\n for f in files_to_download:\n\n if self.is_cancelled():\n break\n\n filename = f.get_filename()\n dest = 
paths.get_content_storage_file_path(filename)\n\n # if the file already exists, add its size to our overall progress, and skip\n if os.path.isfile(dest) and os.path.getsize(dest) == f.file_size:\n overall_progress_update(f.file_size)\n file_checksums_to_annotate.append(f.id)\n continue\n\n # determine where we're downloading/copying from, and create appropriate transfer object\n if method == DOWNLOAD_METHOD:\n url = paths.get_content_storage_remote_url(filename, baseurl=baseurl)\n filetransfer = transfer.FileDownload(url, dest)\n elif method == COPY_METHOD:\n srcpath = paths.get_content_storage_file_path(filename, datafolder=path)\n filetransfer = transfer.FileCopy(srcpath, dest)\n\n try:\n\n with filetransfer:\n\n with self.start_progress(total=filetransfer.total_size) as file_dl_progress_update:\n\n for chunk in filetransfer:\n if self.is_cancelled():\n filetransfer.cancel()\n break\n length = len(chunk)\n overall_progress_update(length)\n file_dl_progress_update(length)\n else:\n # If the for loop didn't break, add this to downloaded files.\n downloaded_files.append(dest)\n\n file_checksums_to_annotate.append(f.id)\n\n except HTTPError:\n overall_progress_update(f.file_size)\n\n if self.is_cancelled():\n # Cancelled, clean up any already downloading files.\n for dest in downloaded_files:\n os.remove(dest)\n self.cancel()\n else:\n annotation.set_availability(channel_id, file_checksums_to_annotate)\n\n def handle_async(self, *args, **options):\n if options['command'] == 'network':\n self.download_content(options[\"channel_id\"],\n node_ids=options[\"node_ids\"],\n exclude_node_ids=options['exclude_node_ids'],\n baseurl=options[\"baseurl\"])\n elif options['command'] == 'disk':\n self.copy_content(options[\"channel_id\"],\n options[\"directory\"],\n node_ids=options[\"node_ids\"],\n exclude_node_ids=options[\"exclude_node_ids\"])\n else:\n self._parser.print_help()\n raise CommandError(\"Please give a valid subcommand. You gave: {}\".format(options[\"command\"]))\n", "path": "kolibri/content/management/commands/importcontent.py"}]} | 4,059 | 496 |
gh_patches_debug_535 | rasdani/github-patches | git_diff | neptune-ai__neptune-client-155 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
create_experiment() fails on Windows 10
Hi there,
I enjoy neptune very much, and on my MacBook everything works fine. But when I run the same code on my Windows 10 machine, I get an error when calling `create_experiment()`.
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\neptune\__init__.py", line 177, in create_experiment
    notebook_id=notebook_id
  File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\neptune\projects.py", line 400, in create_experiment
    click.echo(str(experiment.id))
  File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\utils.py", line 218, in echo
    file = _default_text_stdout()
  File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\_compat.py", line 675, in func
    rv = wrapper_func()
  File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\_compat.py", line 436, in get_text_stdout
    rv = _get_windows_console_stream(sys.stdout, encoding, errors)
  File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\_winconsole.py", line 295, in _get_windows_console_stream
    func = _stream_factories.get(f.fileno())
AttributeError: 'StdOutWithUpload' object has no attribute 'fileno'
```
It happens when I run:
```python
import neptune
import cfg

neptune.init(api_token=cfg.neptune_token, project_qualified_name=cfg.neptune_project_name)
neptune.create_experiment()
```
I run it inside a conda environment on both machines.
--- END ISSUE ---
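For background, click's Windows console handling looks up `sys.stdout.fileno()` (see the `_winconsole.py` frame in the traceback) to decide which console stream to use, so any proxy object installed over `sys.stdout` has to expose a working `fileno()`. A minimal, self-contained illustration — `StdOutProxy` is a hypothetical stand-in, not neptune's actual class:

```python
import sys


class StdOutProxy:
    """Hypothetical stdout wrapper; forwards everything to the real stream."""

    def __init__(self, stream):
        self._stream = stream

    def write(self, data):
        self._stream.write(data)

    def flush(self):
        self._stream.flush()

    def fileno(self):
        # click's _get_windows_console_stream() calls this on Windows; a wrapper
        # that omits it raises AttributeError exactly as in the traceback above.
        return self._stream.fileno()


sys.stdout = StdOutProxy(sys.__stdout__)
print("sys.stdout still resolves to the real console handle:", sys.stdout.fileno())
```

Any wrapper that proxies `write`/`flush` but not `fileno` will trip the same `AttributeError` on Windows.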
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `neptune/internal/streams/stdstream_uploader.py`
Content:
```
1 #
2 # Copyright (c) 2019, Neptune Labs Sp. z o.o.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16 import sys
17
18 from neptune.internal.channels.channels import ChannelNamespace
19 from neptune.internal.streams.channel_writer import ChannelWriter
20
21
22 class StdStreamWithUpload(object):
23
24 def __init__(self, experiment, channel_name, stream):
25 # pylint:disable=protected-access
26 self._channel = experiment._get_channel(channel_name, 'text', ChannelNamespace.SYSTEM)
27 self._channel_writer = ChannelWriter(experiment, channel_name, ChannelNamespace.SYSTEM)
28 self._stream = stream
29
30 def write(self, data):
31 self._stream.write(data)
32 try:
33 self._channel_writer.write(data)
34 # pylint:disable=bare-except
35 except:
36 pass
37
38 def isatty(self):
39 return hasattr(self._stream, 'isatty') and self._stream.isatty()
40
41 def flush(self):
42 self._stream.flush()
43
44
45 class StdOutWithUpload(StdStreamWithUpload):
46
47 def __init__(self, experiment):
48 super(StdOutWithUpload, self).__init__(experiment, 'stdout', sys.__stdout__)
49 sys.stdout = self
50
51 def close(self):
52 sys.stdout = sys.__stdout__
53
54
55 class StdErrWithUpload(StdStreamWithUpload):
56
57 def __init__(self, experiment):
58 super(StdErrWithUpload, self).__init__(experiment, 'stderr', sys.__stderr__)
59 sys.stderr = self
60
61 def close(self):
62 sys.stderr = sys.__stderr__
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/neptune/internal/streams/stdstream_uploader.py b/neptune/internal/streams/stdstream_uploader.py
--- a/neptune/internal/streams/stdstream_uploader.py
+++ b/neptune/internal/streams/stdstream_uploader.py
@@ -41,6 +41,9 @@
def flush(self):
self._stream.flush()
+ def fileno(self):
+ return self._stream.fileno()
+
class StdOutWithUpload(StdStreamWithUpload):
| {"golden_diff": "diff --git a/neptune/internal/streams/stdstream_uploader.py b/neptune/internal/streams/stdstream_uploader.py\n--- a/neptune/internal/streams/stdstream_uploader.py\n+++ b/neptune/internal/streams/stdstream_uploader.py\n@@ -41,6 +41,9 @@\n def flush(self):\n self._stream.flush()\n \n+ def fileno(self):\n+ return self._stream.fileno()\n+\n \n class StdOutWithUpload(StdStreamWithUpload):\n", "issue": "create_experiment() fails on windows 10\nHi there, \r\n\r\nI enjoy neptune very much and on my macbook everything works fine. But when I run the same code on my Windows 10 machine, I get an error when calling create_experiment().\r\n\r\n`Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\neptune\\__init__.py\", line 177, in create_experiment\r\n notebook_id=notebook_id\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\neptune\\projects.py\", line 400, in create_experiment\r\n click.echo(str(experiment.id))\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\utils.py\", line 218, in echo\r\n file = _default_text_stdout()\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\_compat.py\", line 675, in func\r\n rv = wrapper_func()\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\_compat.py\", line 436, in get_text_stdout\r\n rv = _get_windows_console_stream(sys.stdout, encoding, errors)\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\_winconsole.py\", line 295, in _get_windows_console_stream\r\n func = _stream_factories.get(f.fileno())\r\nAttributeError: 'StdOutWithUpload' object has no attribute 'fileno'`\r\n\r\nIt happens when I run:\r\n\r\n`import neptune `\r\n`import cfg`\r\n`neptune.init(api_token=cfg.neptune_token, project_qualified_name=cfg.neptune_project_name) `\r\n`neptune.create_experiment()`\r\n\r\nI run it in conda environments both times.\r\n\n", "before_files": [{"content": "#\n# Copyright (c) 2019, Neptune Labs Sp. 
z o.o.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport sys\n\nfrom neptune.internal.channels.channels import ChannelNamespace\nfrom neptune.internal.streams.channel_writer import ChannelWriter\n\n\nclass StdStreamWithUpload(object):\n\n def __init__(self, experiment, channel_name, stream):\n # pylint:disable=protected-access\n self._channel = experiment._get_channel(channel_name, 'text', ChannelNamespace.SYSTEM)\n self._channel_writer = ChannelWriter(experiment, channel_name, ChannelNamespace.SYSTEM)\n self._stream = stream\n\n def write(self, data):\n self._stream.write(data)\n try:\n self._channel_writer.write(data)\n # pylint:disable=bare-except\n except:\n pass\n\n def isatty(self):\n return hasattr(self._stream, 'isatty') and self._stream.isatty()\n\n def flush(self):\n self._stream.flush()\n\n\nclass StdOutWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdOutWithUpload, self).__init__(experiment, 'stdout', sys.__stdout__)\n sys.stdout = self\n\n def close(self):\n sys.stdout = sys.__stdout__\n\n\nclass StdErrWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdErrWithUpload, self).__init__(experiment, 'stderr', sys.__stderr__)\n sys.stderr = self\n\n def close(self):\n sys.stderr = sys.__stderr__\n", "path": "neptune/internal/streams/stdstream_uploader.py"}], "after_files": [{"content": "#\n# Copyright (c) 2019, Neptune Labs Sp. 
z o.o.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport sys\n\nfrom neptune.internal.channels.channels import ChannelNamespace\nfrom neptune.internal.streams.channel_writer import ChannelWriter\n\n\nclass StdStreamWithUpload(object):\n\n def __init__(self, experiment, channel_name, stream):\n # pylint:disable=protected-access\n self._channel = experiment._get_channel(channel_name, 'text', ChannelNamespace.SYSTEM)\n self._channel_writer = ChannelWriter(experiment, channel_name, ChannelNamespace.SYSTEM)\n self._stream = stream\n\n def write(self, data):\n self._stream.write(data)\n try:\n self._channel_writer.write(data)\n # pylint:disable=bare-except\n except:\n pass\n\n def isatty(self):\n return hasattr(self._stream, 'isatty') and self._stream.isatty()\n\n def flush(self):\n self._stream.flush()\n\n def fileno(self):\n return self._stream.fileno()\n\n\nclass StdOutWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdOutWithUpload, self).__init__(experiment, 'stdout', sys.__stdout__)\n sys.stdout = self\n\n def close(self):\n sys.stdout = sys.__stdout__\n\n\nclass StdErrWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdErrWithUpload, self).__init__(experiment, 'stderr', sys.__stderr__)\n sys.stderr = self\n\n def close(self):\n sys.stderr = sys.__stderr__\n", "path": "neptune/internal/streams/stdstream_uploader.py"}]} | 1,283 | 105 |
gh_patches_debug_31116 | rasdani/github-patches | git_diff | gratipay__gratipay.com-870 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Profiles generated with fake_data.py 500 when viewed
Introduced in a4b904f
This is due to `user_info` being set to an empty string instead of a dict/json, causing `escape` to be passed `None` (from `user_info.get`) instead of a string.
```
Traceback (most recent call last):
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py", line 66, in handle_safely
response = self.handle(request)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py", line 99, in handle
response = request.resource.respond(request)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/dynamic_resource.py", line 57, in respond
response = self.get_response(context)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/negotiated_resource.py", line 100, in get_response
response.body = render(context)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/__init__.py", line 113, in __call__
return self.render_content(context)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/tornado.py", line 14, in render_content
return self.compiled.generate(**context)
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/template.py", line 129, in generate
return execute()
File "/home/joe/git/www.gittip.com/www/%username/index.html", line 786, in _execute
File "/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/escape.py", line 52, in xhtml_escape
return xml.sax.saxutils.escape(value, {'"': """})
File "/usr/lib/python2.7/xml/sax/saxutils.py", line 39, in escape
data = data.replace("&", "&")
AttributeError: 'NoneType' object has no attribute 'replace'
```
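
For reference, the failure reproduces outside the app in a couple of lines (a standalone sketch; the key name is only illustrative of what the profile template effectively renders):

```python
from xml.sax.saxutils import escape

user_info = {}                       # what the template expects; '' provides no dict at all
value = user_info.get("html_url")    # None when the key is missing
try:
    escape(value)                    # escape(None) -> None.replace("&", "&amp;")
except AttributeError as exc:
    print(exc)                       # 'NoneType' object has no attribute 'replace'
```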
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gittip/fake_data.py`
Content:
```
1 from faker import Factory
2 from gittip import orm
3 from gittip.models.tip import Tip
4 from gittip.models.participant import Participant
5 from gittip.models.elsewhere import Elsewhere
6 from gittip import AMOUNTS
7 import string
8 import random
9
10 faker = Factory.create()
11
12 platforms = ['github', 'twitter']
13
14
15 def fake_text_id(size=6, chars=string.ascii_lowercase + string.digits):
16 """
17 Create a random text id
18 """
19 return ''.join(random.choice(chars) for x in range(size))
20
21
22 def fake_balance(max_amount=100):
23 """
24 Return a random amount between 0 and max_amount
25 """
26 return random.random() * max_amount
27
28 def fake_int_id(nmax=2 ** 31 -1):
29 """
30 Create a random int id
31 """
32 return random.randint(0, nmax)
33
34
35 def fake_participant(is_admin=False, anonymous=False):
36 """
37 Create a fake User
38 """
39 username = faker.firstName() + fake_text_id(3)
40 return Participant(
41 id=fake_int_id(),
42 username=username,
43 username_lower=username.lower(),
44 statement=faker.sentence(),
45 ctime=faker.dateTimeThisYear(),
46 is_admin=is_admin,
47 balance=fake_balance(),
48 anonymous=anonymous,
49 goal=fake_balance(),
50 balanced_account_uri=faker.uri(),
51 last_ach_result='',
52 is_suspicious=False,
53 last_bill_result='', # Needed to not be suspicious
54 claimed_time=faker.dateTimeThisYear(),
55 type="individual"
56 )
57
58
59 def fake_tip(tipper, tippee):
60 """
61 Create a fake tip
62 """
63 return Tip(
64 id=fake_int_id(),
65 ctime=faker.dateTimeThisYear(),
66 mtime=faker.dateTimeThisMonth(),
67 tipper=tipper.username,
68 tippee=tippee.username,
69 amount=random.choice(AMOUNTS)
70 )
71
72
73 def fake_elsewhere(participant, platform=None):
74 """
75 Create a fake elsewhere
76 """
77 if platform is None:
78 platform = random.choice(platforms)
79
80 return Elsewhere(
81 id=fake_int_id(),
82 platform=platform,
83 user_id=fake_text_id(),
84 is_locked=False,
85 participant=participant.username,
86 user_info=''
87 )
88
89
90 def populate_db(session, num_participants=100, num_tips=50):
91 """
92 Populate DB with fake data
93 """
94 #Make the participants
95 participants = []
96 for i in xrange(num_participants):
97 p = fake_participant()
98 session.add(p)
99 participants.append(p)
100
101 #Make the "Elsewhere's"
102 for p in participants:
103 #All participants get 1 or 2 elsewheres
104 num_elsewheres = random.randint(1, 2)
105 for platform_name in platforms[:num_elsewheres]:
106 e = fake_elsewhere(p, platform_name)
107 session.add(e)
108
109 #Make the tips
110 tips = []
111 for i in xrange(num_tips):
112 tipper, tippee = random.sample(participants, 2)
113 t = fake_tip(tipper, tippee)
114 tips.append(t)
115 session.add(t)
116 session.commit()
117
118
119 def main():
120 db = orm.db
121 dbsession = db.session
122 populate_db(dbsession)
123
124 if __name__ == '__main__':
125 main()
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gittip/fake_data.py b/gittip/fake_data.py
--- a/gittip/fake_data.py
+++ b/gittip/fake_data.py
@@ -9,7 +9,7 @@
faker = Factory.create()
-platforms = ['github', 'twitter']
+platforms = ['github', 'twitter', 'bitbucket']
def fake_text_id(size=6, chars=string.ascii_lowercase + string.digits):
@@ -77,13 +77,33 @@
if platform is None:
platform = random.choice(platforms)
+ info_templates = {
+ "github": {
+ "name": participant.username,
+ "html_url": "https://github.com/" + participant.username,
+ "type": "User",
+ "login": participant.username
+ },
+ "twitter": {
+ "name": participant.username,
+ "html_url": "https://twitter.com/" + participant.username,
+ "screen_name": participant.username
+ },
+ "bitbucket": {
+ "display_name": participant.username,
+ "username": participant.username,
+ "is_team": "False",
+ "html_url": "https://bitbucket.org/" + participant.username,
+ }
+ }
+
return Elsewhere(
id=fake_int_id(),
platform=platform,
user_id=fake_text_id(),
is_locked=False,
participant=participant.username,
- user_info=''
+ user_info=info_templates[platform]
)
@@ -100,8 +120,8 @@
#Make the "Elsewhere's"
for p in participants:
- #All participants get 1 or 2 elsewheres
- num_elsewheres = random.randint(1, 2)
+ #All participants get between 1 and 3 elsewheres
+ num_elsewheres = random.randint(1, 3)
for platform_name in platforms[:num_elsewheres]:
e = fake_elsewhere(p, platform_name)
session.add(e)
| {"golden_diff": "diff --git a/gittip/fake_data.py b/gittip/fake_data.py\n--- a/gittip/fake_data.py\n+++ b/gittip/fake_data.py\n@@ -9,7 +9,7 @@\n \n faker = Factory.create()\n \n-platforms = ['github', 'twitter']\n+platforms = ['github', 'twitter', 'bitbucket']\n \n \n def fake_text_id(size=6, chars=string.ascii_lowercase + string.digits):\n@@ -77,13 +77,33 @@\n if platform is None:\n platform = random.choice(platforms)\n \n+ info_templates = {\n+ \"github\": {\n+ \"name\": participant.username,\n+ \"html_url\": \"https://github.com/\" + participant.username,\n+ \"type\": \"User\",\n+ \"login\": participant.username\n+ },\n+ \"twitter\": {\n+ \"name\": participant.username,\n+ \"html_url\": \"https://twitter.com/\" + participant.username,\n+ \"screen_name\": participant.username\n+ },\n+ \"bitbucket\": {\n+ \"display_name\": participant.username,\n+ \"username\": participant.username,\n+ \"is_team\": \"False\",\n+ \"html_url\": \"https://bitbucket.org/\" + participant.username,\n+ }\n+ }\n+\n return Elsewhere(\n id=fake_int_id(),\n platform=platform,\n user_id=fake_text_id(),\n is_locked=False,\n participant=participant.username,\n- user_info=''\n+ user_info=info_templates[platform]\n )\n \n \n@@ -100,8 +120,8 @@\n \n #Make the \"Elsewhere's\"\n for p in participants:\n- #All participants get 1 or 2 elsewheres\n- num_elsewheres = random.randint(1, 2)\n+ #All participants get between 1 and 3 elsewheres\n+ num_elsewheres = random.randint(1, 3)\n for platform_name in platforms[:num_elsewheres]:\n e = fake_elsewhere(p, platform_name)\n session.add(e)\n", "issue": "Profiles generated with fake_data.py 500 when viewed\nIntroduced in a4b904f\n\nThis is due to `user_info` being set to an empty string instead of a dict/json, causing `escape` to be passed `None` (from `user_info.get`) instead of a string.\n\n```\nTraceback (most recent call last):\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py\", line 66, in handle_safely\n response = self.handle(request)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py\", line 99, in handle\n response = request.resource.respond(request)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/dynamic_resource.py\", line 57, in respond\n response = self.get_response(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/negotiated_resource.py\", line 100, in get_response\n response.body = render(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/__init__.py\", line 113, in __call__\n return self.render_content(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/tornado.py\", line 14, in render_content\n return self.compiled.generate(**context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/template.py\", line 129, in generate\n return execute()\n File \"/home/joe/git/www.gittip.com/www/%username/index.html\", line 786, in _execute\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/escape.py\", line 52, in xhtml_escape\n return xml.sax.saxutils.escape(value, {'\"': \"\"\"})\n File \"/usr/lib/python2.7/xml/sax/saxutils.py\", line 39, in escape\n data = data.replace(\"&\", \"&\")\nAttributeError: 'NoneType' object has no attribute 'replace'\n```\n\nProfiles generated with fake_data.py 500 when viewed\nIntroduced in a4b904f\n\nThis is 
due to `user_info` being set to an empty string instead of a dict/json, causing `escape` to be passed `None` (from `user_info.get`) instead of a string.\n\n```\nTraceback (most recent call last):\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py\", line 66, in handle_safely\n response = self.handle(request)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/website.py\", line 99, in handle\n response = request.resource.respond(request)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/dynamic_resource.py\", line 57, in respond\n response = self.get_response(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/resources/negotiated_resource.py\", line 100, in get_response\n response.body = render(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/__init__.py\", line 113, in __call__\n return self.render_content(context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/aspen/renderers/tornado.py\", line 14, in render_content\n return self.compiled.generate(**context)\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/template.py\", line 129, in generate\n return execute()\n File \"/home/joe/git/www.gittip.com/www/%username/index.html\", line 786, in _execute\n File \"/home/joe/git/www.gittip.com/env/local/lib/python2.7/site-packages/tornado/escape.py\", line 52, in xhtml_escape\n return xml.sax.saxutils.escape(value, {'\"': \"\"\"})\n File \"/usr/lib/python2.7/xml/sax/saxutils.py\", line 39, in escape\n data = data.replace(\"&\", \"&\")\nAttributeError: 'NoneType' object has no attribute 'replace'\n```\n\n", "before_files": [{"content": "from faker import Factory\nfrom gittip import orm\nfrom gittip.models.tip import Tip\nfrom gittip.models.participant import Participant\nfrom gittip.models.elsewhere import Elsewhere\nfrom gittip import AMOUNTS\nimport string\nimport random\n\nfaker = Factory.create()\n\nplatforms = ['github', 'twitter']\n\n\ndef fake_text_id(size=6, chars=string.ascii_lowercase + string.digits):\n \"\"\"\n Create a random text id\n \"\"\"\n return ''.join(random.choice(chars) for x in range(size))\n\n\ndef fake_balance(max_amount=100):\n \"\"\"\n Return a random amount between 0 and max_amount\n \"\"\"\n return random.random() * max_amount\n\ndef fake_int_id(nmax=2 ** 31 -1):\n \"\"\"\n Create a random int id\n \"\"\"\n return random.randint(0, nmax)\n\n\ndef fake_participant(is_admin=False, anonymous=False):\n \"\"\"\n Create a fake User\n \"\"\"\n username = faker.firstName() + fake_text_id(3)\n return Participant(\n id=fake_int_id(),\n username=username,\n username_lower=username.lower(),\n statement=faker.sentence(),\n ctime=faker.dateTimeThisYear(),\n is_admin=is_admin,\n balance=fake_balance(),\n anonymous=anonymous,\n goal=fake_balance(),\n balanced_account_uri=faker.uri(),\n last_ach_result='',\n is_suspicious=False,\n last_bill_result='', # Needed to not be suspicious\n claimed_time=faker.dateTimeThisYear(),\n type=\"individual\"\n )\n\n\ndef fake_tip(tipper, tippee):\n \"\"\"\n Create a fake tip\n \"\"\"\n return Tip(\n id=fake_int_id(),\n ctime=faker.dateTimeThisYear(),\n mtime=faker.dateTimeThisMonth(),\n tipper=tipper.username,\n tippee=tippee.username,\n amount=random.choice(AMOUNTS)\n )\n\n\ndef fake_elsewhere(participant, platform=None):\n \"\"\"\n Create a fake elsewhere\n \"\"\"\n if platform is 
None:\n platform = random.choice(platforms)\n\n return Elsewhere(\n id=fake_int_id(),\n platform=platform,\n user_id=fake_text_id(),\n is_locked=False,\n participant=participant.username,\n user_info=''\n )\n\n\ndef populate_db(session, num_participants=100, num_tips=50):\n \"\"\"\n Populate DB with fake data\n \"\"\"\n #Make the participants\n participants = []\n for i in xrange(num_participants):\n p = fake_participant()\n session.add(p)\n participants.append(p)\n\n #Make the \"Elsewhere's\"\n for p in participants:\n #All participants get 1 or 2 elsewheres\n num_elsewheres = random.randint(1, 2)\n for platform_name in platforms[:num_elsewheres]:\n e = fake_elsewhere(p, platform_name)\n session.add(e)\n\n #Make the tips\n tips = []\n for i in xrange(num_tips):\n tipper, tippee = random.sample(participants, 2)\n t = fake_tip(tipper, tippee)\n tips.append(t)\n session.add(t)\n session.commit()\n\n\ndef main():\n db = orm.db\n dbsession = db.session\n populate_db(dbsession)\n\nif __name__ == '__main__':\n main()\n", "path": "gittip/fake_data.py"}], "after_files": [{"content": "from faker import Factory\nfrom gittip import orm\nfrom gittip.models.tip import Tip\nfrom gittip.models.participant import Participant\nfrom gittip.models.elsewhere import Elsewhere\nfrom gittip import AMOUNTS\nimport string\nimport random\n\nfaker = Factory.create()\n\nplatforms = ['github', 'twitter', 'bitbucket']\n\n\ndef fake_text_id(size=6, chars=string.ascii_lowercase + string.digits):\n \"\"\"\n Create a random text id\n \"\"\"\n return ''.join(random.choice(chars) for x in range(size))\n\n\ndef fake_balance(max_amount=100):\n \"\"\"\n Return a random amount between 0 and max_amount\n \"\"\"\n return random.random() * max_amount\n\ndef fake_int_id(nmax=2 ** 31 -1):\n \"\"\"\n Create a random int id\n \"\"\"\n return random.randint(0, nmax)\n\n\ndef fake_participant(is_admin=False, anonymous=False):\n \"\"\"\n Create a fake User\n \"\"\"\n username = faker.firstName() + fake_text_id(3)\n return Participant(\n id=fake_int_id(),\n username=username,\n username_lower=username.lower(),\n statement=faker.sentence(),\n ctime=faker.dateTimeThisYear(),\n is_admin=is_admin,\n balance=fake_balance(),\n anonymous=anonymous,\n goal=fake_balance(),\n balanced_account_uri=faker.uri(),\n last_ach_result='',\n is_suspicious=False,\n last_bill_result='', # Needed to not be suspicious\n claimed_time=faker.dateTimeThisYear(),\n type=\"individual\"\n )\n\n\ndef fake_tip(tipper, tippee):\n \"\"\"\n Create a fake tip\n \"\"\"\n return Tip(\n id=fake_int_id(),\n ctime=faker.dateTimeThisYear(),\n mtime=faker.dateTimeThisMonth(),\n tipper=tipper.username,\n tippee=tippee.username,\n amount=random.choice(AMOUNTS)\n )\n\n\ndef fake_elsewhere(participant, platform=None):\n \"\"\"\n Create a fake elsewhere\n \"\"\"\n if platform is None:\n platform = random.choice(platforms)\n\n info_templates = {\n \"github\": {\n \"name\": participant.username,\n \"html_url\": \"https://github.com/\" + participant.username,\n \"type\": \"User\",\n \"login\": participant.username\n },\n \"twitter\": {\n \"name\": participant.username,\n \"html_url\": \"https://twitter.com/\" + participant.username,\n \"screen_name\": participant.username\n },\n \"bitbucket\": {\n \"display_name\": participant.username,\n \"username\": participant.username,\n \"is_team\": \"False\",\n \"html_url\": \"https://bitbucket.org/\" + participant.username,\n }\n }\n\n return Elsewhere(\n id=fake_int_id(),\n platform=platform,\n user_id=fake_text_id(),\n is_locked=False,\n 
participant=participant.username,\n user_info=info_templates[platform]\n )\n\n\ndef populate_db(session, num_participants=100, num_tips=50):\n \"\"\"\n Populate DB with fake data\n \"\"\"\n #Make the participants\n participants = []\n for i in xrange(num_participants):\n p = fake_participant()\n session.add(p)\n participants.append(p)\n\n #Make the \"Elsewhere's\"\n for p in participants:\n #All participants get between 1 and 3 elsewheres\n num_elsewheres = random.randint(1, 3)\n for platform_name in platforms[:num_elsewheres]:\n e = fake_elsewhere(p, platform_name)\n session.add(e)\n\n #Make the tips\n tips = []\n for i in xrange(num_tips):\n tipper, tippee = random.sample(participants, 2)\n t = fake_tip(tipper, tippee)\n tips.append(t)\n session.add(t)\n session.commit()\n\n\ndef main():\n db = orm.db\n dbsession = db.session\n populate_db(dbsession)\n\nif __name__ == '__main__':\n main()\n", "path": "gittip/fake_data.py"}]} | 2,299 | 455 |
gh_patches_debug_32741 | rasdani/github-patches | git_diff | WeblateOrg__weblate-9260 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document weblate migrate command
### Describe the problem
The `weblate migrate` command is mentioned in the docs, but not actually documented.
It is also used inconsistently:
1. https://docs.weblate.org/en/latest/admin/languages.html#built-in-language-definitions
2. https://docs.weblate.org/en/latest/admin/install.html#filling-up-the-database
### Describe the solution you'd like
Document the usage and link it from the mentioned occurrences.
### Describe alternatives you've considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/_ext/djangodocs.py`
Content:
```
1 """Sphinx plugins for Weblate documentation."""
2 import re
3
4 from sphinx import addnodes
5 from sphinx.domains.std import Cmdoption
6
7 # RE for option descriptions without a '--' prefix
8 simple_option_desc_re = re.compile(r"([-_a-zA-Z0-9]+)(\s*.*?)(?=,\s+(?:/|-|--)|$)")
9
10
11 def setup(app):
12 app.add_crossref_type(
13 directivename="setting", rolename="setting", indextemplate="pair: %s; setting"
14 )
15 app.add_object_type(
16 directivename="django-admin",
17 rolename="djadmin",
18 indextemplate="pair: %s; weblate admin command",
19 parse_node=parse_django_admin_node,
20 )
21 app.add_directive("django-admin-option", Cmdoption)
22
23
24 def parse_django_admin_node(env, sig, signode):
25 command = sig.split(" ")[0]
26 env.ref_context["std:program"] = command
27 title = f"weblate {sig}"
28 signode += addnodes.desc_name(title, title)
29 return command
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/_ext/djangodocs.py b/docs/_ext/djangodocs.py
--- a/docs/_ext/djangodocs.py
+++ b/docs/_ext/djangodocs.py
@@ -1,6 +1,7 @@
"""Sphinx plugins for Weblate documentation."""
import re
+from docutils.nodes import literal
from sphinx import addnodes
from sphinx.domains.std import Cmdoption
@@ -8,22 +9,45 @@
simple_option_desc_re = re.compile(r"([-_a-zA-Z0-9]+)(\s*.*?)(?=,\s+(?:/|-|--)|$)")
+class WeblateCommandLiteral(literal):
+ def __init__(self, rawsource="", text="", *children, **attributes):
+ if not text:
+ text = "weblate "
+ super().__init__(rawsource, text, *children, **attributes)
+
+
def setup(app):
app.add_crossref_type(
directivename="setting", rolename="setting", indextemplate="pair: %s; setting"
)
+ app.add_object_type(
+ directivename="weblate-admin",
+ rolename="wladmin",
+ indextemplate="pair: %s; weblate admin command",
+ parse_node=parse_weblate_admin_node,
+ ref_nodeclass=WeblateCommandLiteral,
+ )
+ app.add_directive("weblate-admin-option", Cmdoption)
app.add_object_type(
directivename="django-admin",
rolename="djadmin",
- indextemplate="pair: %s; weblate admin command",
+ indextemplate="pair: %s; django-admin command",
parse_node=parse_django_admin_node,
)
- app.add_directive("django-admin-option", Cmdoption)
-def parse_django_admin_node(env, sig, signode):
+def parse_weblate_admin_node(env, sig, signode):
command = sig.split(" ")[0]
+ # Context for options
env.ref_context["std:program"] = command
title = f"weblate {sig}"
signode += addnodes.desc_name(title, title)
return command
+
+
+def parse_django_admin_node(env, sig, signode):
+ command = sig.split(" ")[0]
+ env.ref_context["std:program"] = command
+ title = "django-admin %s" % sig
+ signode += addnodes.desc_name(title, title)
+ return command
| {"golden_diff": "diff --git a/docs/_ext/djangodocs.py b/docs/_ext/djangodocs.py\n--- a/docs/_ext/djangodocs.py\n+++ b/docs/_ext/djangodocs.py\n@@ -1,6 +1,7 @@\n \"\"\"Sphinx plugins for Weblate documentation.\"\"\"\n import re\n \n+from docutils.nodes import literal\n from sphinx import addnodes\n from sphinx.domains.std import Cmdoption\n \n@@ -8,22 +9,45 @@\n simple_option_desc_re = re.compile(r\"([-_a-zA-Z0-9]+)(\\s*.*?)(?=,\\s+(?:/|-|--)|$)\")\n \n \n+class WeblateCommandLiteral(literal):\n+ def __init__(self, rawsource=\"\", text=\"\", *children, **attributes):\n+ if not text:\n+ text = \"weblate \"\n+ super().__init__(rawsource, text, *children, **attributes)\n+\n+\n def setup(app):\n app.add_crossref_type(\n directivename=\"setting\", rolename=\"setting\", indextemplate=\"pair: %s; setting\"\n )\n+ app.add_object_type(\n+ directivename=\"weblate-admin\",\n+ rolename=\"wladmin\",\n+ indextemplate=\"pair: %s; weblate admin command\",\n+ parse_node=parse_weblate_admin_node,\n+ ref_nodeclass=WeblateCommandLiteral,\n+ )\n+ app.add_directive(\"weblate-admin-option\", Cmdoption)\n app.add_object_type(\n directivename=\"django-admin\",\n rolename=\"djadmin\",\n- indextemplate=\"pair: %s; weblate admin command\",\n+ indextemplate=\"pair: %s; django-admin command\",\n parse_node=parse_django_admin_node,\n )\n- app.add_directive(\"django-admin-option\", Cmdoption)\n \n \n-def parse_django_admin_node(env, sig, signode):\n+def parse_weblate_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n+ # Context for options\n env.ref_context[\"std:program\"] = command\n title = f\"weblate {sig}\"\n signode += addnodes.desc_name(title, title)\n return command\n+\n+\n+def parse_django_admin_node(env, sig, signode):\n+ command = sig.split(\" \")[0]\n+ env.ref_context[\"std:program\"] = command\n+ title = \"django-admin %s\" % sig\n+ signode += addnodes.desc_name(title, title)\n+ return command\n", "issue": "Document weblate migrate command\n### Describe the problem\n\n`weblate migrate` command is mentioned in the docs, but not actually documented. \r\n\r\nIt is also used inconsistently:\r\n1. https://docs.weblate.org/en/latest/admin/languages.html#built-in-language-definitions\r\n2. 
https://docs.weblate.org/en/latest/admin/install.html#filling-up-the-database\n\n### Describe the solution you'd like\n\ndocument the usage and link it in mentioned occurrences.\n\n### Describe alternatives you've considered\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "\"\"\"Sphinx plugins for Weblate documentation.\"\"\"\nimport re\n\nfrom sphinx import addnodes\nfrom sphinx.domains.std import Cmdoption\n\n# RE for option descriptions without a '--' prefix\nsimple_option_desc_re = re.compile(r\"([-_a-zA-Z0-9]+)(\\s*.*?)(?=,\\s+(?:/|-|--)|$)\")\n\n\ndef setup(app):\n app.add_crossref_type(\n directivename=\"setting\", rolename=\"setting\", indextemplate=\"pair: %s; setting\"\n )\n app.add_object_type(\n directivename=\"django-admin\",\n rolename=\"djadmin\",\n indextemplate=\"pair: %s; weblate admin command\",\n parse_node=parse_django_admin_node,\n )\n app.add_directive(\"django-admin-option\", Cmdoption)\n\n\ndef parse_django_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n env.ref_context[\"std:program\"] = command\n title = f\"weblate {sig}\"\n signode += addnodes.desc_name(title, title)\n return command\n", "path": "docs/_ext/djangodocs.py"}], "after_files": [{"content": "\"\"\"Sphinx plugins for Weblate documentation.\"\"\"\nimport re\n\nfrom docutils.nodes import literal\nfrom sphinx import addnodes\nfrom sphinx.domains.std import Cmdoption\n\n# RE for option descriptions without a '--' prefix\nsimple_option_desc_re = re.compile(r\"([-_a-zA-Z0-9]+)(\\s*.*?)(?=,\\s+(?:/|-|--)|$)\")\n\n\nclass WeblateCommandLiteral(literal):\n def __init__(self, rawsource=\"\", text=\"\", *children, **attributes):\n if not text:\n text = \"weblate \"\n super().__init__(rawsource, text, *children, **attributes)\n\n\ndef setup(app):\n app.add_crossref_type(\n directivename=\"setting\", rolename=\"setting\", indextemplate=\"pair: %s; setting\"\n )\n app.add_object_type(\n directivename=\"weblate-admin\",\n rolename=\"wladmin\",\n indextemplate=\"pair: %s; weblate admin command\",\n parse_node=parse_weblate_admin_node,\n ref_nodeclass=WeblateCommandLiteral,\n )\n app.add_directive(\"weblate-admin-option\", Cmdoption)\n app.add_object_type(\n directivename=\"django-admin\",\n rolename=\"djadmin\",\n indextemplate=\"pair: %s; django-admin command\",\n parse_node=parse_django_admin_node,\n )\n\n\ndef parse_weblate_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n # Context for options\n env.ref_context[\"std:program\"] = command\n title = f\"weblate {sig}\"\n signode += addnodes.desc_name(title, title)\n return command\n\n\ndef parse_django_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n env.ref_context[\"std:program\"] = command\n title = \"django-admin %s\" % sig\n signode += addnodes.desc_name(title, title)\n return command\n", "path": "docs/_ext/djangodocs.py"}]} | 679 | 559 |
gh_patches_debug_34798 | rasdani/github-patches | git_diff | sopel-irc__sopel-625 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
.dice breaks if you try to roll a die with no sides
Repro:
```
(8:55:43 PM) creftos: .roll 1d0
(8:55:43 PM) Willie: ValueError: empty range for randrange() (1,1, 0) (file "/usr/lib64/python2.7/random.py", line 217, in randrange)
```
Logs:
```
Traceback (most recent call last):
File "/home/bdc/workspace/willie/willie/bot.py", line 741, in call
exit_code = func(willie, trigger)
File "/home/bdc/workspace/willie/willie/modules/dice.py", line 180, in roll
dice = list(map(_roll_dice, dice_expressions))
File "/home/bdc/workspace/willie/willie/modules/dice.py", line 140, in _roll_dice
dice = DicePouch(dice_num, dice_type, 0)
File "/home/bdc/workspace/willie/willie/modules/dice.py", line 34, in __init__
self.roll_dice()
File "/home/bdc/workspace/willie/willie/modules/dice.py", line 41, in roll_dice
number = random.randint(1, self.type)
File "/usr/lib64/python2.7/random.py", line 241, in randint
return self.randrange(a, b+1)
File "/usr/lib64/python2.7/random.py", line 217, in randrange
raise ValueError, "empty range for randrange() (%d,%d, %d)" % (istart, istop, width)
ValueError: empty range for randrange() (1,1, 0)
```
Proposed Solution:
Needs a catch to make sure the user can't roll a die with no sides.
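
A sketch of such a catch (an illustrative standalone helper, not the actual patch):

```python
def validate_roll(dice_num, dice_type):
    """Hypothetical pre-check run before any dice are rolled."""
    if dice_type <= 0:
        return "I don't have any dice with %d sides. =(" % dice_type
    if dice_num > 1000:
        return "I only have 1000 dice. =("
    return None  # the roll is acceptable

print(validate_roll(1, 0))  # complaint string instead of randrange() blowing up
```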
Additional Information:
Willie v. 4.5.1
OS: Fedora release 20 (Heisenbug)
Kernel Version: 3.16.2-201.fc20.x86_64
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `willie/modules/dice.py`
Content:
```
1 # coding=utf8
2 """
3 dice.py - Dice Module
4 Copyright 2010-2013, Dimitri "Tyrope" Molenaars, TyRope.nl
5 Copyright 2013, Ari Koivula, <[email protected]>
6 Licensed under the Eiffel Forum License 2.
7
8 http://willie.dftba.net/
9 """
10 from __future__ import unicode_literals
11 import random
12 import re
13
14 import willie.module
15 from willie.tools import eval_equation
16
17
18 class DicePouch:
19 def __init__(self, num_of_die, type_of_die, addition):
20 """Initialize dice pouch and roll the dice.
21
22 Args:
23 num_of_die: number of dice in the pouch.
24 type_of_die: how many faces the dice have.
25 addition: how much is added to the result of the dice.
26 """
27 self.num = num_of_die
28 self.type = type_of_die
29 self.addition = addition
30
31 self.dice = {}
32 self.dropped = {}
33
34 self.roll_dice()
35
36 def roll_dice(self):
37 """Roll all the dice in the pouch."""
38 self.dice = {}
39 self.dropped = {}
40 for __ in range(self.num):
41 number = random.randint(1, self.type)
42 count = self.dice.setdefault(number, 0)
43 self.dice[number] = count + 1
44
45 def drop_lowest(self, n):
46 """Drop n lowest dice from the result.
47
48 Args:
49 n: the number of dice to drop.
50 """
51 for i, count in self.dice.items():
52 count = self.dice[i]
53 if n == 0:
54 break
55 elif n < count:
56 self.dice[i] = count - n
57 self.dropped[i] = n
58 break
59 else:
60 self.dice[i] = 0
61 self.dropped[i] = count
62 n = n - count
63
64 for i, count in self.dropped.items():
65 if self.dice[i] == 0:
66 del self.dice[i]
67
68 def get_simple_string(self):
69 """Return the values of the dice like (2+2+2[+1+1])+1."""
70 dice = self.dice.items()
71 faces = ("+".join([str(face)] * times) for face, times in dice)
72 dice_str = "+".join(faces)
73
74 dropped_str = ""
75 if self.dropped:
76 dropped = self.dropped.items()
77 dfaces = ("+".join([str(face)] * times) for face, times in dropped)
78 dropped_str = "[+%s]" % ("+".join(dfaces),)
79
80 plus_str = ""
81 if self.addition:
82 plus_str = "{:+d}".format(self.addition)
83
84 return "(%s%s)%s" % (dice_str, dropped_str, plus_str)
85
86 def get_compressed_string(self):
87 """Return the values of the dice like (3x2[+2x1])+1."""
88 dice = self.dice.items()
89 faces = ("%dx%d" % (times, face) for face, times in dice)
90 dice_str = "+".join(faces)
91
92 dropped_str = ""
93 if self.dropped:
94 dropped = self.dropped.items()
95 dfaces = ("%dx%d" % (times, face) for face, times in dropped)
96 dropped_str = "[+%s]" % ("+".join(dfaces),)
97
98 plus_str = ""
99 if self.addition:
100 plus_str = "{:+d}".format(self.addition)
101
102 return "(%s%s)%s" % (dice_str, dropped_str, plus_str)
103
104 def get_sum(self):
105 """Get the sum of non-dropped dice and the addition."""
106 result = self.addition
107 for face, times in self.dice.items():
108 result += face * times
109 return result
110
111 def get_number_of_faces(self):
112 """Returns sum of different faces for dropped and not dropped dice
113
114 This can be used to estimate, whether the result can be shown in
115 compressed form in a reasonable amount of space.
116 """
117 return len(self.dice) + len(self.dropped)
118
119
120 def _roll_dice(dice_expression):
121 result = re.search(
122 r"""
123 (?P<dice_num>\d*)
124 d
125 (?P<dice_type>\d+)
126 (v(?P<drop_lowest>\d+))?
127 $""",
128 dice_expression,
129 re.IGNORECASE | re.VERBOSE)
130
131 dice_num = int(result.group('dice_num') or 1)
132 dice_type = int(result.group('dice_type'))
133
134 # Upper limit for dice should be at most a million. Creating a dict with
135 # more than a million elements already takes a noticeable amount of time
136 # on a fast computer and ~55kB of memory.
137 if dice_num > 1000:
138 return None
139
140 dice = DicePouch(dice_num, dice_type, 0)
141
142 if result.group('drop_lowest'):
143 drop = int(result.group('drop_lowest'))
144 dice.drop_lowest(drop)
145
146 return dice
147
148
149 @willie.module.commands("roll")
150 @willie.module.commands("dice")
151 @willie.module.commands("d")
152 @willie.module.priority("medium")
153 @willie.module.example(".roll 3d1+1", 'You roll 3d1+1: (1+1+1)+1 = 4')
154 @willie.module.example(".roll 3d1v2+1", 'You roll 3d1v2+1: (1[+1+1])+1 = 2')
155 @willie.module.example(".roll 2d4", 'You roll 2d4: \(\d\+\d\) = \d', re=True)
156 @willie.module.example(".roll 100d1", '[^:]*: \(100x1\) = 100', re=True)
157 @willie.module.example(".roll 1001d1", 'I only have 1000 dice. =(')
158 @willie.module.example(".roll 1d1 + 1d1", 'You roll 1d1 + 1d1: (1) + (1) = 2')
159 @willie.module.example(".roll 1d1+1d1", 'You roll 1d1+1d1: (1)+(1) = 2')
160 def roll(bot, trigger):
161 """.dice XdY[vZ][+N], rolls dice and reports the result.
162
163 X is the number of dice. Y is the number of faces in the dice. Z is the
164 number of lowest dice to be dropped from the result. N is the constant to
165 be applied to the end result.
166 """
167 # This regexp is only allowed to have one captured group, because having
168 # more would alter the output of re.findall.
169 dice_regexp = r"\d*d\d+(?:v\d+)?"
170
171 # Get a list of all dice expressions, evaluate them and then replace the
172 # expressions in the original string with the results. Replacing is done
173 # using string formatting, so %-characters must be escaped.
174 arg_str = trigger.group(2)
175 dice_expressions = re.findall(dice_regexp, arg_str)
176 arg_str = arg_str.replace("%", "%%")
177 arg_str = re.sub(dice_regexp, "%s", arg_str)
178 dice = list(map(_roll_dice, dice_expressions))
179 if None in dice:
180 bot.reply("I only have 1000 dice. =(")
181 return
182
183 def _get_eval_str(dice):
184 return "(%d)" % (dice.get_sum(),)
185
186 def _get_pretty_str(dice):
187 if dice.num <= 10:
188 return dice.get_simple_string()
189 elif dice.get_number_of_faces() <= 10:
190 return dice.get_compressed_string()
191 else:
192 return "(...)"
193
194 eval_str = arg_str % (tuple(map(_get_eval_str, dice)))
195 pretty_str = arg_str % (tuple(map(_get_pretty_str, dice)))
196
197 # Showing the actual error will hopefully give a better hint of what is
198 # wrong with the syntax than a generic error message.
199 try:
200 result = eval_equation(eval_str)
201 except Exception as e:
202 bot.reply("SyntaxError, eval(%s), %s" % (eval_str, e))
203 return
204
205 bot.reply("You roll %s: %s = %d" % (
206 trigger.group(2), pretty_str, result))
207
208
209 @willie.module.commands("choice")
210 @willie.module.commands("ch")
211 @willie.module.commands("choose")
212 @willie.module.priority("medium")
213 def choose(bot, trigger):
214 """
215 .choice option1|option2|option3 - Makes a difficult choice easy.
216 """
217 if not trigger.group(2):
218 return bot.reply('I\'d choose an option, but you didn\'t give me any.')
219 choices = re.split('[\|\\\\\/]', trigger.group(2))
220 pick = random.choice(choices)
221 return bot.reply('Your options: %s. My choice: %s' % (', '.join(choices), pick))
222
223
224 if __name__ == "__main__":
225 from willie.test_tools import run_example_tests
226 run_example_tests(__file__)
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/willie/modules/dice.py b/willie/modules/dice.py
--- a/willie/modules/dice.py
+++ b/willie/modules/dice.py
@@ -117,7 +117,7 @@
return len(self.dice) + len(self.dropped)
-def _roll_dice(dice_expression):
+def _roll_dice(bot, dice_expression):
result = re.search(
r"""
(?P<dice_num>\d*)
@@ -131,11 +131,17 @@
dice_num = int(result.group('dice_num') or 1)
dice_type = int(result.group('dice_type'))
+ # Dice can't have zero or a negative number of sides.
+ if dice_type <= 0:
+ bot.reply("I don't have any dice with %d sides. =(" % dice_type)
+ return None # Signal there was a problem
+
# Upper limit for dice should be at most a million. Creating a dict with
# more than a million elements already takes a noticeable amount of time
# on a fast computer and ~55kB of memory.
if dice_num > 1000:
- return None
+ bot.reply('I only have 1000 dice. =(')
+ return None # Signal there was a problem
dice = DicePouch(dice_num, dice_type, 0)
@@ -171,13 +177,18 @@
# Get a list of all dice expressions, evaluate them and then replace the
# expressions in the original string with the results. Replacing is done
# using string formatting, so %-characters must be escaped.
+ if not trigger.group(2):
+ return bot.reply("No dice to roll.")
arg_str = trigger.group(2)
dice_expressions = re.findall(dice_regexp, arg_str)
arg_str = arg_str.replace("%", "%%")
arg_str = re.sub(dice_regexp, "%s", arg_str)
- dice = list(map(_roll_dice, dice_expressions))
+
+ f = lambda dice_expr: _roll_dice (bot, dice_expr)
+ dice = list(map(f, dice_expressions))
+
if None in dice:
- bot.reply("I only have 1000 dice. =(")
+ # Stop computing roll if there was a problem rolling dice.
return
def _get_eval_str(dice):
| {"golden_diff": "diff --git a/willie/modules/dice.py b/willie/modules/dice.py\n--- a/willie/modules/dice.py\n+++ b/willie/modules/dice.py\n@@ -117,7 +117,7 @@\n return len(self.dice) + len(self.dropped)\n \n \n-def _roll_dice(dice_expression):\n+def _roll_dice(bot, dice_expression):\n result = re.search(\n r\"\"\"\n (?P<dice_num>\\d*)\n@@ -131,11 +131,17 @@\n dice_num = int(result.group('dice_num') or 1)\n dice_type = int(result.group('dice_type'))\n \n+ # Dice can't have zero or a negative number of sides.\n+ if dice_type <= 0:\n+ bot.reply(\"I don't have any dice with %d sides. =(\" % dice_type)\n+ return None # Signal there was a problem\n+\n # Upper limit for dice should be at most a million. Creating a dict with\n # more than a million elements already takes a noticeable amount of time\n # on a fast computer and ~55kB of memory.\n if dice_num > 1000:\n- return None\n+ bot.reply('I only have 1000 dice. =(')\n+ return None # Signal there was a problem\n \n dice = DicePouch(dice_num, dice_type, 0)\n \n@@ -171,13 +177,18 @@\n # Get a list of all dice expressions, evaluate them and then replace the\n # expressions in the original string with the results. Replacing is done\n # using string formatting, so %-characters must be escaped.\n+ if not trigger.group(2):\n+ return bot.reply(\"No dice to roll.\")\n arg_str = trigger.group(2)\n dice_expressions = re.findall(dice_regexp, arg_str)\n arg_str = arg_str.replace(\"%\", \"%%\")\n arg_str = re.sub(dice_regexp, \"%s\", arg_str)\n- dice = list(map(_roll_dice, dice_expressions))\n+\n+ f = lambda dice_expr: _roll_dice (bot, dice_expr)\n+ dice = list(map(f, dice_expressions))\n+\n if None in dice:\n- bot.reply(\"I only have 1000 dice. =(\")\n+ # Stop computing roll if there was a problem rolling dice.\n return\n \n def _get_eval_str(dice):\n", "issue": ".dice breaks if you try to roll a die with no sides\nRepro:\n\n```\n(8:55:43 PM) creftos: .roll 1d0\n(8:55:43 PM) Willie: ValueError: empty range for randrange() (1,1, 0) (file \"/usr/lib64/python2.7/random.py\", line 217, in randrange)\n```\n\nLogs:\n\n```\n Traceback (most recent call last):\n File \"/home/bdc/workspace/willie/willie/bot.py\", line 741, in call\n exit_code = func(willie, trigger)\n File \"/home/bdc/workspace/willie/willie/modules/dice.py\", line 180, in roll\n dice = list(map(_roll_dice, dice_expressions))\n File \"/home/bdc/workspace/willie/willie/modules/dice.py\", line 140, in _roll_dice\n dice = DicePouch(dice_num, dice_type, 0)\n File \"/home/bdc/workspace/willie/willie/modules/dice.py\", line 34, in __init__\n self.roll_dice()\n File \"/home/bdc/workspace/willie/willie/modules/dice.py\", line 41, in roll_dice\n number = random.randint(1, self.type)\n File \"/usr/lib64/python2.7/random.py\", line 241, in randint\n return self.randrange(a, b+1)\n File \"/usr/lib64/python2.7/random.py\", line 217, in randrange\n raise ValueError, \"empty range for randrange() (%d,%d, %d)\" % (istart, istop, width)\n ValueError: empty range for randrange() (1,1, 0)\n```\n\nProposed Solution:\nNeeds a catch to make sure user can't do this.\n\nAdditional Information:\nWillie v. 
4.5.1\nOS: Fedora release 20 (Heisenbug)\nKernel Version: 3.16.2-201.fc20.x86_64\n\n", "before_files": [{"content": "# coding=utf8\n\"\"\"\ndice.py - Dice Module\nCopyright 2010-2013, Dimitri \"Tyrope\" Molenaars, TyRope.nl\nCopyright 2013, Ari Koivula, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nhttp://willie.dftba.net/\n\"\"\"\nfrom __future__ import unicode_literals\nimport random\nimport re\n\nimport willie.module\nfrom willie.tools import eval_equation\n\n\nclass DicePouch:\n def __init__(self, num_of_die, type_of_die, addition):\n \"\"\"Initialize dice pouch and roll the dice.\n\n Args:\n num_of_die: number of dice in the pouch.\n type_of_die: how many faces the dice have.\n addition: how much is added to the result of the dice.\n \"\"\"\n self.num = num_of_die\n self.type = type_of_die\n self.addition = addition\n\n self.dice = {}\n self.dropped = {}\n\n self.roll_dice()\n\n def roll_dice(self):\n \"\"\"Roll all the dice in the pouch.\"\"\"\n self.dice = {}\n self.dropped = {}\n for __ in range(self.num):\n number = random.randint(1, self.type)\n count = self.dice.setdefault(number, 0)\n self.dice[number] = count + 1\n\n def drop_lowest(self, n):\n \"\"\"Drop n lowest dice from the result.\n\n Args:\n n: the number of dice to drop.\n \"\"\"\n for i, count in self.dice.items():\n count = self.dice[i]\n if n == 0:\n break\n elif n < count:\n self.dice[i] = count - n\n self.dropped[i] = n\n break\n else:\n self.dice[i] = 0\n self.dropped[i] = count\n n = n - count\n\n for i, count in self.dropped.items():\n if self.dice[i] == 0:\n del self.dice[i]\n\n def get_simple_string(self):\n \"\"\"Return the values of the dice like (2+2+2[+1+1])+1.\"\"\"\n dice = self.dice.items()\n faces = (\"+\".join([str(face)] * times) for face, times in dice)\n dice_str = \"+\".join(faces)\n\n dropped_str = \"\"\n if self.dropped:\n dropped = self.dropped.items()\n dfaces = (\"+\".join([str(face)] * times) for face, times in dropped)\n dropped_str = \"[+%s]\" % (\"+\".join(dfaces),)\n\n plus_str = \"\"\n if self.addition:\n plus_str = \"{:+d}\".format(self.addition)\n\n return \"(%s%s)%s\" % (dice_str, dropped_str, plus_str)\n\n def get_compressed_string(self):\n \"\"\"Return the values of the dice like (3x2[+2x1])+1.\"\"\"\n dice = self.dice.items()\n faces = (\"%dx%d\" % (times, face) for face, times in dice)\n dice_str = \"+\".join(faces)\n\n dropped_str = \"\"\n if self.dropped:\n dropped = self.dropped.items()\n dfaces = (\"%dx%d\" % (times, face) for face, times in dropped)\n dropped_str = \"[+%s]\" % (\"+\".join(dfaces),)\n\n plus_str = \"\"\n if self.addition:\n plus_str = \"{:+d}\".format(self.addition)\n\n return \"(%s%s)%s\" % (dice_str, dropped_str, plus_str)\n\n def get_sum(self):\n \"\"\"Get the sum of non-dropped dice and the addition.\"\"\"\n result = self.addition\n for face, times in self.dice.items():\n result += face * times\n return result\n\n def get_number_of_faces(self):\n \"\"\"Returns sum of different faces for dropped and not dropped dice\n\n This can be used to estimate, whether the result can be shown in\n compressed form in a reasonable amount of space.\n \"\"\"\n return len(self.dice) + len(self.dropped)\n\n\ndef _roll_dice(dice_expression):\n result = re.search(\n r\"\"\"\n (?P<dice_num>\\d*)\n d\n (?P<dice_type>\\d+)\n (v(?P<drop_lowest>\\d+))?\n $\"\"\",\n dice_expression,\n re.IGNORECASE | re.VERBOSE)\n\n dice_num = int(result.group('dice_num') or 1)\n dice_type = int(result.group('dice_type'))\n\n # Upper limit for dice should be at most a million. 
Creating a dict with\n # more than a million elements already takes a noticeable amount of time\n # on a fast computer and ~55kB of memory.\n if dice_num > 1000:\n return None\n\n dice = DicePouch(dice_num, dice_type, 0)\n\n if result.group('drop_lowest'):\n drop = int(result.group('drop_lowest'))\n dice.drop_lowest(drop)\n\n return dice\n\n\[email protected](\"roll\")\[email protected](\"dice\")\[email protected](\"d\")\[email protected](\"medium\")\[email protected](\".roll 3d1+1\", 'You roll 3d1+1: (1+1+1)+1 = 4')\[email protected](\".roll 3d1v2+1\", 'You roll 3d1v2+1: (1[+1+1])+1 = 2')\[email protected](\".roll 2d4\", 'You roll 2d4: \\(\\d\\+\\d\\) = \\d', re=True)\[email protected](\".roll 100d1\", '[^:]*: \\(100x1\\) = 100', re=True)\[email protected](\".roll 1001d1\", 'I only have 1000 dice. =(')\[email protected](\".roll 1d1 + 1d1\", 'You roll 1d1 + 1d1: (1) + (1) = 2')\[email protected](\".roll 1d1+1d1\", 'You roll 1d1+1d1: (1)+(1) = 2')\ndef roll(bot, trigger):\n \"\"\".dice XdY[vZ][+N], rolls dice and reports the result.\n\n X is the number of dice. Y is the number of faces in the dice. Z is the\n number of lowest dice to be dropped from the result. N is the constant to\n be applied to the end result.\n \"\"\"\n # This regexp is only allowed to have one captured group, because having\n # more would alter the output of re.findall.\n dice_regexp = r\"\\d*d\\d+(?:v\\d+)?\"\n\n # Get a list of all dice expressions, evaluate them and then replace the\n # expressions in the original string with the results. Replacing is done\n # using string formatting, so %-characters must be escaped.\n arg_str = trigger.group(2)\n dice_expressions = re.findall(dice_regexp, arg_str)\n arg_str = arg_str.replace(\"%\", \"%%\")\n arg_str = re.sub(dice_regexp, \"%s\", arg_str)\n dice = list(map(_roll_dice, dice_expressions))\n if None in dice:\n bot.reply(\"I only have 1000 dice. =(\")\n return\n\n def _get_eval_str(dice):\n return \"(%d)\" % (dice.get_sum(),)\n\n def _get_pretty_str(dice):\n if dice.num <= 10:\n return dice.get_simple_string()\n elif dice.get_number_of_faces() <= 10:\n return dice.get_compressed_string()\n else:\n return \"(...)\"\n\n eval_str = arg_str % (tuple(map(_get_eval_str, dice)))\n pretty_str = arg_str % (tuple(map(_get_pretty_str, dice)))\n\n # Showing the actual error will hopefully give a better hint of what is\n # wrong with the syntax than a generic error message.\n try:\n result = eval_equation(eval_str)\n except Exception as e:\n bot.reply(\"SyntaxError, eval(%s), %s\" % (eval_str, e))\n return\n\n bot.reply(\"You roll %s: %s = %d\" % (\n trigger.group(2), pretty_str, result))\n\n\[email protected](\"choice\")\[email protected](\"ch\")\[email protected](\"choose\")\[email protected](\"medium\")\ndef choose(bot, trigger):\n \"\"\"\n .choice option1|option2|option3 - Makes a difficult choice easy.\n \"\"\"\n if not trigger.group(2):\n return bot.reply('I\\'d choose an option, but you didn\\'t give me any.')\n choices = re.split('[\\|\\\\\\\\\\/]', trigger.group(2))\n pick = random.choice(choices)\n return bot.reply('Your options: %s. 
My choice: %s' % (', '.join(choices), pick))\n\n\nif __name__ == \"__main__\":\n from willie.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "willie/modules/dice.py"}], "after_files": [{"content": "# coding=utf8\n\"\"\"\ndice.py - Dice Module\nCopyright 2010-2013, Dimitri \"Tyrope\" Molenaars, TyRope.nl\nCopyright 2013, Ari Koivula, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nhttp://willie.dftba.net/\n\"\"\"\nfrom __future__ import unicode_literals\nimport random\nimport re\n\nimport willie.module\nfrom willie.tools import eval_equation\n\n\nclass DicePouch:\n def __init__(self, num_of_die, type_of_die, addition):\n \"\"\"Initialize dice pouch and roll the dice.\n\n Args:\n num_of_die: number of dice in the pouch.\n type_of_die: how many faces the dice have.\n addition: how much is added to the result of the dice.\n \"\"\"\n self.num = num_of_die\n self.type = type_of_die\n self.addition = addition\n\n self.dice = {}\n self.dropped = {}\n\n self.roll_dice()\n\n def roll_dice(self):\n \"\"\"Roll all the dice in the pouch.\"\"\"\n self.dice = {}\n self.dropped = {}\n for __ in range(self.num):\n number = random.randint(1, self.type)\n count = self.dice.setdefault(number, 0)\n self.dice[number] = count + 1\n\n def drop_lowest(self, n):\n \"\"\"Drop n lowest dice from the result.\n\n Args:\n n: the number of dice to drop.\n \"\"\"\n for i, count in self.dice.items():\n count = self.dice[i]\n if n == 0:\n break\n elif n < count:\n self.dice[i] = count - n\n self.dropped[i] = n\n break\n else:\n self.dice[i] = 0\n self.dropped[i] = count\n n = n - count\n\n for i, count in self.dropped.items():\n if self.dice[i] == 0:\n del self.dice[i]\n\n def get_simple_string(self):\n \"\"\"Return the values of the dice like (2+2+2[+1+1])+1.\"\"\"\n dice = self.dice.items()\n faces = (\"+\".join([str(face)] * times) for face, times in dice)\n dice_str = \"+\".join(faces)\n\n dropped_str = \"\"\n if self.dropped:\n dropped = self.dropped.items()\n dfaces = (\"+\".join([str(face)] * times) for face, times in dropped)\n dropped_str = \"[+%s]\" % (\"+\".join(dfaces),)\n\n plus_str = \"\"\n if self.addition:\n plus_str = \"{:+d}\".format(self.addition)\n\n return \"(%s%s)%s\" % (dice_str, dropped_str, plus_str)\n\n def get_compressed_string(self):\n \"\"\"Return the values of the dice like (3x2[+2x1])+1.\"\"\"\n dice = self.dice.items()\n faces = (\"%dx%d\" % (times, face) for face, times in dice)\n dice_str = \"+\".join(faces)\n\n dropped_str = \"\"\n if self.dropped:\n dropped = self.dropped.items()\n dfaces = (\"%dx%d\" % (times, face) for face, times in dropped)\n dropped_str = \"[+%s]\" % (\"+\".join(dfaces),)\n\n plus_str = \"\"\n if self.addition:\n plus_str = \"{:+d}\".format(self.addition)\n\n return \"(%s%s)%s\" % (dice_str, dropped_str, plus_str)\n\n def get_sum(self):\n \"\"\"Get the sum of non-dropped dice and the addition.\"\"\"\n result = self.addition\n for face, times in self.dice.items():\n result += face * times\n return result\n\n def get_number_of_faces(self):\n \"\"\"Returns sum of different faces for dropped and not dropped dice\n\n This can be used to estimate, whether the result can be shown in\n compressed form in a reasonable amount of space.\n \"\"\"\n return len(self.dice) + len(self.dropped)\n\n\ndef _roll_dice(bot, dice_expression):\n result = re.search(\n r\"\"\"\n (?P<dice_num>\\d*)\n d\n (?P<dice_type>\\d+)\n (v(?P<drop_lowest>\\d+))?\n $\"\"\",\n dice_expression,\n re.IGNORECASE | re.VERBOSE)\n\n dice_num = 
int(result.group('dice_num') or 1)\n dice_type = int(result.group('dice_type'))\n\n # Dice can't have zero or a negative number of sides.\n if dice_type <= 0:\n bot.reply(\"I don't have any dice with %d sides. =(\" % dice_type)\n return None # Signal there was a problem\n\n # Upper limit for dice should be at most a million. Creating a dict with\n # more than a million elements already takes a noticeable amount of time\n # on a fast computer and ~55kB of memory.\n if dice_num > 1000:\n bot.reply('I only have 1000 dice. =(')\n return None # Signal there was a problem\n\n dice = DicePouch(dice_num, dice_type, 0)\n\n if result.group('drop_lowest'):\n drop = int(result.group('drop_lowest'))\n dice.drop_lowest(drop)\n\n return dice\n\n\[email protected](\"roll\")\[email protected](\"dice\")\[email protected](\"d\")\[email protected](\"medium\")\[email protected](\".roll 3d1+1\", 'You roll 3d1+1: (1+1+1)+1 = 4')\[email protected](\".roll 3d1v2+1\", 'You roll 3d1v2+1: (1[+1+1])+1 = 2')\[email protected](\".roll 2d4\", 'You roll 2d4: \\(\\d\\+\\d\\) = \\d', re=True)\[email protected](\".roll 100d1\", '[^:]*: \\(100x1\\) = 100', re=True)\[email protected](\".roll 1001d1\", 'I only have 1000 dice. =(')\[email protected](\".roll 1d1 + 1d1\", 'You roll 1d1 + 1d1: (1) + (1) = 2')\[email protected](\".roll 1d1+1d1\", 'You roll 1d1+1d1: (1)+(1) = 2')\ndef roll(bot, trigger):\n \"\"\".dice XdY[vZ][+N], rolls dice and reports the result.\n\n X is the number of dice. Y is the number of faces in the dice. Z is the\n number of lowest dice to be dropped from the result. N is the constant to\n be applied to the end result.\n \"\"\"\n # This regexp is only allowed to have one captured group, because having\n # more would alter the output of re.findall.\n dice_regexp = r\"\\d*d\\d+(?:v\\d+)?\"\n\n # Get a list of all dice expressions, evaluate them and then replace the\n # expressions in the original string with the results. 
Replacing is done\n # using string formatting, so %-characters must be escaped.\n if not trigger.group(2):\n return bot.reply(\"No dice to roll.\")\n arg_str = trigger.group(2)\n dice_expressions = re.findall(dice_regexp, arg_str)\n arg_str = arg_str.replace(\"%\", \"%%\")\n arg_str = re.sub(dice_regexp, \"%s\", arg_str)\n\n f = lambda dice_expr: _roll_dice (bot, dice_expr)\n dice = list(map(f, dice_expressions))\n\n if None in dice:\n # Stop computing roll if there was a problem rolling dice.\n return\n\n def _get_eval_str(dice):\n return \"(%d)\" % (dice.get_sum(),)\n\n def _get_pretty_str(dice):\n if dice.num <= 10:\n return dice.get_simple_string()\n elif dice.get_number_of_faces() <= 10:\n return dice.get_compressed_string()\n else:\n return \"(...)\"\n\n eval_str = arg_str % (tuple(map(_get_eval_str, dice)))\n pretty_str = arg_str % (tuple(map(_get_pretty_str, dice)))\n\n # Showing the actual error will hopefully give a better hint of what is\n # wrong with the syntax than a generic error message.\n try:\n result = eval_equation(eval_str)\n except Exception as e:\n bot.reply(\"SyntaxError, eval(%s), %s\" % (eval_str, e))\n return\n\n bot.reply(\"You roll %s: %s = %d\" % (\n trigger.group(2), pretty_str, result))\n\n\[email protected](\"choice\")\[email protected](\"ch\")\[email protected](\"choose\")\[email protected](\"medium\")\ndef choose(bot, trigger):\n \"\"\"\n .choice option1|option2|option3 - Makes a difficult choice easy.\n \"\"\"\n if not trigger.group(2):\n return bot.reply('I\\'d choose an option, but you didn\\'t give me any.')\n choices = re.split('[\\|\\\\\\\\\\/]', trigger.group(2))\n pick = random.choice(choices)\n return bot.reply('Your options: %s. My choice: %s' % (', '.join(choices), pick))\n\n\nif __name__ == \"__main__\":\n from willie.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "willie/modules/dice.py"}]} | 3,358 | 545 |
gh_patches_debug_404 | rasdani/github-patches | git_diff | pytorch__rl-1536 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] TruncatedNormal crashing when computing entropy
## Describe the bug
Calling `.entropy()` on a `TruncatedNormal` distribution causes the code to crash.
## To Reproduce
The first crash happened with a PPO agent that had the entropy bonus turned on and an actor parametrized with a `TruncatedNormal`.
A simple snippet to reproduce is the following:
```python
import torch
from torchrl.modules.distributions import IndependentNormal, TruncatedNormal
if __name__ == '__main__':
loc, scale = torch.zeros(1), torch.ones(1)
d1 = IndependentNormal(loc, scale)
print(d1.entropy())
d2 = TruncatedNormal(loc, scale)
print(d2.entropy())
```
```bash
tensor(1.4189)
Traceback (most recent call last):
File "/home/diego/Desktop/test.py", line 10, in <module>
print(d2.entropy())
File "/home/diego/miniconda3/envs/pytorch/lib/python3.10/site-packages/torch/distributions/independent.py", line 103, in entropy
entropy = self.base_dist.entropy()
TypeError: 'Tensor' object is not callable
```
## Expected behavior
The entropy value should be returned.
## System info
* Python 3.10.12
* torch 2.0.1
```python
import torchrl, numpy, sys
print(torchrl.__version__, numpy.__version__, sys.version, sys.platform)
```
```
0.1.1 1.25.1 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] linux
```
## Reason and Possible fixes
In the `TruncatedStandardNormal` class, the `self._entropy` attribute is a constant tensor computed at initialization, and `entropy` is declared as a `@property` that returns it. Accessing `TruncatedStandardNormal(loc, scale).entropy` therefore yields that tensor rather than a bound method, so calling it fails:
```python
import torch
from torchrl.modules.distributions.truncated_normal import TruncatedStandardNormal
loc, scale = torch.zeros(1), torch.ones(1)
print(TruncatedStandardNormal(loc, scale).entropy)
print(TruncatedStandardNormal(loc, scale).entropy())
```
```bash
tensor([-0.0104])
Traceback (most recent call last):
File "/home/diego/Desktop/test.py", line 5, in <module>
print(TruncatedStandardNormal(loc, scale).entropy())
TypeError: 'Tensor' object is not callable
```
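
A torchrl-independent sketch (plain Python, no real distribution) shows the same behaviour — because `entropy` is a `@property`, the attribute lookup already returns the stored value and the trailing `()` then tries to call it, which is exactly what `torch.distributions.Independent.entropy()` does via `self.base_dist.entropy()`:

```python
class Dist:
    def __init__(self):
        # stand-in for the tensor precomputed in TruncatedStandardNormal.__init__
        self._entropy = 1.4189

    @property
    def entropy(self):
        return self._entropy

d = Dist()
print(d.entropy)    # 1.4189 -- property access returns the cached value
print(d.entropy())  # TypeError: 'float' object is not callable
```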
## Checklist
- [x] I have checked that there is no similar issue in the repo (**required**)
- [x] I have read the [documentation](https://github.com/pytorch/rl/tree/main/docs/) (**required**)
- [x] I have provided a minimal working example to reproduce the bug (**required**)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchrl/modules/distributions/truncated_normal.py`
Content:
```
1 # Copyright (c) Meta Platforms, Inc. and affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 # from https://github.com/toshas/torch_truncnorm
8
9 import math
10 from numbers import Number
11
12 import torch
13 from torch.distributions import constraints, Distribution
14 from torch.distributions.utils import broadcast_all
15
16 CONST_SQRT_2 = math.sqrt(2)
17 CONST_INV_SQRT_2PI = 1 / math.sqrt(2 * math.pi)
18 CONST_INV_SQRT_2 = 1 / math.sqrt(2)
19 CONST_LOG_INV_SQRT_2PI = math.log(CONST_INV_SQRT_2PI)
20 CONST_LOG_SQRT_2PI_E = 0.5 * math.log(2 * math.pi * math.e)
21
22
23 class TruncatedStandardNormal(Distribution):
24 """Truncated Standard Normal distribution.
25
26 Source: https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
27 """
28
29 arg_constraints = {
30 "a": constraints.real,
31 "b": constraints.real,
32 }
33 has_rsample = True
34 eps = 1e-6
35
36 def __init__(self, a, b, validate_args=None):
37 self.a, self.b = broadcast_all(a, b)
38 if isinstance(a, Number) and isinstance(b, Number):
39 batch_shape = torch.Size()
40 else:
41 batch_shape = self.a.size()
42 super(TruncatedStandardNormal, self).__init__(
43 batch_shape, validate_args=validate_args
44 )
45 if self.a.dtype != self.b.dtype:
46 raise ValueError("Truncation bounds types are different")
47 if any(
48 (self.a >= self.b)
49 .view(
50 -1,
51 )
52 .tolist()
53 ):
54 raise ValueError("Incorrect truncation range")
55 eps = self.eps
56 self._dtype_min_gt_0 = eps
57 self._dtype_max_lt_1 = 1 - eps
58 self._little_phi_a = self._little_phi(self.a)
59 self._little_phi_b = self._little_phi(self.b)
60 self._big_phi_a = self._big_phi(self.a)
61 self._big_phi_b = self._big_phi(self.b)
62 self._Z = (self._big_phi_b - self._big_phi_a).clamp(eps, 1 - eps)
63 self._log_Z = self._Z.log()
64 little_phi_coeff_a = torch.nan_to_num(self.a, nan=math.nan)
65 little_phi_coeff_b = torch.nan_to_num(self.b, nan=math.nan)
66 self._lpbb_m_lpaa_d_Z = (
67 self._little_phi_b * little_phi_coeff_b
68 - self._little_phi_a * little_phi_coeff_a
69 ) / self._Z
70 self._mean = -(self._little_phi_b - self._little_phi_a) / self._Z
71 self._variance = (
72 1
73 - self._lpbb_m_lpaa_d_Z
74 - ((self._little_phi_b - self._little_phi_a) / self._Z) ** 2
75 )
76 self._entropy = CONST_LOG_SQRT_2PI_E + self._log_Z - 0.5 * self._lpbb_m_lpaa_d_Z
77
78 @constraints.dependent_property
79 def support(self):
80 return constraints.interval(self.a, self.b)
81
82 @property
83 def mean(self):
84 return self._mean
85
86 @property
87 def variance(self):
88 return self._variance
89
90 @property
91 def entropy(self):
92 return self._entropy
93
94 @property
95 def auc(self):
96 return self._Z
97
98 @staticmethod
99 def _little_phi(x):
100 return (-(x**2) * 0.5).exp() * CONST_INV_SQRT_2PI
101
102 def _big_phi(self, x):
103 phi = 0.5 * (1 + (x * CONST_INV_SQRT_2).erf())
104 return phi.clamp(self.eps, 1 - self.eps)
105
106 @staticmethod
107 def _inv_big_phi(x):
108 return CONST_SQRT_2 * (2 * x - 1).erfinv()
109
110 def cdf(self, value):
111 if self._validate_args:
112 self._validate_sample(value)
113 return ((self._big_phi(value) - self._big_phi_a) / self._Z).clamp(0, 1)
114
115 def icdf(self, value):
116 y = self._big_phi_a + value * self._Z
117 y = y.clamp(self.eps, 1 - self.eps)
118 return self._inv_big_phi(y)
119
120 def log_prob(self, value):
121 if self._validate_args:
122 self._validate_sample(value)
123 return CONST_LOG_INV_SQRT_2PI - self._log_Z - (value**2) * 0.5
124
125 def rsample(self, sample_shape=None):
126 if sample_shape is None:
127 sample_shape = torch.Size([])
128 shape = self._extended_shape(sample_shape)
129 p = torch.empty(shape, device=self.a.device).uniform_(
130 self._dtype_min_gt_0, self._dtype_max_lt_1
131 )
132 return self.icdf(p)
133
134
135 class TruncatedNormal(TruncatedStandardNormal):
136 """Truncated Normal distribution.
137
138 https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf
139 """
140
141 has_rsample = True
142
143 def __init__(self, loc, scale, a, b, validate_args=None):
144 scale = scale.clamp_min(self.eps)
145 self.loc, self.scale, a, b = broadcast_all(loc, scale, a, b)
146 self._non_std_a = a
147 self._non_std_b = b
148 a = (a - self.loc) / self.scale
149 b = (b - self.loc) / self.scale
150 super(TruncatedNormal, self).__init__(a, b, validate_args=validate_args)
151 self._log_scale = self.scale.log()
152 self._mean = self._mean * self.scale + self.loc
153 self._variance = self._variance * self.scale**2
154 self._entropy += self._log_scale
155
156 def _to_std_rv(self, value):
157 return (value - self.loc) / self.scale
158
159 def _from_std_rv(self, value):
160 return value * self.scale + self.loc
161
162 def cdf(self, value):
163 return super(TruncatedNormal, self).cdf(self._to_std_rv(value))
164
165 def icdf(self, value):
166 sample = self._from_std_rv(super().icdf(value))
167
168 # clamp data but keep gradients
169 sample_clip = torch.stack(
170 [sample.detach(), self._non_std_a.detach().expand_as(sample)], 0
171 ).max(0)[0]
172 sample_clip = torch.stack(
173 [sample_clip, self._non_std_b.detach().expand_as(sample)], 0
174 ).min(0)[0]
175 sample.data.copy_(sample_clip)
176 return sample
177
178 def log_prob(self, value):
179 value = self._to_std_rv(value)
180 return super(TruncatedNormal, self).log_prob(value) - self._log_scale
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchrl/modules/distributions/truncated_normal.py b/torchrl/modules/distributions/truncated_normal.py
--- a/torchrl/modules/distributions/truncated_normal.py
+++ b/torchrl/modules/distributions/truncated_normal.py
@@ -87,7 +87,6 @@
def variance(self):
return self._variance
- @property
def entropy(self):
return self._entropy
| {"golden_diff": "diff --git a/torchrl/modules/distributions/truncated_normal.py b/torchrl/modules/distributions/truncated_normal.py\n--- a/torchrl/modules/distributions/truncated_normal.py\n+++ b/torchrl/modules/distributions/truncated_normal.py\n@@ -87,7 +87,6 @@\n def variance(self):\n return self._variance\n \n- @property\n def entropy(self):\n return self._entropy\n", "issue": "[BUG] TruncatedNormal crashing when computing entropy\n## Describe the bug\r\n\r\nCalling `.entropy()` on a `TruncatedNormal` distribution causes the code to crash.\r\n\r\n## To Reproduce\r\n\r\nFirst crash happened using a PPO agent with entropy bonus turned on and actor parametrized with a `TruncatedNormal`.\r\nA simple snippet to reproduce is the following:\r\n\r\n```python\r\nimport torch\r\nfrom torchrl.modules.distributions import IndependentNormal, TruncatedNormal\r\n\r\nif __name__ == '__main__':\r\n\tloc, scale = torch.zeros(1), torch.ones(1)\r\n\td1 = IndependentNormal(loc, scale)\r\n\tprint(d1.entropy())\r\n\t\r\n\td2 = TruncatedNormal(loc, scale)\r\n\tprint(d2.entropy())\r\n```\r\n\r\n```bash\r\ntensor(1.4189)\r\nTraceback (most recent call last):\r\n File \"/home/diego/Desktop/test.py\", line 10, in <module>\r\n print(d2.entropy())\r\n File \"/home/diego/miniconda3/envs/pytorch/lib/python3.10/site-packages/torch/distributions/independent.py\", line 103, in entropy\r\n entropy = self.base_dist.entropy()\r\nTypeError: 'Tensor' object is not callable\r\n\r\n```\r\n\r\n## Expected behavior\r\n\r\nThe entropy value should be returned.\r\n\r\n## System info\r\n* Python 3.10.12\r\n* torch 2.0.1\r\n\r\n```python\r\nimport torchrl, numpy, sys\r\nprint(torchrl.__version__, numpy.__version__, sys.version, sys.platform)\r\n```\r\n```\r\n0.1.1 1.25.1 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] linux\r\n```\r\n## Reason and Possible fixes\r\n\r\nIn the `TruncatedStandardNormal` class, the `self._entropy` attribute is a constant tensor computed at initialization. For some reason, calling `TruncatedStandardNormal.entropy` returns the `self._entropy` attribute, rather than the `entropy()` property:\r\n\r\n```python\r\nimport torch\r\nfrom torchrl.modules.distributions.truncated_normal import TruncatedStandardNormal\r\nloc, scale = torch.zeros(1), torch.ones(1)\r\nprint(TruncatedStandardNormal(loc, scale).entropy)\r\nprint(TruncatedStandardNormal(loc, scale).entropy())\r\n```\r\n\r\n```bash\r\ntensor([-0.0104])\r\nTraceback (most recent call last):\r\n File \"/home/diego/Desktop/test.py\", line 5, in <module>\r\n print(TruncatedStandardNormal(loc, scale).entropy())\r\nTypeError: 'Tensor' object is not callable\r\n\r\n```\r\n\r\n## Checklist\r\n\r\n- [x] I have checked that there is no similar issue in the repo (**required**)\r\n- [x] I have read the [documentation](https://github.com/pytorch/rl/tree/main/docs/) (**required**)\r\n- [x] I have provided a minimal working example to reproduce the bug (**required**)\r\n\n", "before_files": [{"content": "# Copyright (c) Meta Platforms, Inc. 
and affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# from https://github.com/toshas/torch_truncnorm\n\nimport math\nfrom numbers import Number\n\nimport torch\nfrom torch.distributions import constraints, Distribution\nfrom torch.distributions.utils import broadcast_all\n\nCONST_SQRT_2 = math.sqrt(2)\nCONST_INV_SQRT_2PI = 1 / math.sqrt(2 * math.pi)\nCONST_INV_SQRT_2 = 1 / math.sqrt(2)\nCONST_LOG_INV_SQRT_2PI = math.log(CONST_INV_SQRT_2PI)\nCONST_LOG_SQRT_2PI_E = 0.5 * math.log(2 * math.pi * math.e)\n\n\nclass TruncatedStandardNormal(Distribution):\n \"\"\"Truncated Standard Normal distribution.\n\n Source: https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf\n \"\"\"\n\n arg_constraints = {\n \"a\": constraints.real,\n \"b\": constraints.real,\n }\n has_rsample = True\n eps = 1e-6\n\n def __init__(self, a, b, validate_args=None):\n self.a, self.b = broadcast_all(a, b)\n if isinstance(a, Number) and isinstance(b, Number):\n batch_shape = torch.Size()\n else:\n batch_shape = self.a.size()\n super(TruncatedStandardNormal, self).__init__(\n batch_shape, validate_args=validate_args\n )\n if self.a.dtype != self.b.dtype:\n raise ValueError(\"Truncation bounds types are different\")\n if any(\n (self.a >= self.b)\n .view(\n -1,\n )\n .tolist()\n ):\n raise ValueError(\"Incorrect truncation range\")\n eps = self.eps\n self._dtype_min_gt_0 = eps\n self._dtype_max_lt_1 = 1 - eps\n self._little_phi_a = self._little_phi(self.a)\n self._little_phi_b = self._little_phi(self.b)\n self._big_phi_a = self._big_phi(self.a)\n self._big_phi_b = self._big_phi(self.b)\n self._Z = (self._big_phi_b - self._big_phi_a).clamp(eps, 1 - eps)\n self._log_Z = self._Z.log()\n little_phi_coeff_a = torch.nan_to_num(self.a, nan=math.nan)\n little_phi_coeff_b = torch.nan_to_num(self.b, nan=math.nan)\n self._lpbb_m_lpaa_d_Z = (\n self._little_phi_b * little_phi_coeff_b\n - self._little_phi_a * little_phi_coeff_a\n ) / self._Z\n self._mean = -(self._little_phi_b - self._little_phi_a) / self._Z\n self._variance = (\n 1\n - self._lpbb_m_lpaa_d_Z\n - ((self._little_phi_b - self._little_phi_a) / self._Z) ** 2\n )\n self._entropy = CONST_LOG_SQRT_2PI_E + self._log_Z - 0.5 * self._lpbb_m_lpaa_d_Z\n\n @constraints.dependent_property\n def support(self):\n return constraints.interval(self.a, self.b)\n\n @property\n def mean(self):\n return self._mean\n\n @property\n def variance(self):\n return self._variance\n\n @property\n def entropy(self):\n return self._entropy\n\n @property\n def auc(self):\n return self._Z\n\n @staticmethod\n def _little_phi(x):\n return (-(x**2) * 0.5).exp() * CONST_INV_SQRT_2PI\n\n def _big_phi(self, x):\n phi = 0.5 * (1 + (x * CONST_INV_SQRT_2).erf())\n return phi.clamp(self.eps, 1 - self.eps)\n\n @staticmethod\n def _inv_big_phi(x):\n return CONST_SQRT_2 * (2 * x - 1).erfinv()\n\n def cdf(self, value):\n if self._validate_args:\n self._validate_sample(value)\n return ((self._big_phi(value) - self._big_phi_a) / self._Z).clamp(0, 1)\n\n def icdf(self, value):\n y = self._big_phi_a + value * self._Z\n y = y.clamp(self.eps, 1 - self.eps)\n return self._inv_big_phi(y)\n\n def log_prob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n return CONST_LOG_INV_SQRT_2PI - self._log_Z - (value**2) * 0.5\n\n def rsample(self, sample_shape=None):\n if sample_shape is None:\n sample_shape = torch.Size([])\n shape = self._extended_shape(sample_shape)\n p = torch.empty(shape, 
device=self.a.device).uniform_(\n self._dtype_min_gt_0, self._dtype_max_lt_1\n )\n return self.icdf(p)\n\n\nclass TruncatedNormal(TruncatedStandardNormal):\n \"\"\"Truncated Normal distribution.\n\n https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf\n \"\"\"\n\n has_rsample = True\n\n def __init__(self, loc, scale, a, b, validate_args=None):\n scale = scale.clamp_min(self.eps)\n self.loc, self.scale, a, b = broadcast_all(loc, scale, a, b)\n self._non_std_a = a\n self._non_std_b = b\n a = (a - self.loc) / self.scale\n b = (b - self.loc) / self.scale\n super(TruncatedNormal, self).__init__(a, b, validate_args=validate_args)\n self._log_scale = self.scale.log()\n self._mean = self._mean * self.scale + self.loc\n self._variance = self._variance * self.scale**2\n self._entropy += self._log_scale\n\n def _to_std_rv(self, value):\n return (value - self.loc) / self.scale\n\n def _from_std_rv(self, value):\n return value * self.scale + self.loc\n\n def cdf(self, value):\n return super(TruncatedNormal, self).cdf(self._to_std_rv(value))\n\n def icdf(self, value):\n sample = self._from_std_rv(super().icdf(value))\n\n # clamp data but keep gradients\n sample_clip = torch.stack(\n [sample.detach(), self._non_std_a.detach().expand_as(sample)], 0\n ).max(0)[0]\n sample_clip = torch.stack(\n [sample_clip, self._non_std_b.detach().expand_as(sample)], 0\n ).min(0)[0]\n sample.data.copy_(sample_clip)\n return sample\n\n def log_prob(self, value):\n value = self._to_std_rv(value)\n return super(TruncatedNormal, self).log_prob(value) - self._log_scale\n", "path": "torchrl/modules/distributions/truncated_normal.py"}], "after_files": [{"content": "# Copyright (c) Meta Platforms, Inc. and affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# from https://github.com/toshas/torch_truncnorm\n\nimport math\nfrom numbers import Number\n\nimport torch\nfrom torch.distributions import constraints, Distribution\nfrom torch.distributions.utils import broadcast_all\n\nCONST_SQRT_2 = math.sqrt(2)\nCONST_INV_SQRT_2PI = 1 / math.sqrt(2 * math.pi)\nCONST_INV_SQRT_2 = 1 / math.sqrt(2)\nCONST_LOG_INV_SQRT_2PI = math.log(CONST_INV_SQRT_2PI)\nCONST_LOG_SQRT_2PI_E = 0.5 * math.log(2 * math.pi * math.e)\n\n\nclass TruncatedStandardNormal(Distribution):\n \"\"\"Truncated Standard Normal distribution.\n\n Source: https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf\n \"\"\"\n\n arg_constraints = {\n \"a\": constraints.real,\n \"b\": constraints.real,\n }\n has_rsample = True\n eps = 1e-6\n\n def __init__(self, a, b, validate_args=None):\n self.a, self.b = broadcast_all(a, b)\n if isinstance(a, Number) and isinstance(b, Number):\n batch_shape = torch.Size()\n else:\n batch_shape = self.a.size()\n super(TruncatedStandardNormal, self).__init__(\n batch_shape, validate_args=validate_args\n )\n if self.a.dtype != self.b.dtype:\n raise ValueError(\"Truncation bounds types are different\")\n if any(\n (self.a >= self.b)\n .view(\n -1,\n )\n .tolist()\n ):\n raise ValueError(\"Incorrect truncation range\")\n eps = self.eps\n self._dtype_min_gt_0 = eps\n self._dtype_max_lt_1 = 1 - eps\n self._little_phi_a = self._little_phi(self.a)\n self._little_phi_b = self._little_phi(self.b)\n self._big_phi_a = self._big_phi(self.a)\n self._big_phi_b = self._big_phi(self.b)\n self._Z = (self._big_phi_b - self._big_phi_a).clamp(eps, 1 - eps)\n self._log_Z = self._Z.log()\n little_phi_coeff_a = torch.nan_to_num(self.a, 
nan=math.nan)\n little_phi_coeff_b = torch.nan_to_num(self.b, nan=math.nan)\n self._lpbb_m_lpaa_d_Z = (\n self._little_phi_b * little_phi_coeff_b\n - self._little_phi_a * little_phi_coeff_a\n ) / self._Z\n self._mean = -(self._little_phi_b - self._little_phi_a) / self._Z\n self._variance = (\n 1\n - self._lpbb_m_lpaa_d_Z\n - ((self._little_phi_b - self._little_phi_a) / self._Z) ** 2\n )\n self._entropy = CONST_LOG_SQRT_2PI_E + self._log_Z - 0.5 * self._lpbb_m_lpaa_d_Z\n\n @constraints.dependent_property\n def support(self):\n return constraints.interval(self.a, self.b)\n\n @property\n def mean(self):\n return self._mean\n\n @property\n def variance(self):\n return self._variance\n\n def entropy(self):\n return self._entropy\n\n @property\n def auc(self):\n return self._Z\n\n @staticmethod\n def _little_phi(x):\n return (-(x**2) * 0.5).exp() * CONST_INV_SQRT_2PI\n\n def _big_phi(self, x):\n phi = 0.5 * (1 + (x * CONST_INV_SQRT_2).erf())\n return phi.clamp(self.eps, 1 - self.eps)\n\n @staticmethod\n def _inv_big_phi(x):\n return CONST_SQRT_2 * (2 * x - 1).erfinv()\n\n def cdf(self, value):\n if self._validate_args:\n self._validate_sample(value)\n return ((self._big_phi(value) - self._big_phi_a) / self._Z).clamp(0, 1)\n\n def icdf(self, value):\n y = self._big_phi_a + value * self._Z\n y = y.clamp(self.eps, 1 - self.eps)\n return self._inv_big_phi(y)\n\n def log_prob(self, value):\n if self._validate_args:\n self._validate_sample(value)\n return CONST_LOG_INV_SQRT_2PI - self._log_Z - (value**2) * 0.5\n\n def rsample(self, sample_shape=None):\n if sample_shape is None:\n sample_shape = torch.Size([])\n shape = self._extended_shape(sample_shape)\n p = torch.empty(shape, device=self.a.device).uniform_(\n self._dtype_min_gt_0, self._dtype_max_lt_1\n )\n return self.icdf(p)\n\n\nclass TruncatedNormal(TruncatedStandardNormal):\n \"\"\"Truncated Normal distribution.\n\n https://people.sc.fsu.edu/~jburkardt/presentations/truncated_normal.pdf\n \"\"\"\n\n has_rsample = True\n\n def __init__(self, loc, scale, a, b, validate_args=None):\n scale = scale.clamp_min(self.eps)\n self.loc, self.scale, a, b = broadcast_all(loc, scale, a, b)\n self._non_std_a = a\n self._non_std_b = b\n a = (a - self.loc) / self.scale\n b = (b - self.loc) / self.scale\n super(TruncatedNormal, self).__init__(a, b, validate_args=validate_args)\n self._log_scale = self.scale.log()\n self._mean = self._mean * self.scale + self.loc\n self._variance = self._variance * self.scale**2\n self._entropy += self._log_scale\n\n def _to_std_rv(self, value):\n return (value - self.loc) / self.scale\n\n def _from_std_rv(self, value):\n return value * self.scale + self.loc\n\n def cdf(self, value):\n return super(TruncatedNormal, self).cdf(self._to_std_rv(value))\n\n def icdf(self, value):\n sample = self._from_std_rv(super().icdf(value))\n\n # clamp data but keep gradients\n sample_clip = torch.stack(\n [sample.detach(), self._non_std_a.detach().expand_as(sample)], 0\n ).max(0)[0]\n sample_clip = torch.stack(\n [sample_clip, self._non_std_b.detach().expand_as(sample)], 0\n ).min(0)[0]\n sample.data.copy_(sample_clip)\n return sample\n\n def log_prob(self, value):\n value = self._to_std_rv(value)\n return super(TruncatedNormal, self).log_prob(value) - self._log_scale\n", "path": "torchrl/modules/distributions/truncated_normal.py"}]} | 2,935 | 92 |
gh_patches_debug_32632 | rasdani/github-patches | git_diff | docker__docker-py-727 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
split_port() does not properly handle ":80" or "127.0.0.1:"
Initially reported as https://github.com/docker/compose/issues/1887
Example:
``` python
def test_port_only_with_colon(self):
self.assertRaises(ValueError,
lambda: split_port(":80"))
def test_host_only_with_colon(self):
self.assertRaises(ValueError,
lambda: split_port("localhost:"))
```
Results:
```
======================================================================
ERROR: test_host_only_with_colon (__main__.UtilsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests/utils_test.py", line 428, in test_host_only_with_colon
lambda: split_port("localhost:"))
File "/usr/lib/python2.7/unittest/case.py", line 473, in assertRaises
callableObj(*args, **kwargs)
File "tests/utils_test.py", line 428, in <lambda>
lambda: split_port("localhost:"))
File "/home/mark/Projects/docker-py/docker/utils/ports/ports.py", line 77, in split_port
if len(internal_range) != len(external_range):
TypeError: object of type 'NoneType' has no len()
======================================================================
ERROR: test_port_only_with_colon (__main__.UtilsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests/utils_test.py", line 424, in test_port_only_with_colon
lambda: split_port(":80"))
File "/usr/lib/python2.7/unittest/case.py", line 473, in assertRaises
callableObj(*args, **kwargs)
File "tests/utils_test.py", line 424, in <lambda>
lambda: split_port(":80"))
File "/home/mark/Projects/docker-py/docker/utils/ports/ports.py", line 77, in split_port
if len(internal_range) != len(external_range):
TypeError: object of type 'NoneType' has no len()
```
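
A stripped-down sketch of the failing code path (keeping only the `if not port: return None` guard from `to_port_range` in the file below) reproduces the same `TypeError`:

```python
def to_port_range(port):
    # same guard as in docker/utils/ports/ports.py: empty segment -> None
    if not port:
        return None
    return [port]

external_port, internal_port = ":80".split(":")  # '', '80'
internal_range = to_port_range(internal_port)    # ['80']
external_range = to_port_range(external_port)    # None
if len(internal_range) != len(external_range):   # len(None) -> TypeError
    raise ValueError("Port ranges don't match in length")
```

For `"localhost:"` the roles are swapped — `internal_range` comes back as `None` — but the same `len(None)` call is what blows up.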
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/utils/ports/ports.py`
Content:
```
1
2
3 def add_port_mapping(port_bindings, internal_port, external):
4 if internal_port in port_bindings:
5 port_bindings[internal_port].append(external)
6 else:
7 port_bindings[internal_port] = [external]
8
9
10 def add_port(port_bindings, internal_port_range, external_range):
11 if external_range is None:
12 for internal_port in internal_port_range:
13 add_port_mapping(port_bindings, internal_port, None)
14 else:
15 ports = zip(internal_port_range, external_range)
16 for internal_port, external_port in ports:
17 add_port_mapping(port_bindings, internal_port, external_port)
18
19
20 def build_port_bindings(ports):
21 port_bindings = {}
22 for port in ports:
23 internal_port_range, external_range = split_port(port)
24 add_port(port_bindings, internal_port_range, external_range)
25 return port_bindings
26
27
28 def to_port_range(port):
29 if not port:
30 return None
31
32 protocol = ""
33 if "/" in port:
34 parts = port.split("/")
35 if len(parts) != 2:
36 raise ValueError('Invalid port "%s", should be '
37 '[[remote_ip:]remote_port[-remote_port]:]'
38 'port[/protocol]' % port)
39 port, protocol = parts
40 protocol = "/" + protocol
41
42 parts = str(port).split('-')
43
44 if len(parts) == 1:
45 return ["%s%s" % (port, protocol)]
46
47 if len(parts) == 2:
48 full_port_range = range(int(parts[0]), int(parts[1]) + 1)
49 return ["%s%s" % (p, protocol) for p in full_port_range]
50
51 raise ValueError('Invalid port range "%s", should be '
52 'port or startport-endport' % port)
53
54
55 def split_port(port):
56 parts = str(port).split(':')
57 if not 1 <= len(parts) <= 3:
58 raise ValueError('Invalid port "%s", should be '
59 '[[remote_ip:]remote_port:]port[/protocol]' % port)
60
61 if len(parts) == 1:
62 internal_port, = parts
63 return to_port_range(internal_port), None
64 if len(parts) == 2:
65 external_port, internal_port = parts
66
67 internal_range = to_port_range(internal_port)
68 external_range = to_port_range(external_port)
69 if len(internal_range) != len(external_range):
70 raise ValueError('Port ranges don\'t match in length')
71
72 return internal_range, external_range
73
74 external_ip, external_port, internal_port = parts
75 internal_range = to_port_range(internal_port)
76 external_range = to_port_range(external_port)
77 if not external_range:
78 external_range = [None] * len(internal_range)
79
80 if len(internal_range) != len(external_range):
81 raise ValueError('Port ranges don\'t match in length')
82
83 return internal_range, [(external_ip, ex_port or None)
84 for ex_port in external_range]
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/utils/ports/ports.py b/docker/utils/ports/ports.py
--- a/docker/utils/ports/ports.py
+++ b/docker/utils/ports/ports.py
@@ -1,5 +1,4 @@
-
def add_port_mapping(port_bindings, internal_port, external):
if internal_port in port_bindings:
port_bindings[internal_port].append(external)
@@ -33,9 +32,8 @@
if "/" in port:
parts = port.split("/")
if len(parts) != 2:
- raise ValueError('Invalid port "%s", should be '
- '[[remote_ip:]remote_port[-remote_port]:]'
- 'port[/protocol]' % port)
+ _raise_invalid_port(port)
+
port, protocol = parts
protocol = "/" + protocol
@@ -52,11 +50,17 @@
'port or startport-endport' % port)
+def _raise_invalid_port(port):
+ raise ValueError('Invalid port "%s", should be '
+ '[[remote_ip:]remote_port[-remote_port]:]'
+ 'port[/protocol]' % port)
+
+
def split_port(port):
parts = str(port).split(':')
+
if not 1 <= len(parts) <= 3:
- raise ValueError('Invalid port "%s", should be '
- '[[remote_ip:]remote_port:]port[/protocol]' % port)
+ _raise_invalid_port(port)
if len(parts) == 1:
internal_port, = parts
@@ -66,6 +70,10 @@
internal_range = to_port_range(internal_port)
external_range = to_port_range(external_port)
+
+ if internal_range is None or external_range is None:
+ _raise_invalid_port(port)
+
if len(internal_range) != len(external_range):
raise ValueError('Port ranges don\'t match in length')
| {"golden_diff": "diff --git a/docker/utils/ports/ports.py b/docker/utils/ports/ports.py\n--- a/docker/utils/ports/ports.py\n+++ b/docker/utils/ports/ports.py\n@@ -1,5 +1,4 @@\n \n-\n def add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n@@ -33,9 +32,8 @@\n if \"/\" in port:\n parts = port.split(\"/\")\n if len(parts) != 2:\n- raise ValueError('Invalid port \"%s\", should be '\n- '[[remote_ip:]remote_port[-remote_port]:]'\n- 'port[/protocol]' % port)\n+ _raise_invalid_port(port)\n+\n port, protocol = parts\n protocol = \"/\" + protocol\n \n@@ -52,11 +50,17 @@\n 'port or startport-endport' % port)\n \n \n+def _raise_invalid_port(port):\n+ raise ValueError('Invalid port \"%s\", should be '\n+ '[[remote_ip:]remote_port[-remote_port]:]'\n+ 'port[/protocol]' % port)\n+\n+\n def split_port(port):\n parts = str(port).split(':')\n+\n if not 1 <= len(parts) <= 3:\n- raise ValueError('Invalid port \"%s\", should be '\n- '[[remote_ip:]remote_port:]port[/protocol]' % port)\n+ _raise_invalid_port(port)\n \n if len(parts) == 1:\n internal_port, = parts\n@@ -66,6 +70,10 @@\n \n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n+\n+ if internal_range is None or external_range is None:\n+ _raise_invalid_port(port)\n+\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n", "issue": "split_port() does not properly handle \":80\" or \"127.0.0.1:\" properly\nInitially reported as https://github.com/docker/compose/issues/1887 \n\nExample:\n\n``` python\n def test_port_only_with_colon(self):\n self.assertRaises(ValueError,\n lambda: split_port(\":80\"))\n\n def test_host_only_with_colon(self):\n self.assertRaises(ValueError,\n lambda: split_port(\"localhost:\"))\n```\n\nResults:\n\n```\n======================================================================\nERROR: test_host_only_with_colon (__main__.UtilsTest)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"tests/utils_test.py\", line 428, in test_host_only_with_colon\n lambda: split_port(\"localhost:\"))\n File \"/usr/lib/python2.7/unittest/case.py\", line 473, in assertRaises\n callableObj(*args, **kwargs)\n File \"tests/utils_test.py\", line 428, in <lambda>\n lambda: split_port(\"localhost:\"))\n File \"/home/mark/Projects/docker-py/docker/utils/ports/ports.py\", line 77, in split_port\n if len(internal_range) != len(external_range):\nTypeError: object of type 'NoneType' has no len()\n\n======================================================================\nERROR: test_port_only_with_colon (__main__.UtilsTest)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"tests/utils_test.py\", line 424, in test_port_only_with_colon\n lambda: split_port(\":80\"))\n File \"/usr/lib/python2.7/unittest/case.py\", line 473, in assertRaises\n callableObj(*args, **kwargs)\n File \"tests/utils_test.py\", line 424, in <lambda>\n lambda: split_port(\":80\"))\n File \"/home/mark/Projects/docker-py/docker/utils/ports/ports.py\", line 77, in split_port\n if len(internal_range) != len(external_range):\nTypeError: object of type 'NoneType' has no len()\n```\n\n", "before_files": [{"content": "\n\ndef add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n else:\n 
port_bindings[internal_port] = [external]\n\n\ndef add_port(port_bindings, internal_port_range, external_range):\n if external_range is None:\n for internal_port in internal_port_range:\n add_port_mapping(port_bindings, internal_port, None)\n else:\n ports = zip(internal_port_range, external_range)\n for internal_port, external_port in ports:\n add_port_mapping(port_bindings, internal_port, external_port)\n\n\ndef build_port_bindings(ports):\n port_bindings = {}\n for port in ports:\n internal_port_range, external_range = split_port(port)\n add_port(port_bindings, internal_port_range, external_range)\n return port_bindings\n\n\ndef to_port_range(port):\n if not port:\n return None\n\n protocol = \"\"\n if \"/\" in port:\n parts = port.split(\"/\")\n if len(parts) != 2:\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port[-remote_port]:]'\n 'port[/protocol]' % port)\n port, protocol = parts\n protocol = \"/\" + protocol\n\n parts = str(port).split('-')\n\n if len(parts) == 1:\n return [\"%s%s\" % (port, protocol)]\n\n if len(parts) == 2:\n full_port_range = range(int(parts[0]), int(parts[1]) + 1)\n return [\"%s%s\" % (p, protocol) for p in full_port_range]\n\n raise ValueError('Invalid port range \"%s\", should be '\n 'port or startport-endport' % port)\n\n\ndef split_port(port):\n parts = str(port).split(':')\n if not 1 <= len(parts) <= 3:\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port:]port[/protocol]' % port)\n\n if len(parts) == 1:\n internal_port, = parts\n return to_port_range(internal_port), None\n if len(parts) == 2:\n external_port, internal_port = parts\n\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, external_range\n\n external_ip, external_port, internal_port = parts\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n if not external_range:\n external_range = [None] * len(internal_range)\n\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, [(external_ip, ex_port or None)\n for ex_port in external_range]\n", "path": "docker/utils/ports/ports.py"}], "after_files": [{"content": "\ndef add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n else:\n port_bindings[internal_port] = [external]\n\n\ndef add_port(port_bindings, internal_port_range, external_range):\n if external_range is None:\n for internal_port in internal_port_range:\n add_port_mapping(port_bindings, internal_port, None)\n else:\n ports = zip(internal_port_range, external_range)\n for internal_port, external_port in ports:\n add_port_mapping(port_bindings, internal_port, external_port)\n\n\ndef build_port_bindings(ports):\n port_bindings = {}\n for port in ports:\n internal_port_range, external_range = split_port(port)\n add_port(port_bindings, internal_port_range, external_range)\n return port_bindings\n\n\ndef to_port_range(port):\n if not port:\n return None\n\n protocol = \"\"\n if \"/\" in port:\n parts = port.split(\"/\")\n if len(parts) != 2:\n _raise_invalid_port(port)\n\n port, protocol = parts\n protocol = \"/\" + protocol\n\n parts = str(port).split('-')\n\n if len(parts) == 1:\n return [\"%s%s\" % (port, protocol)]\n\n if len(parts) == 2:\n 
full_port_range = range(int(parts[0]), int(parts[1]) + 1)\n return [\"%s%s\" % (p, protocol) for p in full_port_range]\n\n raise ValueError('Invalid port range \"%s\", should be '\n 'port or startport-endport' % port)\n\n\ndef _raise_invalid_port(port):\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port[-remote_port]:]'\n 'port[/protocol]' % port)\n\n\ndef split_port(port):\n parts = str(port).split(':')\n\n if not 1 <= len(parts) <= 3:\n _raise_invalid_port(port)\n\n if len(parts) == 1:\n internal_port, = parts\n return to_port_range(internal_port), None\n if len(parts) == 2:\n external_port, internal_port = parts\n\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n\n if internal_range is None or external_range is None:\n _raise_invalid_port(port)\n\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, external_range\n\n external_ip, external_port, internal_port = parts\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n if not external_range:\n external_range = [None] * len(internal_range)\n\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, [(external_ip, ex_port or None)\n for ex_port in external_range]\n", "path": "docker/utils/ports/ports.py"}]} | 1,499 | 416 |
gh_patches_debug_10847 | rasdani/github-patches | git_diff | translate__translate-3709 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
YAML serialization raises an exception when a node disappears
https://github.com/translate/translate/blob/e5d4d38fbcc7fb310683e7b12f9ae7deab9d7788/translate/storage/yaml.py#L142
```Exception Value: 'str' object does not support item assignment```
In my case (through weblate), the existing file has:
```
it:
base:
path: Italian path
```
And a new node is used in the source translation (`base->path->sublevel`):
```
en:
base:
path:
sublevel: Now I want these nested actually
```
The code in `serialize` will raise an exception on line 154 (and make the file empty).
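
A simplified copy of the `nested_set` helper from `serialize` (plain dicts, list-index handling dropped) shows where it goes wrong — the recursion reaches the existing string value and then tries to index-assign into it:

```python
def nested_set(target, path, value):
    # simplified from YAMLFile.serialize below
    if len(path) > 1:
        if path[0] not in target:
            target[path[0]] = {}
        nested_set(target[path[0]], path[1:], value)
    else:
        target[path[0]] = value

units = {}
nested_set(units, ['base', 'path'], 'Italian path')        # unit from the existing file
nested_set(units, ['base', 'path', 'sublevel'], 'nested')   # unit for the new nested key
# the second call ends up doing 'Italian path'['sublevel'] = 'nested'
# -> TypeError: 'str' object does not support item assignment
```

Presumably the output file is opened before `serialize` writes anything, which is why the exception leaves it empty.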
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `translate/storage/yaml.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2016 Michal Čihař
4 #
5 # This file is part of the Translate Toolkit.
6 #
7 # This program is free software; you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation; either version 2 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with this program; if not, see <http://www.gnu.org/licenses/>.
19
20 r"""Class that manages YAML data files for translation
21 """
22
23 from __future__ import absolute_import
24 from __future__ import unicode_literals
25
26 import uuid
27 from collections import OrderedDict
28
29 import six
30 import yaml
31 import yaml.constructor
32
33 from translate.storage import base
34
35
36 class OrderedDictYAMLLoader(yaml.SafeLoader):
37 """
38 A YAML loader that loads mappings into ordered dictionaries.
39 """
40
41 def __init__(self, *args, **kwargs):
42 yaml.SafeLoader.__init__(self, *args, **kwargs)
43
44 self.add_constructor(u'tag:yaml.org,2002:map', type(self).construct_yaml_map)
45 self.add_constructor(u'tag:yaml.org,2002:omap', type(self).construct_yaml_map)
46
47 def construct_yaml_map(self, node):
48 data = OrderedDict()
49 yield data
50 value = self.construct_mapping(node)
51 data.update(value)
52
53 def construct_mapping(self, node, deep=False):
54 if isinstance(node, yaml.MappingNode):
55 self.flatten_mapping(node)
56 else:
57 raise yaml.constructor.ConstructorError(
58 None, None,
59 'expected a mapping node, but found %s' % node.id, node.start_mark
60 )
61
62 mapping = OrderedDict()
63 for key_node, value_node in node.value:
64 key = self.construct_object(key_node, deep=deep)
65 try:
66 hash(key)
67 except TypeError as exc:
68 raise yaml.constructor.ConstructorError(
69 'while constructing a mapping',
70 node.start_mark,
71 'found unacceptable key (%s)' % exc, key_node.start_mark
72 )
73 value = self.construct_object(value_node, deep=deep)
74 mapping[key] = value
75 return mapping
76
77
78 class UnsortableList(list):
79 def sort(self, *args, **kwargs):
80 pass
81
82
83 class UnsortableOrderedDict(OrderedDict):
84 def items(self, *args, **kwargs):
85 return UnsortableList(OrderedDict.items(self, *args, **kwargs))
86
87
88 class YAMLDumper(yaml.SafeDumper):
89 def represent_unsorted(self, data):
90 return self.represent_dict(data.items())
91
92
93 YAMLDumper.add_representer(UnsortableOrderedDict, YAMLDumper.represent_unsorted)
94
95
96 class YAMLUnit(base.TranslationUnit):
97 """A YAML entry"""
98
99 def __init__(self, source=None, **kwargs):
100 self._id = None
101 if source:
102 self.source = source
103 super(YAMLUnit, self).__init__(source)
104
105 def getsource(self):
106 return self.target
107
108 def setsource(self, source):
109 self.target = source
110 source = property(getsource, setsource)
111
112 def setid(self, value):
113 self._id = value
114
115 def getid(self):
116 # Ensure we have ID (for serialization)
117 if self._id is None:
118 self._id = str(uuid.uuid4())
119 return self._id
120
121 def getlocations(self):
122 return [self.getid()]
123
124
125 class YAMLFile(base.TranslationStore):
126 """A YAML file"""
127
128 UnitClass = YAMLUnit
129
130 def __init__(self, inputfile=None, **kwargs):
131 """construct a YAML file, optionally reading in from inputfile."""
132 super(YAMLFile, self).__init__(**kwargs)
133 self.filename = ''
134 self._file = u''
135 if inputfile is not None:
136 self.parse(inputfile)
137
138 def get_root_node(self, node):
139 """Returns root node for serialize"""
140 return node
141
142 def serialize(self, out):
143 def nested_set(target, path, value):
144 if len(path) > 1:
145 if len(path) == 2 and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():
146 if path[0] not in target:
147 target[path[0]] = []
148 target[path[0]].append(value)
149 else:
150 if path[0] not in target:
151 target[path[0]] = UnsortableOrderedDict()
152 nested_set(target[path[0]], path[1:], value)
153 else:
154 target[path[0]] = value
155
156 units = UnsortableOrderedDict()
157 for unit in self.unit_iter():
158 nested_set(units, unit.getid().split('->'), unit.target)
159 out.write(yaml.dump_all(
160 [self.get_root_node(units)],
161 Dumper=YAMLDumper,
162 default_flow_style=False, encoding='utf-8', allow_unicode=True
163 ))
164
165 def _flatten(self, data, prev=""):
166 """Flatten YAML dictionary.
167 """
168 if isinstance(data, dict):
169 for k, v in six.iteritems(data):
170 if not isinstance(k, six.string_types):
171 raise base.ParseError(
172 'Key not string: {0}/{1} ({2})'.format(prev, k, type(k))
173 )
174
175 for x in self._flatten(v, '->'.join((prev, k)) if prev else k):
176 yield x
177 else:
178 if isinstance(data, six.string_types):
179 yield (prev, data)
180 elif isinstance(data, bool):
181 yield (prev, str(data))
182 elif isinstance(data, list):
183 for k, v in enumerate(data):
184 key = '[{0}]'.format(k)
185 yield ('->'.join((prev, key)), six.text_type(v))
186 elif data is None:
187 pass
188 else:
189 raise ValueError("We don't handle these values:\n"
190 "Type: %s\n"
191 "Data: %s\n"
192 "Previous: %s" % (type(data), data, prev))
193
194 def preprocess(self, data):
195 """Preprocess hook for child formats"""
196 return data
197
198 def parse(self, input):
199 """parse the given file or file source string"""
200 if hasattr(input, 'name'):
201 self.filename = input.name
202 elif not getattr(self, 'filename', ''):
203 self.filename = ''
204 if hasattr(input, "read"):
205 src = input.read()
206 input.close()
207 input = src
208 if isinstance(input, bytes):
209 input = input.decode('utf-8')
210 try:
211 self._file = yaml.load(input, OrderedDictYAMLLoader)
212 except yaml.YAMLError as e:
213 message = e.problem if hasattr(e, 'problem') else e.message
214 if hasattr(e, 'problem_mark'):
215 message += ' {0}'.format(e.problem_mark)
216 raise base.ParseError(message)
217
218 self._file = self.preprocess(self._file)
219
220 for k, data in self._flatten(self._file):
221 unit = self.UnitClass(data)
222 unit.setid(k)
223 self.addunit(unit)
224
225
226 class RubyYAMLFile(YAMLFile):
227 """Ruby YAML file, it has language code as first node."""
228
229 def preprocess(self, data):
230 if isinstance(data, OrderedDict) and len(data) == 1:
231 lang = list(data.keys())[0]
232 self.settargetlanguage(lang)
233 return data[lang]
234 return data
235
236 def get_root_node(self, node):
237 """Returns root node for serialize"""
238 if self.targetlanguage is not None:
239 result = UnsortableOrderedDict()
240 result[self.targetlanguage] = node
241 return result
242 return node
243
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/translate/storage/yaml.py b/translate/storage/yaml.py
--- a/translate/storage/yaml.py
+++ b/translate/storage/yaml.py
@@ -147,7 +147,9 @@
target[path[0]] = []
target[path[0]].append(value)
else:
- if path[0] not in target:
+ # Add empty dict in case there is value and we
+ # expect dict
+ if path[0] not in target or not isinstance(target[path[0]], dict):
target[path[0]] = UnsortableOrderedDict()
nested_set(target[path[0]], path[1:], value)
else:
| {"golden_diff": "diff --git a/translate/storage/yaml.py b/translate/storage/yaml.py\n--- a/translate/storage/yaml.py\n+++ b/translate/storage/yaml.py\n@@ -147,7 +147,9 @@\n target[path[0]] = []\n target[path[0]].append(value)\n else:\n- if path[0] not in target:\n+ # Add empty dict in case there is value and we\n+ # expect dict\n+ if path[0] not in target or not isinstance(target[path[0]], dict):\n target[path[0]] = UnsortableOrderedDict()\n nested_set(target[path[0]], path[1:], value)\n else:\n", "issue": "YAML serialization raises an exception when a node disappears\nhttps://github.com/translate/translate/blob/e5d4d38fbcc7fb310683e7b12f9ae7deab9d7788/translate/storage/yaml.py#L142\r\n\r\n```Exception Value:\t'str' object does not support item assignment```\r\n\r\nIn my case (through weblate), the existing file has:\r\n```\r\nit:\r\n base:\r\n path: Italian path\r\n```\r\n\r\nAnd a new node is used in the source translation (`base->path->sublevel`):\r\n```\r\nen:\r\n base:\r\n path:\r\n sublevel: Now I want these nested actually\r\n```\r\n\r\nThe code in `serialize` will raise an exception on line 154 (and make the file empty). \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2016 Michal \u010ciha\u0159\n#\n# This file is part of the Translate Toolkit.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nr\"\"\"Class that manages YAML data files for translation\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport uuid\nfrom collections import OrderedDict\n\nimport six\nimport yaml\nimport yaml.constructor\n\nfrom translate.storage import base\n\n\nclass OrderedDictYAMLLoader(yaml.SafeLoader):\n \"\"\"\n A YAML loader that loads mappings into ordered dictionaries.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n yaml.SafeLoader.__init__(self, *args, **kwargs)\n\n self.add_constructor(u'tag:yaml.org,2002:map', type(self).construct_yaml_map)\n self.add_constructor(u'tag:yaml.org,2002:omap', type(self).construct_yaml_map)\n\n def construct_yaml_map(self, node):\n data = OrderedDict()\n yield data\n value = self.construct_mapping(node)\n data.update(value)\n\n def construct_mapping(self, node, deep=False):\n if isinstance(node, yaml.MappingNode):\n self.flatten_mapping(node)\n else:\n raise yaml.constructor.ConstructorError(\n None, None,\n 'expected a mapping node, but found %s' % node.id, node.start_mark\n )\n\n mapping = OrderedDict()\n for key_node, value_node in node.value:\n key = self.construct_object(key_node, deep=deep)\n try:\n hash(key)\n except TypeError as exc:\n raise yaml.constructor.ConstructorError(\n 'while constructing a mapping',\n node.start_mark,\n 'found unacceptable key (%s)' % exc, key_node.start_mark\n )\n value = self.construct_object(value_node, deep=deep)\n mapping[key] = value\n return mapping\n\n\nclass UnsortableList(list):\n def sort(self, *args, **kwargs):\n pass\n\n\nclass 
UnsortableOrderedDict(OrderedDict):\n def items(self, *args, **kwargs):\n return UnsortableList(OrderedDict.items(self, *args, **kwargs))\n\n\nclass YAMLDumper(yaml.SafeDumper):\n def represent_unsorted(self, data):\n return self.represent_dict(data.items())\n\n\nYAMLDumper.add_representer(UnsortableOrderedDict, YAMLDumper.represent_unsorted)\n\n\nclass YAMLUnit(base.TranslationUnit):\n \"\"\"A YAML entry\"\"\"\n\n def __init__(self, source=None, **kwargs):\n self._id = None\n if source:\n self.source = source\n super(YAMLUnit, self).__init__(source)\n\n def getsource(self):\n return self.target\n\n def setsource(self, source):\n self.target = source\n source = property(getsource, setsource)\n\n def setid(self, value):\n self._id = value\n\n def getid(self):\n # Ensure we have ID (for serialization)\n if self._id is None:\n self._id = str(uuid.uuid4())\n return self._id\n\n def getlocations(self):\n return [self.getid()]\n\n\nclass YAMLFile(base.TranslationStore):\n \"\"\"A YAML file\"\"\"\n\n UnitClass = YAMLUnit\n\n def __init__(self, inputfile=None, **kwargs):\n \"\"\"construct a YAML file, optionally reading in from inputfile.\"\"\"\n super(YAMLFile, self).__init__(**kwargs)\n self.filename = ''\n self._file = u''\n if inputfile is not None:\n self.parse(inputfile)\n\n def get_root_node(self, node):\n \"\"\"Returns root node for serialize\"\"\"\n return node\n\n def serialize(self, out):\n def nested_set(target, path, value):\n if len(path) > 1:\n if len(path) == 2 and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():\n if path[0] not in target:\n target[path[0]] = []\n target[path[0]].append(value)\n else:\n if path[0] not in target:\n target[path[0]] = UnsortableOrderedDict()\n nested_set(target[path[0]], path[1:], value)\n else:\n target[path[0]] = value\n\n units = UnsortableOrderedDict()\n for unit in self.unit_iter():\n nested_set(units, unit.getid().split('->'), unit.target)\n out.write(yaml.dump_all(\n [self.get_root_node(units)],\n Dumper=YAMLDumper,\n default_flow_style=False, encoding='utf-8', allow_unicode=True\n ))\n\n def _flatten(self, data, prev=\"\"):\n \"\"\"Flatten YAML dictionary.\n \"\"\"\n if isinstance(data, dict):\n for k, v in six.iteritems(data):\n if not isinstance(k, six.string_types):\n raise base.ParseError(\n 'Key not string: {0}/{1} ({2})'.format(prev, k, type(k))\n )\n\n for x in self._flatten(v, '->'.join((prev, k)) if prev else k):\n yield x\n else:\n if isinstance(data, six.string_types):\n yield (prev, data)\n elif isinstance(data, bool):\n yield (prev, str(data))\n elif isinstance(data, list):\n for k, v in enumerate(data):\n key = '[{0}]'.format(k)\n yield ('->'.join((prev, key)), six.text_type(v))\n elif data is None:\n pass\n else:\n raise ValueError(\"We don't handle these values:\\n\"\n \"Type: %s\\n\"\n \"Data: %s\\n\"\n \"Previous: %s\" % (type(data), data, prev))\n\n def preprocess(self, data):\n \"\"\"Preprocess hook for child formats\"\"\"\n return data\n\n def parse(self, input):\n \"\"\"parse the given file or file source string\"\"\"\n if hasattr(input, 'name'):\n self.filename = input.name\n elif not getattr(self, 'filename', ''):\n self.filename = ''\n if hasattr(input, \"read\"):\n src = input.read()\n input.close()\n input = src\n if isinstance(input, bytes):\n input = input.decode('utf-8')\n try:\n self._file = yaml.load(input, OrderedDictYAMLLoader)\n except yaml.YAMLError as e:\n message = e.problem if hasattr(e, 'problem') else e.message\n if hasattr(e, 'problem_mark'):\n message += ' 
{0}'.format(e.problem_mark)\n raise base.ParseError(message)\n\n self._file = self.preprocess(self._file)\n\n for k, data in self._flatten(self._file):\n unit = self.UnitClass(data)\n unit.setid(k)\n self.addunit(unit)\n\n\nclass RubyYAMLFile(YAMLFile):\n \"\"\"Ruby YAML file, it has language code as first node.\"\"\"\n\n def preprocess(self, data):\n if isinstance(data, OrderedDict) and len(data) == 1:\n lang = list(data.keys())[0]\n self.settargetlanguage(lang)\n return data[lang]\n return data\n\n def get_root_node(self, node):\n \"\"\"Returns root node for serialize\"\"\"\n if self.targetlanguage is not None:\n result = UnsortableOrderedDict()\n result[self.targetlanguage] = node\n return result\n return node\n", "path": "translate/storage/yaml.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2016 Michal \u010ciha\u0159\n#\n# This file is part of the Translate Toolkit.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nr\"\"\"Class that manages YAML data files for translation\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport uuid\nfrom collections import OrderedDict\n\nimport six\nimport yaml\nimport yaml.constructor\n\nfrom translate.storage import base\n\n\nclass OrderedDictYAMLLoader(yaml.SafeLoader):\n \"\"\"\n A YAML loader that loads mappings into ordered dictionaries.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n yaml.SafeLoader.__init__(self, *args, **kwargs)\n\n self.add_constructor(u'tag:yaml.org,2002:map', type(self).construct_yaml_map)\n self.add_constructor(u'tag:yaml.org,2002:omap', type(self).construct_yaml_map)\n\n def construct_yaml_map(self, node):\n data = OrderedDict()\n yield data\n value = self.construct_mapping(node)\n data.update(value)\n\n def construct_mapping(self, node, deep=False):\n if isinstance(node, yaml.MappingNode):\n self.flatten_mapping(node)\n else:\n raise yaml.constructor.ConstructorError(\n None, None,\n 'expected a mapping node, but found %s' % node.id, node.start_mark\n )\n\n mapping = OrderedDict()\n for key_node, value_node in node.value:\n key = self.construct_object(key_node, deep=deep)\n try:\n hash(key)\n except TypeError as exc:\n raise yaml.constructor.ConstructorError(\n 'while constructing a mapping',\n node.start_mark,\n 'found unacceptable key (%s)' % exc, key_node.start_mark\n )\n value = self.construct_object(value_node, deep=deep)\n mapping[key] = value\n return mapping\n\n\nclass UnsortableList(list):\n def sort(self, *args, **kwargs):\n pass\n\n\nclass UnsortableOrderedDict(OrderedDict):\n def items(self, *args, **kwargs):\n return UnsortableList(OrderedDict.items(self, *args, **kwargs))\n\n\nclass YAMLDumper(yaml.SafeDumper):\n def represent_unsorted(self, data):\n return self.represent_dict(data.items())\n\n\nYAMLDumper.add_representer(UnsortableOrderedDict, YAMLDumper.represent_unsorted)\n\n\nclass YAMLUnit(base.TranslationUnit):\n \"\"\"A YAML 
entry\"\"\"\n\n def __init__(self, source=None, **kwargs):\n self._id = None\n if source:\n self.source = source\n super(YAMLUnit, self).__init__(source)\n\n def getsource(self):\n return self.target\n\n def setsource(self, source):\n self.target = source\n source = property(getsource, setsource)\n\n def setid(self, value):\n self._id = value\n\n def getid(self):\n # Ensure we have ID (for serialization)\n if self._id is None:\n self._id = str(uuid.uuid4())\n return self._id\n\n def getlocations(self):\n return [self.getid()]\n\n\nclass YAMLFile(base.TranslationStore):\n \"\"\"A YAML file\"\"\"\n\n UnitClass = YAMLUnit\n\n def __init__(self, inputfile=None, **kwargs):\n \"\"\"construct a YAML file, optionally reading in from inputfile.\"\"\"\n super(YAMLFile, self).__init__(**kwargs)\n self.filename = ''\n self._file = u''\n if inputfile is not None:\n self.parse(inputfile)\n\n def get_root_node(self, node):\n \"\"\"Returns root node for serialize\"\"\"\n return node\n\n def serialize(self, out):\n def nested_set(target, path, value):\n if len(path) > 1:\n if len(path) == 2 and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():\n if path[0] not in target:\n target[path[0]] = []\n target[path[0]].append(value)\n else:\n # Add empty dict in case there is value and we\n # expect dict\n if path[0] not in target or not isinstance(target[path[0]], dict):\n target[path[0]] = UnsortableOrderedDict()\n nested_set(target[path[0]], path[1:], value)\n else:\n target[path[0]] = value\n\n units = UnsortableOrderedDict()\n for unit in self.unit_iter():\n nested_set(units, unit.getid().split('->'), unit.target)\n out.write(yaml.dump_all(\n [self.get_root_node(units)],\n Dumper=YAMLDumper,\n default_flow_style=False, encoding='utf-8', allow_unicode=True\n ))\n\n def _flatten(self, data, prev=\"\"):\n \"\"\"Flatten YAML dictionary.\n \"\"\"\n if isinstance(data, dict):\n for k, v in six.iteritems(data):\n if not isinstance(k, six.string_types):\n raise base.ParseError(\n 'Key not string: {0}/{1} ({2})'.format(prev, k, type(k))\n )\n\n for x in self._flatten(v, '->'.join((prev, k)) if prev else k):\n yield x\n else:\n if isinstance(data, six.string_types):\n yield (prev, data)\n elif isinstance(data, bool):\n yield (prev, str(data))\n elif isinstance(data, list):\n for k, v in enumerate(data):\n key = '[{0}]'.format(k)\n yield ('->'.join((prev, key)), six.text_type(v))\n elif data is None:\n pass\n else:\n raise ValueError(\"We don't handle these values:\\n\"\n \"Type: %s\\n\"\n \"Data: %s\\n\"\n \"Previous: %s\" % (type(data), data, prev))\n\n def preprocess(self, data):\n \"\"\"Preprocess hook for child formats\"\"\"\n return data\n\n def parse(self, input):\n \"\"\"parse the given file or file source string\"\"\"\n if hasattr(input, 'name'):\n self.filename = input.name\n elif not getattr(self, 'filename', ''):\n self.filename = ''\n if hasattr(input, \"read\"):\n src = input.read()\n input.close()\n input = src\n if isinstance(input, bytes):\n input = input.decode('utf-8')\n try:\n self._file = yaml.load(input, OrderedDictYAMLLoader)\n except yaml.YAMLError as e:\n message = e.problem if hasattr(e, 'problem') else e.message\n if hasattr(e, 'problem_mark'):\n message += ' {0}'.format(e.problem_mark)\n raise base.ParseError(message)\n\n self._file = self.preprocess(self._file)\n\n for k, data in self._flatten(self._file):\n unit = self.UnitClass(data)\n unit.setid(k)\n self.addunit(unit)\n\n\nclass RubyYAMLFile(YAMLFile):\n \"\"\"Ruby YAML file, it has language code as first 
node.\"\"\"\n\n def preprocess(self, data):\n if isinstance(data, OrderedDict) and len(data) == 1:\n lang = list(data.keys())[0]\n self.settargetlanguage(lang)\n return data[lang]\n return data\n\n def get_root_node(self, node):\n \"\"\"Returns root node for serialize\"\"\"\n if self.targetlanguage is not None:\n result = UnsortableOrderedDict()\n result[self.targetlanguage] = node\n return result\n return node\n", "path": "translate/storage/yaml.py"}]} | 2,810 | 150 |
gh_patches_debug_10166 | rasdani/github-patches | git_diff | numba__numba-777 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dispatcher bug
``` python
from numba import jit
@jit
def foo(a, b):
return 1
# <class 'int'> <class 'float'>
# <class 'float'> <class 'int'>
# <class 'float'> <class 'float'>
# <class 'int'> <class 'int'>
foo(1, 0.1)
foo(0.1, 1)
foo(0.1, 0.1)
foo(1, 1)
```
```
Traceback (most recent call last):
File "test_disp.py", line 15, in <module>
foo(1, 1)
File "/Users/sklam/dev/numba/numba/dispatcher.py", line 161, in _explain_ambiguous
tuple(self.overloads.keys()), args, kws)
File "/Users/sklam/dev/numba/numba/typing/templates.py", line 84, in resolve_overload
if len(args) == len(case.args):
AttributeError: 'tuple' object has no attribute 'args'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/dispatcher.py`
Content:
```
1 from __future__ import print_function, division, absolute_import
2 import contextlib
3 import functools
4 import inspect
5 import sys
6
7 from numba import _dispatcher, compiler, utils
8 from numba.typeconv.rules import default_type_manager
9 from numba import typing
10 from numba.typing.templates import resolve_overload
11 from numba import types, sigutils
12 from numba.bytecode import get_code_object
13
14
15 class _OverloadedBase(_dispatcher.Dispatcher):
16 """
17 Common base class for dispatcher Implementations.
18 """
19
20 __numba__ = "py_func"
21
22 def __init__(self, arg_count, py_func):
23 self.tm = default_type_manager
24 _dispatcher.Dispatcher.__init__(self, self.tm.get_pointer(), arg_count)
25
26 # A mapping of signatures to entry points
27 self.overloads = {}
28 # A mapping of signatures to types.Function objects
29 self._function_types = {}
30 # A mapping of signatures to compile results
31 self._compileinfos = {}
32
33 self.py_func = py_func
34 # other parts of Numba assume the old Python 2 name for code object
35 self.func_code = get_code_object(py_func)
36 # but newer python uses a different name
37 self.__code__ = self.func_code
38
39 self.doc = py_func.__doc__
40 self._compiling = False
41
42 utils.finalize(self, self._make_finalizer())
43
44 def _make_finalizer(self):
45 """
46 Return a finalizer function that will release references to
47 related compiled functions.
48 """
49 overloads = self.overloads
50 targetctx = self.targetctx
51 # Early-bind utils.shutting_down() into the function's local namespace
52 # (see issue #689)
53 def finalizer(shutting_down=utils.shutting_down):
54 # The finalizer may crash at shutdown, skip it (resources
55 # will be cleared by the process exiting, anyway).
56 if shutting_down():
57 return
58 # This function must *not* hold any reference to self:
59 # we take care to bind the necessary objects in the closure.
60 for func in overloads.values():
61 try:
62 targetctx.remove_user_function(func)
63 targetctx.remove_native_function(func)
64 except KeyError:
65 # Not a native function (object mode presumably)
66 pass
67
68 return finalizer
69
70 @property
71 def signatures(self):
72 """
73 Returns a list of compiled function signatures.
74 """
75 return list(self.overloads)
76
77 def disable_compile(self, val=True):
78 """Disable the compilation of new signatures at call time.
79 """
80 self._disable_compile(int(val))
81
82 def add_overload(self, cres):
83 args = tuple(cres.signature.args)
84 sig = [a._code for a in args]
85 self._insert(sig, cres.entry_point, cres.objectmode)
86 self.overloads[args] = cres.entry_point
87 self._compileinfos[args] = cres
88
89 # Add native function for correct typing the code generation
90 target = cres.target_context
91 cfunc = cres.entry_point
92 if cfunc in target.native_funcs:
93 target.dynamic_map_function(cfunc)
94 # Create function type for typing
95 func_name = cres.fndesc.mangled_name
96 name = "CallTemplate(%s)" % cres.fndesc.mangled_name
97 # The `key` isn't really used except for diagnosis here,
98 # so avoid keeping a reference to `cfunc`.
99 call_template = typing.make_concrete_template(
100 name, key=func_name, signatures=[cres.signature])
101 self._function_types[args] = call_template
102
103 def get_call_template(self, args, kws):
104 """
105 Get a typing.ConcreteTemplate for this dispatcher and the given *args*
106 and *kws*. This allows to resolve the return type.
107 """
108 if kws:
109 raise TypeError("kwargs not supported")
110 # Ensure an overload is available, but avoid compiler re-entrance
111 if not self.is_compiling:
112 self.compile(tuple(args))
113 return self._function_types[args]
114
115 def get_overload(self, sig):
116 args, return_type = sigutils.normalize_signature(sig)
117 return self.overloads[tuple(args)]
118
119 @contextlib.contextmanager
120 def _compile_lock(self):
121 if self._compiling:
122 raise RuntimeError("Compiler re-entrant")
123 self._compiling = True
124 try:
125 yield
126 finally:
127 self._compiling = False
128
129 @property
130 def is_compiling(self):
131 return self._compiling
132
133 def jit(self, sig, **kws):
134 """Alias of compile(sig, **kws)
135 """
136 return self.compile(sig, **kws)
137
138 def _compile_for_args(self, *args, **kws):
139 """
140 For internal use. Compile a specialized version of the function
141 for the given *args* and *kws*, and return the resulting callable.
142 """
143 assert not kws
144 sig = tuple([self.typeof_pyval(a) for a in args])
145 return self.jit(sig)
146
147 def inspect_types(self, file=None):
148 if file is None:
149 file = sys.stdout
150
151 for ver, res in utils.iteritems(self._compileinfos):
152 print("%s %s" % (self.py_func.__name__, ver), file=file)
153 print('-' * 80, file=file)
154 print(res.type_annotation, file=file)
155 print('=' * 80, file=file)
156
157 def _explain_ambiguous(self, *args, **kws):
158 assert not kws, "kwargs not handled"
159 args = tuple([self.typeof_pyval(a) for a in args])
160 resolve_overload(self.typingctx, self.py_func,
161 tuple(self.overloads.keys()), args, kws)
162
163 def __repr__(self):
164 return "%s(%s)" % (type(self).__name__, self.py_func)
165
166 def typeof_pyval(self, val):
167 """
168 Resolve the Numba type of Python value *val*.
169 This is called from numba._dispatcher as a fallback if the native code
170 cannot decide the type.
171 """
172 if isinstance(val, utils.INT_TYPES):
173 # Ensure no autoscaling of integer type, to match the
174 # typecode() function in _dispatcher.c.
175 return types.int64
176
177 tp = self.typingctx.resolve_data_type(val)
178 if tp is None:
179 tp = types.pyobject
180 return tp
181
182
183 class Overloaded(_OverloadedBase):
184 """
185 Implementation of user-facing dispatcher objects (i.e. created using
186 the @jit decorator).
187 This is an abstract base class. Subclasses should define the targetdescr
188 class attribute.
189 """
190
191 def __init__(self, py_func, locals={}, targetoptions={}):
192 """
193 Parameters
194 ----------
195 py_func: function object to be compiled
196 locals: dict, optional
197 Mapping of local variable names to Numba types. Used to override
198 the types deduced by the type inference engine.
199 targetoptions: dict, optional
200 Target-specific config options.
201 """
202 self.typingctx = self.targetdescr.typing_context
203 self.targetctx = self.targetdescr.target_context
204
205 argspec = inspect.getargspec(py_func)
206 argct = len(argspec.args)
207
208 _OverloadedBase.__init__(self, argct, py_func)
209
210 functools.update_wrapper(self, py_func)
211
212 self.targetoptions = targetoptions
213 self.locals = locals
214
215 self.typingctx.insert_overloaded(self)
216
217 def compile(self, sig, locals={}, **targetoptions):
218 with self._compile_lock():
219 locs = self.locals.copy()
220 locs.update(locals)
221
222 topt = self.targetoptions.copy()
223 topt.update(targetoptions)
224
225 flags = compiler.Flags()
226 self.targetdescr.options.parse_as_flags(flags, topt)
227
228 args, return_type = sigutils.normalize_signature(sig)
229
230 # Don't recompile if signature already exist.
231 existing = self.overloads.get(tuple(args))
232 if existing is not None:
233 return existing
234
235 cres = compiler.compile_extra(self.typingctx, self.targetctx,
236 self.py_func,
237 args=args, return_type=return_type,
238 flags=flags, locals=locs)
239
240 # Check typing error if object mode is used
241 if cres.typing_error is not None and not flags.enable_pyobject:
242 raise cres.typing_error
243
244 self.add_overload(cres)
245 return cres.entry_point
246
247
248 class LiftedLoop(_OverloadedBase):
249 """
250 Implementation of the hidden dispatcher objects used for lifted loop
251 (a lifted loop is really compiled as a separate function).
252 """
253
254 def __init__(self, bytecode, typingctx, targetctx, locals, flags):
255 self.typingctx = typingctx
256 self.targetctx = targetctx
257
258 argspec = bytecode.argspec
259 argct = len(argspec.args)
260
261 _OverloadedBase.__init__(self, argct, bytecode.func)
262
263 self.locals = locals
264 self.flags = flags
265 self.bytecode = bytecode
266
267 def compile(self, sig):
268 with self._compile_lock():
269 # FIXME this is mostly duplicated from Overloaded
270 flags = self.flags
271 args, return_type = sigutils.normalize_signature(sig)
272
273 # Don't recompile if signature already exist.
274 existing = self.overloads.get(tuple(args))
275 if existing is not None:
276 return existing.entry_point
277
278 assert not flags.enable_looplift, "Enable looplift flags is on"
279 cres = compiler.compile_bytecode(typingctx=self.typingctx,
280 targetctx=self.targetctx,
281 bc=self.bytecode,
282 args=args,
283 return_type=return_type,
284 flags=flags,
285 locals=self.locals)
286
287 # Check typing error if object mode is used
288 if cres.typing_error is not None and not flags.enable_pyobject:
289 raise cres.typing_error
290
291 self.add_overload(cres)
292 return cres.entry_point
293
294
295 # Initialize dispatcher
296 _dispatcher.init_types(dict((str(t), t._code) for t in types.number_domain))
297
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numba/dispatcher.py b/numba/dispatcher.py
--- a/numba/dispatcher.py
+++ b/numba/dispatcher.py
@@ -157,8 +157,8 @@
def _explain_ambiguous(self, *args, **kws):
assert not kws, "kwargs not handled"
args = tuple([self.typeof_pyval(a) for a in args])
- resolve_overload(self.typingctx, self.py_func,
- tuple(self.overloads.keys()), args, kws)
+ sigs = [cr.signature for cr in self._compileinfos.values()]
+ resolve_overload(self.typingctx, self.py_func, sigs, args, kws)
def __repr__(self):
return "%s(%s)" % (type(self).__name__, self.py_func)
| {"golden_diff": "diff --git a/numba/dispatcher.py b/numba/dispatcher.py\n--- a/numba/dispatcher.py\n+++ b/numba/dispatcher.py\n@@ -157,8 +157,8 @@\n def _explain_ambiguous(self, *args, **kws):\n assert not kws, \"kwargs not handled\"\n args = tuple([self.typeof_pyval(a) for a in args])\n- resolve_overload(self.typingctx, self.py_func,\n- tuple(self.overloads.keys()), args, kws)\n+ sigs = [cr.signature for cr in self._compileinfos.values()]\n+ resolve_overload(self.typingctx, self.py_func, sigs, args, kws)\n \n def __repr__(self):\n return \"%s(%s)\" % (type(self).__name__, self.py_func)\n", "issue": "Dispatcher bug \n``` python\nfrom numba import jit\n\n@jit\ndef foo(a, b):\n return 1\n\n# <class 'int'> <class 'float'>\n# <class 'float'> <class 'int'>\n# <class 'float'> <class 'float'>\n# <class 'int'> <class 'int'>\n\nfoo(1, 0.1)\nfoo(0.1, 1)\nfoo(0.1, 0.1)\nfoo(1, 1)\n```\n\n```\nTraceback (most recent call last):\n File \"test_disp.py\", line 15, in <module>\n foo(1, 1)\n File \"/Users/sklam/dev/numba/numba/dispatcher.py\", line 161, in _explain_ambiguous\n tuple(self.overloads.keys()), args, kws)\n File \"/Users/sklam/dev/numba/numba/typing/templates.py\", line 84, in resolve_overload\n if len(args) == len(case.args):\nAttributeError: 'tuple' object has no attribute 'args'\n```\n\nDispatcher bug \n``` python\nfrom numba import jit\n\n@jit\ndef foo(a, b):\n return 1\n\n# <class 'int'> <class 'float'>\n# <class 'float'> <class 'int'>\n# <class 'float'> <class 'float'>\n# <class 'int'> <class 'int'>\n\nfoo(1, 0.1)\nfoo(0.1, 1)\nfoo(0.1, 0.1)\nfoo(1, 1)\n```\n\n```\nTraceback (most recent call last):\n File \"test_disp.py\", line 15, in <module>\n foo(1, 1)\n File \"/Users/sklam/dev/numba/numba/dispatcher.py\", line 161, in _explain_ambiguous\n tuple(self.overloads.keys()), args, kws)\n File \"/Users/sklam/dev/numba/numba/typing/templates.py\", line 84, in resolve_overload\n if len(args) == len(case.args):\nAttributeError: 'tuple' object has no attribute 'args'\n```\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\nimport contextlib\nimport functools\nimport inspect\nimport sys\n\nfrom numba import _dispatcher, compiler, utils\nfrom numba.typeconv.rules import default_type_manager\nfrom numba import typing\nfrom numba.typing.templates import resolve_overload\nfrom numba import types, sigutils\nfrom numba.bytecode import get_code_object\n\n\nclass _OverloadedBase(_dispatcher.Dispatcher):\n \"\"\"\n Common base class for dispatcher Implementations.\n \"\"\"\n\n __numba__ = \"py_func\"\n\n def __init__(self, arg_count, py_func):\n self.tm = default_type_manager\n _dispatcher.Dispatcher.__init__(self, self.tm.get_pointer(), arg_count)\n\n # A mapping of signatures to entry points\n self.overloads = {}\n # A mapping of signatures to types.Function objects\n self._function_types = {}\n # A mapping of signatures to compile results\n self._compileinfos = {}\n\n self.py_func = py_func\n # other parts of Numba assume the old Python 2 name for code object\n self.func_code = get_code_object(py_func)\n # but newer python uses a different name\n self.__code__ = self.func_code\n\n self.doc = py_func.__doc__\n self._compiling = False\n\n utils.finalize(self, self._make_finalizer())\n\n def _make_finalizer(self):\n \"\"\"\n Return a finalizer function that will release references to\n related compiled functions.\n \"\"\"\n overloads = self.overloads\n targetctx = self.targetctx\n # Early-bind utils.shutting_down() into the function's local namespace\n # 
(see issue #689)\n def finalizer(shutting_down=utils.shutting_down):\n # The finalizer may crash at shutdown, skip it (resources\n # will be cleared by the process exiting, anyway).\n if shutting_down():\n return\n # This function must *not* hold any reference to self:\n # we take care to bind the necessary objects in the closure.\n for func in overloads.values():\n try:\n targetctx.remove_user_function(func)\n targetctx.remove_native_function(func)\n except KeyError:\n # Not a native function (object mode presumably)\n pass\n\n return finalizer\n\n @property\n def signatures(self):\n \"\"\"\n Returns a list of compiled function signatures.\n \"\"\"\n return list(self.overloads)\n\n def disable_compile(self, val=True):\n \"\"\"Disable the compilation of new signatures at call time.\n \"\"\"\n self._disable_compile(int(val))\n\n def add_overload(self, cres):\n args = tuple(cres.signature.args)\n sig = [a._code for a in args]\n self._insert(sig, cres.entry_point, cres.objectmode)\n self.overloads[args] = cres.entry_point\n self._compileinfos[args] = cres\n\n # Add native function for correct typing the code generation\n target = cres.target_context\n cfunc = cres.entry_point\n if cfunc in target.native_funcs:\n target.dynamic_map_function(cfunc)\n # Create function type for typing\n func_name = cres.fndesc.mangled_name\n name = \"CallTemplate(%s)\" % cres.fndesc.mangled_name\n # The `key` isn't really used except for diagnosis here,\n # so avoid keeping a reference to `cfunc`.\n call_template = typing.make_concrete_template(\n name, key=func_name, signatures=[cres.signature])\n self._function_types[args] = call_template\n\n def get_call_template(self, args, kws):\n \"\"\"\n Get a typing.ConcreteTemplate for this dispatcher and the given *args*\n and *kws*. This allows to resolve the return type.\n \"\"\"\n if kws:\n raise TypeError(\"kwargs not supported\")\n # Ensure an overload is available, but avoid compiler re-entrance\n if not self.is_compiling:\n self.compile(tuple(args))\n return self._function_types[args]\n\n def get_overload(self, sig):\n args, return_type = sigutils.normalize_signature(sig)\n return self.overloads[tuple(args)]\n\n @contextlib.contextmanager\n def _compile_lock(self):\n if self._compiling:\n raise RuntimeError(\"Compiler re-entrant\")\n self._compiling = True\n try:\n yield\n finally:\n self._compiling = False\n\n @property\n def is_compiling(self):\n return self._compiling\n\n def jit(self, sig, **kws):\n \"\"\"Alias of compile(sig, **kws)\n \"\"\"\n return self.compile(sig, **kws)\n\n def _compile_for_args(self, *args, **kws):\n \"\"\"\n For internal use. 
Compile a specialized version of the function\n for the given *args* and *kws*, and return the resulting callable.\n \"\"\"\n assert not kws\n sig = tuple([self.typeof_pyval(a) for a in args])\n return self.jit(sig)\n\n def inspect_types(self, file=None):\n if file is None:\n file = sys.stdout\n\n for ver, res in utils.iteritems(self._compileinfos):\n print(\"%s %s\" % (self.py_func.__name__, ver), file=file)\n print('-' * 80, file=file)\n print(res.type_annotation, file=file)\n print('=' * 80, file=file)\n\n def _explain_ambiguous(self, *args, **kws):\n assert not kws, \"kwargs not handled\"\n args = tuple([self.typeof_pyval(a) for a in args])\n resolve_overload(self.typingctx, self.py_func,\n tuple(self.overloads.keys()), args, kws)\n\n def __repr__(self):\n return \"%s(%s)\" % (type(self).__name__, self.py_func)\n\n def typeof_pyval(self, val):\n \"\"\"\n Resolve the Numba type of Python value *val*.\n This is called from numba._dispatcher as a fallback if the native code\n cannot decide the type.\n \"\"\"\n if isinstance(val, utils.INT_TYPES):\n # Ensure no autoscaling of integer type, to match the\n # typecode() function in _dispatcher.c.\n return types.int64\n\n tp = self.typingctx.resolve_data_type(val)\n if tp is None:\n tp = types.pyobject\n return tp\n\n\nclass Overloaded(_OverloadedBase):\n \"\"\"\n Implementation of user-facing dispatcher objects (i.e. created using\n the @jit decorator).\n This is an abstract base class. Subclasses should define the targetdescr\n class attribute.\n \"\"\"\n\n def __init__(self, py_func, locals={}, targetoptions={}):\n \"\"\"\n Parameters\n ----------\n py_func: function object to be compiled\n locals: dict, optional\n Mapping of local variable names to Numba types. Used to override\n the types deduced by the type inference engine.\n targetoptions: dict, optional\n Target-specific config options.\n \"\"\"\n self.typingctx = self.targetdescr.typing_context\n self.targetctx = self.targetdescr.target_context\n\n argspec = inspect.getargspec(py_func)\n argct = len(argspec.args)\n\n _OverloadedBase.__init__(self, argct, py_func)\n\n functools.update_wrapper(self, py_func)\n\n self.targetoptions = targetoptions\n self.locals = locals\n\n self.typingctx.insert_overloaded(self)\n\n def compile(self, sig, locals={}, **targetoptions):\n with self._compile_lock():\n locs = self.locals.copy()\n locs.update(locals)\n\n topt = self.targetoptions.copy()\n topt.update(targetoptions)\n\n flags = compiler.Flags()\n self.targetdescr.options.parse_as_flags(flags, topt)\n\n args, return_type = sigutils.normalize_signature(sig)\n\n # Don't recompile if signature already exist.\n existing = self.overloads.get(tuple(args))\n if existing is not None:\n return existing\n\n cres = compiler.compile_extra(self.typingctx, self.targetctx,\n self.py_func,\n args=args, return_type=return_type,\n flags=flags, locals=locs)\n\n # Check typing error if object mode is used\n if cres.typing_error is not None and not flags.enable_pyobject:\n raise cres.typing_error\n\n self.add_overload(cres)\n return cres.entry_point\n\n\nclass LiftedLoop(_OverloadedBase):\n \"\"\"\n Implementation of the hidden dispatcher objects used for lifted loop\n (a lifted loop is really compiled as a separate function).\n \"\"\"\n\n def __init__(self, bytecode, typingctx, targetctx, locals, flags):\n self.typingctx = typingctx\n self.targetctx = targetctx\n\n argspec = bytecode.argspec\n argct = len(argspec.args)\n\n _OverloadedBase.__init__(self, argct, bytecode.func)\n\n self.locals = locals\n self.flags = 
flags\n self.bytecode = bytecode\n\n def compile(self, sig):\n with self._compile_lock():\n # FIXME this is mostly duplicated from Overloaded\n flags = self.flags\n args, return_type = sigutils.normalize_signature(sig)\n\n # Don't recompile if signature already exist.\n existing = self.overloads.get(tuple(args))\n if existing is not None:\n return existing.entry_point\n\n assert not flags.enable_looplift, \"Enable looplift flags is on\"\n cres = compiler.compile_bytecode(typingctx=self.typingctx,\n targetctx=self.targetctx,\n bc=self.bytecode,\n args=args,\n return_type=return_type,\n flags=flags,\n locals=self.locals)\n\n # Check typing error if object mode is used\n if cres.typing_error is not None and not flags.enable_pyobject:\n raise cres.typing_error\n\n self.add_overload(cres)\n return cres.entry_point\n\n\n# Initialize dispatcher\n_dispatcher.init_types(dict((str(t), t._code) for t in types.number_domain))\n", "path": "numba/dispatcher.py"}], "after_files": [{"content": "from __future__ import print_function, division, absolute_import\nimport contextlib\nimport functools\nimport inspect\nimport sys\n\nfrom numba import _dispatcher, compiler, utils\nfrom numba.typeconv.rules import default_type_manager\nfrom numba import typing\nfrom numba.typing.templates import resolve_overload\nfrom numba import types, sigutils\nfrom numba.bytecode import get_code_object\n\n\nclass _OverloadedBase(_dispatcher.Dispatcher):\n \"\"\"\n Common base class for dispatcher Implementations.\n \"\"\"\n\n __numba__ = \"py_func\"\n\n def __init__(self, arg_count, py_func):\n self.tm = default_type_manager\n _dispatcher.Dispatcher.__init__(self, self.tm.get_pointer(), arg_count)\n\n # A mapping of signatures to entry points\n self.overloads = {}\n # A mapping of signatures to types.Function objects\n self._function_types = {}\n # A mapping of signatures to compile results\n self._compileinfos = {}\n\n self.py_func = py_func\n # other parts of Numba assume the old Python 2 name for code object\n self.func_code = get_code_object(py_func)\n # but newer python uses a different name\n self.__code__ = self.func_code\n\n self.doc = py_func.__doc__\n self._compiling = False\n\n utils.finalize(self, self._make_finalizer())\n\n def _make_finalizer(self):\n \"\"\"\n Return a finalizer function that will release references to\n related compiled functions.\n \"\"\"\n overloads = self.overloads\n targetctx = self.targetctx\n # Early-bind utils.shutting_down() into the function's local namespace\n # (see issue #689)\n def finalizer(shutting_down=utils.shutting_down):\n # The finalizer may crash at shutdown, skip it (resources\n # will be cleared by the process exiting, anyway).\n if shutting_down():\n return\n # This function must *not* hold any reference to self:\n # we take care to bind the necessary objects in the closure.\n for func in overloads.values():\n try:\n targetctx.remove_user_function(func)\n targetctx.remove_native_function(func)\n except KeyError:\n # Not a native function (object mode presumably)\n pass\n\n return finalizer\n\n @property\n def signatures(self):\n \"\"\"\n Returns a list of compiled function signatures.\n \"\"\"\n return list(self.overloads)\n\n def disable_compile(self, val=True):\n \"\"\"Disable the compilation of new signatures at call time.\n \"\"\"\n self._disable_compile(int(val))\n\n def add_overload(self, cres):\n args = tuple(cres.signature.args)\n sig = [a._code for a in args]\n self._insert(sig, cres.entry_point, cres.objectmode)\n self.overloads[args] = cres.entry_point\n 
self._compileinfos[args] = cres\n\n # Add native function for correct typing the code generation\n target = cres.target_context\n cfunc = cres.entry_point\n if cfunc in target.native_funcs:\n target.dynamic_map_function(cfunc)\n # Create function type for typing\n func_name = cres.fndesc.mangled_name\n name = \"CallTemplate(%s)\" % cres.fndesc.mangled_name\n # The `key` isn't really used except for diagnosis here,\n # so avoid keeping a reference to `cfunc`.\n call_template = typing.make_concrete_template(\n name, key=func_name, signatures=[cres.signature])\n self._function_types[args] = call_template\n\n def get_call_template(self, args, kws):\n \"\"\"\n Get a typing.ConcreteTemplate for this dispatcher and the given *args*\n and *kws*. This allows to resolve the return type.\n \"\"\"\n if kws:\n raise TypeError(\"kwargs not supported\")\n # Ensure an overload is available, but avoid compiler re-entrance\n if not self.is_compiling:\n self.compile(tuple(args))\n return self._function_types[args]\n\n def get_overload(self, sig):\n args, return_type = sigutils.normalize_signature(sig)\n return self.overloads[tuple(args)]\n\n @contextlib.contextmanager\n def _compile_lock(self):\n if self._compiling:\n raise RuntimeError(\"Compiler re-entrant\")\n self._compiling = True\n try:\n yield\n finally:\n self._compiling = False\n\n @property\n def is_compiling(self):\n return self._compiling\n\n def jit(self, sig, **kws):\n \"\"\"Alias of compile(sig, **kws)\n \"\"\"\n return self.compile(sig, **kws)\n\n def _compile_for_args(self, *args, **kws):\n \"\"\"\n For internal use. Compile a specialized version of the function\n for the given *args* and *kws*, and return the resulting callable.\n \"\"\"\n assert not kws\n sig = tuple([self.typeof_pyval(a) for a in args])\n return self.jit(sig)\n\n def inspect_types(self, file=None):\n if file is None:\n file = sys.stdout\n\n for ver, res in utils.iteritems(self._compileinfos):\n print(\"%s %s\" % (self.py_func.__name__, ver), file=file)\n print('-' * 80, file=file)\n print(res.type_annotation, file=file)\n print('=' * 80, file=file)\n\n def _explain_ambiguous(self, *args, **kws):\n assert not kws, \"kwargs not handled\"\n args = tuple([self.typeof_pyval(a) for a in args])\n sigs = [cr.signature for cr in self._compileinfos.values()]\n resolve_overload(self.typingctx, self.py_func, sigs, args, kws)\n\n def __repr__(self):\n return \"%s(%s)\" % (type(self).__name__, self.py_func)\n\n def typeof_pyval(self, val):\n \"\"\"\n Resolve the Numba type of Python value *val*.\n This is called from numba._dispatcher as a fallback if the native code\n cannot decide the type.\n \"\"\"\n if isinstance(val, utils.INT_TYPES):\n # Ensure no autoscaling of integer type, to match the\n # typecode() function in _dispatcher.c.\n return types.int64\n\n tp = self.typingctx.resolve_data_type(val)\n if tp is None:\n tp = types.pyobject\n return tp\n\n\nclass Overloaded(_OverloadedBase):\n \"\"\"\n Implementation of user-facing dispatcher objects (i.e. created using\n the @jit decorator).\n This is an abstract base class. Subclasses should define the targetdescr\n class attribute.\n \"\"\"\n\n def __init__(self, py_func, locals={}, targetoptions={}):\n \"\"\"\n Parameters\n ----------\n py_func: function object to be compiled\n locals: dict, optional\n Mapping of local variable names to Numba types. 
Used to override\n the types deduced by the type inference engine.\n targetoptions: dict, optional\n Target-specific config options.\n \"\"\"\n self.typingctx = self.targetdescr.typing_context\n self.targetctx = self.targetdescr.target_context\n\n argspec = inspect.getargspec(py_func)\n argct = len(argspec.args)\n\n _OverloadedBase.__init__(self, argct, py_func)\n\n functools.update_wrapper(self, py_func)\n\n self.targetoptions = targetoptions\n self.locals = locals\n\n self.typingctx.insert_overloaded(self)\n\n def compile(self, sig, locals={}, **targetoptions):\n with self._compile_lock():\n locs = self.locals.copy()\n locs.update(locals)\n\n topt = self.targetoptions.copy()\n topt.update(targetoptions)\n\n flags = compiler.Flags()\n self.targetdescr.options.parse_as_flags(flags, topt)\n\n args, return_type = sigutils.normalize_signature(sig)\n\n # Don't recompile if signature already exist.\n existing = self.overloads.get(tuple(args))\n if existing is not None:\n return existing\n\n cres = compiler.compile_extra(self.typingctx, self.targetctx,\n self.py_func,\n args=args, return_type=return_type,\n flags=flags, locals=locs)\n\n # Check typing error if object mode is used\n if cres.typing_error is not None and not flags.enable_pyobject:\n raise cres.typing_error\n\n self.add_overload(cres)\n return cres.entry_point\n\n\nclass LiftedLoop(_OverloadedBase):\n \"\"\"\n Implementation of the hidden dispatcher objects used for lifted loop\n (a lifted loop is really compiled as a separate function).\n \"\"\"\n\n def __init__(self, bytecode, typingctx, targetctx, locals, flags):\n self.typingctx = typingctx\n self.targetctx = targetctx\n\n argspec = bytecode.argspec\n argct = len(argspec.args)\n\n _OverloadedBase.__init__(self, argct, bytecode.func)\n\n self.locals = locals\n self.flags = flags\n self.bytecode = bytecode\n\n def compile(self, sig):\n with self._compile_lock():\n # FIXME this is mostly duplicated from Overloaded\n flags = self.flags\n args, return_type = sigutils.normalize_signature(sig)\n\n # Don't recompile if signature already exist.\n existing = self.overloads.get(tuple(args))\n if existing is not None:\n return existing.entry_point\n\n assert not flags.enable_looplift, \"Enable looplift flags is on\"\n cres = compiler.compile_bytecode(typingctx=self.typingctx,\n targetctx=self.targetctx,\n bc=self.bytecode,\n args=args,\n return_type=return_type,\n flags=flags,\n locals=self.locals)\n\n # Check typing error if object mode is used\n if cres.typing_error is not None and not flags.enable_pyobject:\n raise cres.typing_error\n\n self.add_overload(cres)\n return cres.entry_point\n\n\n# Initialize dispatcher\n_dispatcher.init_types(dict((str(t), t._code) for t in types.number_domain))\n", "path": "numba/dispatcher.py"}]} | 3,717 | 189 |
gh_patches_debug_19442 | rasdani/github-patches | git_diff | scrapy__scrapy-6098 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Re-enable deps on 3.12 when ready
- [x] bpython (requires greenlet)
- [x] uvloop
- [x] pyftpdlib
Related: https://github.com/scrapy/scrapy/pull/6083
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/contracts/__init__.py`
Content:
```
1 import re
2 import sys
3 from functools import wraps
4 from inspect import getmembers
5 from types import CoroutineType
6 from typing import AsyncGenerator, Dict
7 from unittest import TestCase
8
9 from scrapy.http import Request
10 from scrapy.utils.python import get_spec
11 from scrapy.utils.spider import iterate_spider_output
12
13
14 class Contract:
15 """Abstract class for contracts"""
16
17 request_cls = None
18
19 def __init__(self, method, *args):
20 self.testcase_pre = _create_testcase(method, f"@{self.name} pre-hook")
21 self.testcase_post = _create_testcase(method, f"@{self.name} post-hook")
22 self.args = args
23
24 def add_pre_hook(self, request, results):
25 if hasattr(self, "pre_process"):
26 cb = request.callback
27
28 @wraps(cb)
29 def wrapper(response, **cb_kwargs):
30 try:
31 results.startTest(self.testcase_pre)
32 self.pre_process(response)
33 results.stopTest(self.testcase_pre)
34 except AssertionError:
35 results.addFailure(self.testcase_pre, sys.exc_info())
36 except Exception:
37 results.addError(self.testcase_pre, sys.exc_info())
38 else:
39 results.addSuccess(self.testcase_pre)
40 finally:
41 cb_result = cb(response, **cb_kwargs)
42 if isinstance(cb_result, (AsyncGenerator, CoroutineType)):
43 raise TypeError("Contracts don't support async callbacks")
44 return list(iterate_spider_output(cb_result))
45
46 request.callback = wrapper
47
48 return request
49
50 def add_post_hook(self, request, results):
51 if hasattr(self, "post_process"):
52 cb = request.callback
53
54 @wraps(cb)
55 def wrapper(response, **cb_kwargs):
56 cb_result = cb(response, **cb_kwargs)
57 if isinstance(cb_result, (AsyncGenerator, CoroutineType)):
58 raise TypeError("Contracts don't support async callbacks")
59 output = list(iterate_spider_output(cb_result))
60 try:
61 results.startTest(self.testcase_post)
62 self.post_process(output)
63 results.stopTest(self.testcase_post)
64 except AssertionError:
65 results.addFailure(self.testcase_post, sys.exc_info())
66 except Exception:
67 results.addError(self.testcase_post, sys.exc_info())
68 else:
69 results.addSuccess(self.testcase_post)
70 finally:
71 return output
72
73 request.callback = wrapper
74
75 return request
76
77 def adjust_request_args(self, args):
78 return args
79
80
81 class ContractsManager:
82 contracts: Dict[str, Contract] = {}
83
84 def __init__(self, contracts):
85 for contract in contracts:
86 self.contracts[contract.name] = contract
87
88 def tested_methods_from_spidercls(self, spidercls):
89 is_method = re.compile(r"^\s*@", re.MULTILINE).search
90 methods = []
91 for key, value in getmembers(spidercls):
92 if callable(value) and value.__doc__ and is_method(value.__doc__):
93 methods.append(key)
94
95 return methods
96
97 def extract_contracts(self, method):
98 contracts = []
99 for line in method.__doc__.split("\n"):
100 line = line.strip()
101
102 if line.startswith("@"):
103 name, args = re.match(r"@(\w+)\s*(.*)", line).groups()
104 args = re.split(r"\s+", args)
105
106 contracts.append(self.contracts[name](method, *args))
107
108 return contracts
109
110 def from_spider(self, spider, results):
111 requests = []
112 for method in self.tested_methods_from_spidercls(type(spider)):
113 bound_method = spider.__getattribute__(method)
114 try:
115 requests.append(self.from_method(bound_method, results))
116 except Exception:
117 case = _create_testcase(bound_method, "contract")
118 results.addError(case, sys.exc_info())
119
120 return requests
121
122 def from_method(self, method, results):
123 contracts = self.extract_contracts(method)
124 if contracts:
125 request_cls = Request
126 for contract in contracts:
127 if contract.request_cls is not None:
128 request_cls = contract.request_cls
129
130 # calculate request args
131 args, kwargs = get_spec(request_cls.__init__)
132
133 # Don't filter requests to allow
134 # testing different callbacks on the same URL.
135 kwargs["dont_filter"] = True
136 kwargs["callback"] = method
137
138 for contract in contracts:
139 kwargs = contract.adjust_request_args(kwargs)
140
141 args.remove("self")
142
143 # check if all positional arguments are defined in kwargs
144 if set(args).issubset(set(kwargs)):
145 request = request_cls(**kwargs)
146
147 # execute pre and post hooks in order
148 for contract in reversed(contracts):
149 request = contract.add_pre_hook(request, results)
150 for contract in contracts:
151 request = contract.add_post_hook(request, results)
152
153 self._clean_req(request, method, results)
154 return request
155
156 def _clean_req(self, request, method, results):
157 """stop the request from returning objects and records any errors"""
158
159 cb = request.callback
160
161 @wraps(cb)
162 def cb_wrapper(response, **cb_kwargs):
163 try:
164 output = cb(response, **cb_kwargs)
165 output = list(iterate_spider_output(output))
166 except Exception:
167 case = _create_testcase(method, "callback")
168 results.addError(case, sys.exc_info())
169
170 def eb_wrapper(failure):
171 case = _create_testcase(method, "errback")
172 exc_info = failure.type, failure.value, failure.getTracebackObject()
173 results.addError(case, exc_info)
174
175 request.callback = cb_wrapper
176 request.errback = eb_wrapper
177
178
179 def _create_testcase(method, desc):
180 spider = method.__self__.name
181
182 class ContractTestCase(TestCase):
183 def __str__(_self):
184 return f"[{spider}] {method.__name__} ({desc})"
185
186 name = f"{spider}_{method.__name__}"
187 setattr(ContractTestCase, name, lambda x: x)
188 return ContractTestCase(name)
189
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py
--- a/scrapy/contracts/__init__.py
+++ b/scrapy/contracts/__init__.py
@@ -41,7 +41,9 @@
cb_result = cb(response, **cb_kwargs)
if isinstance(cb_result, (AsyncGenerator, CoroutineType)):
raise TypeError("Contracts don't support async callbacks")
- return list(iterate_spider_output(cb_result))
+ return list( # pylint: disable=return-in-finally
+ iterate_spider_output(cb_result)
+ )
request.callback = wrapper
@@ -68,7 +70,7 @@
else:
results.addSuccess(self.testcase_post)
finally:
- return output
+ return output # pylint: disable=return-in-finally
request.callback = wrapper
| {"golden_diff": "diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py\n--- a/scrapy/contracts/__init__.py\n+++ b/scrapy/contracts/__init__.py\n@@ -41,7 +41,9 @@\n cb_result = cb(response, **cb_kwargs)\n if isinstance(cb_result, (AsyncGenerator, CoroutineType)):\n raise TypeError(\"Contracts don't support async callbacks\")\n- return list(iterate_spider_output(cb_result))\n+ return list( # pylint: disable=return-in-finally\n+ iterate_spider_output(cb_result)\n+ )\n \n request.callback = wrapper\n \n@@ -68,7 +70,7 @@\n else:\n results.addSuccess(self.testcase_post)\n finally:\n- return output\n+ return output # pylint: disable=return-in-finally\n \n request.callback = wrapper\n", "issue": "Re-enable deps on 3.12 when ready\n- [x] bpython (requires greenlet)\r\n- [x] uvloop\r\n- [x] pyftpdlib\r\n\r\nRelated: https://github.com/scrapy/scrapy/pull/6083\n", "before_files": [{"content": "import re\nimport sys\nfrom functools import wraps\nfrom inspect import getmembers\nfrom types import CoroutineType\nfrom typing import AsyncGenerator, Dict\nfrom unittest import TestCase\n\nfrom scrapy.http import Request\nfrom scrapy.utils.python import get_spec\nfrom scrapy.utils.spider import iterate_spider_output\n\n\nclass Contract:\n \"\"\"Abstract class for contracts\"\"\"\n\n request_cls = None\n\n def __init__(self, method, *args):\n self.testcase_pre = _create_testcase(method, f\"@{self.name} pre-hook\")\n self.testcase_post = _create_testcase(method, f\"@{self.name} post-hook\")\n self.args = args\n\n def add_pre_hook(self, request, results):\n if hasattr(self, \"pre_process\"):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response, **cb_kwargs):\n try:\n results.startTest(self.testcase_pre)\n self.pre_process(response)\n results.stopTest(self.testcase_pre)\n except AssertionError:\n results.addFailure(self.testcase_pre, sys.exc_info())\n except Exception:\n results.addError(self.testcase_pre, sys.exc_info())\n else:\n results.addSuccess(self.testcase_pre)\n finally:\n cb_result = cb(response, **cb_kwargs)\n if isinstance(cb_result, (AsyncGenerator, CoroutineType)):\n raise TypeError(\"Contracts don't support async callbacks\")\n return list(iterate_spider_output(cb_result))\n\n request.callback = wrapper\n\n return request\n\n def add_post_hook(self, request, results):\n if hasattr(self, \"post_process\"):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response, **cb_kwargs):\n cb_result = cb(response, **cb_kwargs)\n if isinstance(cb_result, (AsyncGenerator, CoroutineType)):\n raise TypeError(\"Contracts don't support async callbacks\")\n output = list(iterate_spider_output(cb_result))\n try:\n results.startTest(self.testcase_post)\n self.post_process(output)\n results.stopTest(self.testcase_post)\n except AssertionError:\n results.addFailure(self.testcase_post, sys.exc_info())\n except Exception:\n results.addError(self.testcase_post, sys.exc_info())\n else:\n results.addSuccess(self.testcase_post)\n finally:\n return output\n\n request.callback = wrapper\n\n return request\n\n def adjust_request_args(self, args):\n return args\n\n\nclass ContractsManager:\n contracts: Dict[str, Contract] = {}\n\n def __init__(self, contracts):\n for contract in contracts:\n self.contracts[contract.name] = contract\n\n def tested_methods_from_spidercls(self, spidercls):\n is_method = re.compile(r\"^\\s*@\", re.MULTILINE).search\n methods = []\n for key, value in getmembers(spidercls):\n if callable(value) and value.__doc__ and is_method(value.__doc__):\n methods.append(key)\n\n 
return methods\n\n def extract_contracts(self, method):\n contracts = []\n for line in method.__doc__.split(\"\\n\"):\n line = line.strip()\n\n if line.startswith(\"@\"):\n name, args = re.match(r\"@(\\w+)\\s*(.*)\", line).groups()\n args = re.split(r\"\\s+\", args)\n\n contracts.append(self.contracts[name](method, *args))\n\n return contracts\n\n def from_spider(self, spider, results):\n requests = []\n for method in self.tested_methods_from_spidercls(type(spider)):\n bound_method = spider.__getattribute__(method)\n try:\n requests.append(self.from_method(bound_method, results))\n except Exception:\n case = _create_testcase(bound_method, \"contract\")\n results.addError(case, sys.exc_info())\n\n return requests\n\n def from_method(self, method, results):\n contracts = self.extract_contracts(method)\n if contracts:\n request_cls = Request\n for contract in contracts:\n if contract.request_cls is not None:\n request_cls = contract.request_cls\n\n # calculate request args\n args, kwargs = get_spec(request_cls.__init__)\n\n # Don't filter requests to allow\n # testing different callbacks on the same URL.\n kwargs[\"dont_filter\"] = True\n kwargs[\"callback\"] = method\n\n for contract in contracts:\n kwargs = contract.adjust_request_args(kwargs)\n\n args.remove(\"self\")\n\n # check if all positional arguments are defined in kwargs\n if set(args).issubset(set(kwargs)):\n request = request_cls(**kwargs)\n\n # execute pre and post hooks in order\n for contract in reversed(contracts):\n request = contract.add_pre_hook(request, results)\n for contract in contracts:\n request = contract.add_post_hook(request, results)\n\n self._clean_req(request, method, results)\n return request\n\n def _clean_req(self, request, method, results):\n \"\"\"stop the request from returning objects and records any errors\"\"\"\n\n cb = request.callback\n\n @wraps(cb)\n def cb_wrapper(response, **cb_kwargs):\n try:\n output = cb(response, **cb_kwargs)\n output = list(iterate_spider_output(output))\n except Exception:\n case = _create_testcase(method, \"callback\")\n results.addError(case, sys.exc_info())\n\n def eb_wrapper(failure):\n case = _create_testcase(method, \"errback\")\n exc_info = failure.type, failure.value, failure.getTracebackObject()\n results.addError(case, exc_info)\n\n request.callback = cb_wrapper\n request.errback = eb_wrapper\n\n\ndef _create_testcase(method, desc):\n spider = method.__self__.name\n\n class ContractTestCase(TestCase):\n def __str__(_self):\n return f\"[{spider}] {method.__name__} ({desc})\"\n\n name = f\"{spider}_{method.__name__}\"\n setattr(ContractTestCase, name, lambda x: x)\n return ContractTestCase(name)\n", "path": "scrapy/contracts/__init__.py"}], "after_files": [{"content": "import re\nimport sys\nfrom functools import wraps\nfrom inspect import getmembers\nfrom types import CoroutineType\nfrom typing import AsyncGenerator, Dict\nfrom unittest import TestCase\n\nfrom scrapy.http import Request\nfrom scrapy.utils.python import get_spec\nfrom scrapy.utils.spider import iterate_spider_output\n\n\nclass Contract:\n \"\"\"Abstract class for contracts\"\"\"\n\n request_cls = None\n\n def __init__(self, method, *args):\n self.testcase_pre = _create_testcase(method, f\"@{self.name} pre-hook\")\n self.testcase_post = _create_testcase(method, f\"@{self.name} post-hook\")\n self.args = args\n\n def add_pre_hook(self, request, results):\n if hasattr(self, \"pre_process\"):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response, **cb_kwargs):\n try:\n 
results.startTest(self.testcase_pre)\n self.pre_process(response)\n results.stopTest(self.testcase_pre)\n except AssertionError:\n results.addFailure(self.testcase_pre, sys.exc_info())\n except Exception:\n results.addError(self.testcase_pre, sys.exc_info())\n else:\n results.addSuccess(self.testcase_pre)\n finally:\n cb_result = cb(response, **cb_kwargs)\n if isinstance(cb_result, (AsyncGenerator, CoroutineType)):\n raise TypeError(\"Contracts don't support async callbacks\")\n return list( # pylint: disable=return-in-finally\n iterate_spider_output(cb_result)\n )\n\n request.callback = wrapper\n\n return request\n\n def add_post_hook(self, request, results):\n if hasattr(self, \"post_process\"):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response, **cb_kwargs):\n cb_result = cb(response, **cb_kwargs)\n if isinstance(cb_result, (AsyncGenerator, CoroutineType)):\n raise TypeError(\"Contracts don't support async callbacks\")\n output = list(iterate_spider_output(cb_result))\n try:\n results.startTest(self.testcase_post)\n self.post_process(output)\n results.stopTest(self.testcase_post)\n except AssertionError:\n results.addFailure(self.testcase_post, sys.exc_info())\n except Exception:\n results.addError(self.testcase_post, sys.exc_info())\n else:\n results.addSuccess(self.testcase_post)\n finally:\n return output # pylint: disable=return-in-finally\n\n request.callback = wrapper\n\n return request\n\n def adjust_request_args(self, args):\n return args\n\n\nclass ContractsManager:\n contracts: Dict[str, Contract] = {}\n\n def __init__(self, contracts):\n for contract in contracts:\n self.contracts[contract.name] = contract\n\n def tested_methods_from_spidercls(self, spidercls):\n is_method = re.compile(r\"^\\s*@\", re.MULTILINE).search\n methods = []\n for key, value in getmembers(spidercls):\n if callable(value) and value.__doc__ and is_method(value.__doc__):\n methods.append(key)\n\n return methods\n\n def extract_contracts(self, method):\n contracts = []\n for line in method.__doc__.split(\"\\n\"):\n line = line.strip()\n\n if line.startswith(\"@\"):\n name, args = re.match(r\"@(\\w+)\\s*(.*)\", line).groups()\n args = re.split(r\"\\s+\", args)\n\n contracts.append(self.contracts[name](method, *args))\n\n return contracts\n\n def from_spider(self, spider, results):\n requests = []\n for method in self.tested_methods_from_spidercls(type(spider)):\n bound_method = spider.__getattribute__(method)\n try:\n requests.append(self.from_method(bound_method, results))\n except Exception:\n case = _create_testcase(bound_method, \"contract\")\n results.addError(case, sys.exc_info())\n\n return requests\n\n def from_method(self, method, results):\n contracts = self.extract_contracts(method)\n if contracts:\n request_cls = Request\n for contract in contracts:\n if contract.request_cls is not None:\n request_cls = contract.request_cls\n\n # calculate request args\n args, kwargs = get_spec(request_cls.__init__)\n\n # Don't filter requests to allow\n # testing different callbacks on the same URL.\n kwargs[\"dont_filter\"] = True\n kwargs[\"callback\"] = method\n\n for contract in contracts:\n kwargs = contract.adjust_request_args(kwargs)\n\n args.remove(\"self\")\n\n # check if all positional arguments are defined in kwargs\n if set(args).issubset(set(kwargs)):\n request = request_cls(**kwargs)\n\n # execute pre and post hooks in order\n for contract in reversed(contracts):\n request = contract.add_pre_hook(request, results)\n for contract in contracts:\n request = 
contract.add_post_hook(request, results)\n\n self._clean_req(request, method, results)\n return request\n\n def _clean_req(self, request, method, results):\n \"\"\"stop the request from returning objects and records any errors\"\"\"\n\n cb = request.callback\n\n @wraps(cb)\n def cb_wrapper(response, **cb_kwargs):\n try:\n output = cb(response, **cb_kwargs)\n output = list(iterate_spider_output(output))\n except Exception:\n case = _create_testcase(method, \"callback\")\n results.addError(case, sys.exc_info())\n\n def eb_wrapper(failure):\n case = _create_testcase(method, \"errback\")\n exc_info = failure.type, failure.value, failure.getTracebackObject()\n results.addError(case, exc_info)\n\n request.callback = cb_wrapper\n request.errback = eb_wrapper\n\n\ndef _create_testcase(method, desc):\n spider = method.__self__.name\n\n class ContractTestCase(TestCase):\n def __str__(_self):\n return f\"[{spider}] {method.__name__} ({desc})\"\n\n name = f\"{spider}_{method.__name__}\"\n setattr(ContractTestCase, name, lambda x: x)\n return ContractTestCase(name)\n", "path": "scrapy/contracts/__init__.py"}]} | 2,067 | 191 |
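The scrapy/contracts patch recorded in the row above keeps the `return` statements inside the `finally` blocks and silences the corresponding pylint warning with `# pylint: disable` comments rather than restructuring the hooks. As a minimal, hypothetical sketch of what that warning is about (none of this code is from Scrapy), a `return` inside `finally` replaces any exception that is still propagating out of the `try` block:

```python
def swallow_exception() -> str:
    try:
        raise ValueError("raised in try")
    finally:
        # This return wins: the in-flight ValueError is discarded and the
        # caller never sees it, which is what pylint warns about for
        # return statements placed in a finally block.
        return "returned from finally"


print(swallow_exception())  # prints the string; no ValueError propagates
```

In the patched wrapper the hook exceptions are already caught and routed into `results`, which is presumably why suppressing the warning was considered acceptable there.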
gh_patches_debug_4154 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1216 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
full links not recognized
**Describe the bug**
Only the first part of a link with a /@ in it is being linked so https://mastodon.social/@username only auto links https://mastodon.social and strips '@username' from being a link
**To Reproduce**
here is an example https://bookwyrm.social/user/wakest/comment/30867
**Expected behavior**
When a link is in a comment, the whole link should be turned into a link not just part of it
**Screenshots**
<img width="858" alt="image" src="https://user-images.githubusercontent.com/7890201/124171841-ae45cd80-da6e-11eb-9071-a74596df184a.png">
**Instance**
https://bookwyrm.social
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/status.py`
Content:
```
1 """ what are we here for if not for posting """
2 import re
3 from django.contrib.auth.decorators import login_required
4 from django.http import HttpResponseBadRequest
5 from django.shortcuts import get_object_or_404, redirect
6 from django.template.response import TemplateResponse
7 from django.utils.decorators import method_decorator
8 from django.views import View
9 from markdown import markdown
10
11 from bookwyrm import forms, models
12 from bookwyrm.sanitize_html import InputHtmlParser
13 from bookwyrm.settings import DOMAIN
14 from bookwyrm.utils import regex
15 from .helpers import handle_remote_webfinger
16 from .reading import edit_readthrough
17
18
19 # pylint: disable= no-self-use
20 @method_decorator(login_required, name="dispatch")
21 class CreateStatus(View):
22 """the view for *posting*"""
23
24 def get(self, request, status_type): # pylint: disable=unused-argument
25 """compose view (used for delete-and-redraft"""
26 book = get_object_or_404(models.Edition, id=request.GET.get("book"))
27 data = {"book": book}
28 return TemplateResponse(request, "compose.html", data)
29
30 def post(self, request, status_type):
31 """create status of whatever type"""
32 status_type = status_type[0].upper() + status_type[1:]
33
34 try:
35 form = getattr(forms, "%sForm" % status_type)(request.POST)
36 except AttributeError:
37 return HttpResponseBadRequest()
38 if not form.is_valid():
39 return redirect(request.headers.get("Referer", "/"))
40
41 status = form.save(commit=False)
42 if not status.sensitive and status.content_warning:
43 # the cw text field remains populated when you click "remove"
44 status.content_warning = None
45 status.save(broadcast=False)
46
47 # inspect the text for user tags
48 content = status.content
49 for (mention_text, mention_user) in find_mentions(content):
50 # add them to status mentions fk
51 status.mention_users.add(mention_user)
52
53 # turn the mention into a link
54 content = re.sub(
55 r"%s([^@]|$)" % mention_text,
56 r'<a href="%s">%s</a>\g<1>' % (mention_user.remote_id, mention_text),
57 content,
58 )
59 # add reply parent to mentions
60 if status.reply_parent:
61 status.mention_users.add(status.reply_parent.user)
62
63 # deduplicate mentions
64 status.mention_users.set(set(status.mention_users.all()))
65
66 # don't apply formatting to generated notes
67 if not isinstance(status, models.GeneratedNote) and content:
68 status.content = to_markdown(content)
69 # do apply formatting to quotes
70 if hasattr(status, "quote"):
71 status.quote = to_markdown(status.quote)
72
73 status.save(created=True)
74
75 # update a readthorugh, if needed
76 edit_readthrough(request)
77
78 return redirect("/")
79
80
81 @method_decorator(login_required, name="dispatch")
82 class DeleteStatus(View):
83 """tombstone that bad boy"""
84
85 def post(self, request, status_id):
86 """delete and tombstone a status"""
87 status = get_object_or_404(models.Status, id=status_id)
88
89 # don't let people delete other people's statuses
90 if status.user != request.user and not request.user.has_perm("moderate_post"):
91 return HttpResponseBadRequest()
92
93 # perform deletion
94 status.delete()
95 return redirect(request.headers.get("Referer", "/"))
96
97
98 @method_decorator(login_required, name="dispatch")
99 class DeleteAndRedraft(View):
100 """delete a status but let the user re-create it"""
101
102 def post(self, request, status_id):
103 """delete and tombstone a status"""
104 status = get_object_or_404(
105 models.Status.objects.select_subclasses(), id=status_id
106 )
107 if isinstance(status, (models.GeneratedNote, models.ReviewRating)):
108 return HttpResponseBadRequest()
109
110 # don't let people redraft other people's statuses
111 if status.user != request.user:
112 return HttpResponseBadRequest()
113
114 status_type = status.status_type.lower()
115 if status.reply_parent:
116 status_type = "reply"
117
118 data = {
119 "draft": status,
120 "type": status_type,
121 }
122 if hasattr(status, "book"):
123 data["book"] = status.book
124 elif status.mention_books:
125 data["book"] = status.mention_books.first()
126
127 # perform deletion
128 status.delete()
129 return TemplateResponse(request, "compose.html", data)
130
131
132 def find_mentions(content):
133 """detect @mentions in raw status content"""
134 if not content:
135 return
136 for match in re.finditer(regex.STRICT_USERNAME, content):
137 username = match.group().strip().split("@")[1:]
138 if len(username) == 1:
139 # this looks like a local user (@user), fill in the domain
140 username.append(DOMAIN)
141 username = "@".join(username)
142
143 mention_user = handle_remote_webfinger(username)
144 if not mention_user:
145 # we can ignore users we don't know about
146 continue
147 yield (match.group(), mention_user)
148
149
150 def format_links(content):
151 """detect and format links"""
152 return re.sub(
153 r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % regex.DOMAIN,
154 r'\g<1><a href="\g<2>">\g<3></a>',
155 content,
156 )
157
158
159 def to_markdown(content):
160 """catch links and convert to markdown"""
161 content = markdown(content)
162 content = format_links(content)
163 # sanitize resulting html
164 sanitizer = InputHtmlParser()
165 sanitizer.feed(content)
166 return sanitizer.get_output()
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py
--- a/bookwyrm/views/status.py
+++ b/bookwyrm/views/status.py
@@ -150,7 +150,7 @@
def format_links(content):
"""detect and format links"""
return re.sub(
- r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % regex.DOMAIN,
+ r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,@#])*))' % regex.DOMAIN,
r'\g<1><a href="\g<2>">\g<3></a>',
content,
)
| {"golden_diff": "diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py\n--- a/bookwyrm/views/status.py\n+++ b/bookwyrm/views/status.py\n@@ -150,7 +150,7 @@\n def format_links(content):\n \"\"\"detect and format links\"\"\"\n return re.sub(\n- r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,])*))' % regex.DOMAIN,\n+ r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,@#])*))' % regex.DOMAIN,\n r'\\g<1><a href=\"\\g<2>\">\\g<3></a>',\n content,\n )\n", "issue": "full links not recognized\n**Describe the bug**\r\nOnly the first part of a link with a /@ in it is being linked so https://mastodon.social/@username only auto links https://mastodon.social and strips '@username' from being a link\r\n\r\n**To Reproduce**\r\nhere is an example https://bookwyrm.social/user/wakest/comment/30867\r\n\r\n**Expected behavior**\r\nWhen a link is in a comment, the whole link should be turned into a link not just part of it\r\n\r\n**Screenshots**\r\n<img width=\"858\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7890201/124171841-ae45cd80-da6e-11eb-9071-a74596df184a.png\">\r\n\r\n**Instance**\r\nhttps://bookwyrm.social\n", "before_files": [{"content": "\"\"\" what are we here for if not for posting \"\"\"\nimport re\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\nfrom markdown import markdown\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.sanitize_html import InputHtmlParser\nfrom bookwyrm.settings import DOMAIN\nfrom bookwyrm.utils import regex\nfrom .helpers import handle_remote_webfinger\nfrom .reading import edit_readthrough\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass CreateStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_type): # pylint: disable=unused-argument\n \"\"\"compose view (used for delete-and-redraft\"\"\"\n book = get_object_or_404(models.Edition, id=request.GET.get(\"book\"))\n data = {\"book\": book}\n return TemplateResponse(request, \"compose.html\", data)\n\n def post(self, request, status_type):\n \"\"\"create status of whatever type\"\"\"\n status_type = status_type[0].upper() + status_type[1:]\n\n try:\n form = getattr(forms, \"%sForm\" % status_type)(request.POST)\n except AttributeError:\n return HttpResponseBadRequest()\n if not form.is_valid():\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n status = form.save(commit=False)\n if not status.sensitive and status.content_warning:\n # the cw text field remains populated when you click \"remove\"\n status.content_warning = None\n status.save(broadcast=False)\n\n # inspect the text for user tags\n content = status.content\n for (mention_text, mention_user) in find_mentions(content):\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n\n # turn the mention into a link\n content = re.sub(\n r\"%s([^@]|$)\" % mention_text,\n r'<a href=\"%s\">%s</a>\\g<1>' % (mention_user.remote_id, mention_text),\n content,\n )\n # add reply parent to mentions\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n\n # don't apply formatting to generated notes\n if not isinstance(status, 
models.GeneratedNote) and content:\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, \"quote\"):\n status.quote = to_markdown(status.quote)\n\n status.save(created=True)\n\n # update a readthorugh, if needed\n edit_readthrough(request)\n\n return redirect(\"/\")\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteStatus(View):\n \"\"\"tombstone that bad boy\"\"\"\n\n def post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(models.Status, id=status_id)\n\n # don't let people delete other people's statuses\n if status.user != request.user and not request.user.has_perm(\"moderate_post\"):\n return HttpResponseBadRequest()\n\n # perform deletion\n status.delete()\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteAndRedraft(View):\n \"\"\"delete a status but let the user re-create it\"\"\"\n\n def post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=status_id\n )\n if isinstance(status, (models.GeneratedNote, models.ReviewRating)):\n return HttpResponseBadRequest()\n\n # don't let people redraft other people's statuses\n if status.user != request.user:\n return HttpResponseBadRequest()\n\n status_type = status.status_type.lower()\n if status.reply_parent:\n status_type = \"reply\"\n\n data = {\n \"draft\": status,\n \"type\": status_type,\n }\n if hasattr(status, \"book\"):\n data[\"book\"] = status.book\n elif status.mention_books:\n data[\"book\"] = status.mention_books.first()\n\n # perform deletion\n status.delete()\n return TemplateResponse(request, \"compose.html\", data)\n\n\ndef find_mentions(content):\n \"\"\"detect @mentions in raw status content\"\"\"\n if not content:\n return\n for match in re.finditer(regex.STRICT_USERNAME, content):\n username = match.group().strip().split(\"@\")[1:]\n if len(username) == 1:\n # this looks like a local user (@user), fill in the domain\n username.append(DOMAIN)\n username = \"@\".join(username)\n\n mention_user = handle_remote_webfinger(username)\n if not mention_user:\n # we can ignore users we don't know about\n continue\n yield (match.group(), mention_user)\n\n\ndef format_links(content):\n \"\"\"detect and format links\"\"\"\n return re.sub(\n r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,])*))' % regex.DOMAIN,\n r'\\g<1><a href=\"\\g<2>\">\\g<3></a>',\n content,\n )\n\n\ndef to_markdown(content):\n \"\"\"catch links and convert to markdown\"\"\"\n content = markdown(content)\n content = format_links(content)\n # sanitize resulting html\n sanitizer = InputHtmlParser()\n sanitizer.feed(content)\n return sanitizer.get_output()\n", "path": "bookwyrm/views/status.py"}], "after_files": [{"content": "\"\"\" what are we here for if not for posting \"\"\"\nimport re\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\nfrom markdown import markdown\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.sanitize_html import InputHtmlParser\nfrom bookwyrm.settings import DOMAIN\nfrom bookwyrm.utils import regex\nfrom .helpers import handle_remote_webfinger\nfrom .reading import 
edit_readthrough\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass CreateStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_type): # pylint: disable=unused-argument\n \"\"\"compose view (used for delete-and-redraft\"\"\"\n book = get_object_or_404(models.Edition, id=request.GET.get(\"book\"))\n data = {\"book\": book}\n return TemplateResponse(request, \"compose.html\", data)\n\n def post(self, request, status_type):\n \"\"\"create status of whatever type\"\"\"\n status_type = status_type[0].upper() + status_type[1:]\n\n try:\n form = getattr(forms, \"%sForm\" % status_type)(request.POST)\n except AttributeError:\n return HttpResponseBadRequest()\n if not form.is_valid():\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n status = form.save(commit=False)\n if not status.sensitive and status.content_warning:\n # the cw text field remains populated when you click \"remove\"\n status.content_warning = None\n status.save(broadcast=False)\n\n # inspect the text for user tags\n content = status.content\n for (mention_text, mention_user) in find_mentions(content):\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n\n # turn the mention into a link\n content = re.sub(\n r\"%s([^@]|$)\" % mention_text,\n r'<a href=\"%s\">%s</a>\\g<1>' % (mention_user.remote_id, mention_text),\n content,\n )\n # add reply parent to mentions\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n\n # don't apply formatting to generated notes\n if not isinstance(status, models.GeneratedNote) and content:\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, \"quote\"):\n status.quote = to_markdown(status.quote)\n\n status.save(created=True)\n\n # update a readthorugh, if needed\n edit_readthrough(request)\n\n return redirect(\"/\")\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteStatus(View):\n \"\"\"tombstone that bad boy\"\"\"\n\n def post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(models.Status, id=status_id)\n\n # don't let people delete other people's statuses\n if status.user != request.user and not request.user.has_perm(\"moderate_post\"):\n return HttpResponseBadRequest()\n\n # perform deletion\n status.delete()\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteAndRedraft(View):\n \"\"\"delete a status but let the user re-create it\"\"\"\n\n def post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=status_id\n )\n if isinstance(status, (models.GeneratedNote, models.ReviewRating)):\n return HttpResponseBadRequest()\n\n # don't let people redraft other people's statuses\n if status.user != request.user:\n return HttpResponseBadRequest()\n\n status_type = status.status_type.lower()\n if status.reply_parent:\n status_type = \"reply\"\n\n data = {\n \"draft\": status,\n \"type\": status_type,\n }\n if hasattr(status, \"book\"):\n data[\"book\"] = status.book\n elif status.mention_books:\n data[\"book\"] = status.mention_books.first()\n\n # perform deletion\n status.delete()\n return TemplateResponse(request, \"compose.html\", data)\n\n\ndef find_mentions(content):\n \"\"\"detect @mentions in 
raw status content\"\"\"\n if not content:\n return\n for match in re.finditer(regex.STRICT_USERNAME, content):\n username = match.group().strip().split(\"@\")[1:]\n if len(username) == 1:\n # this looks like a local user (@user), fill in the domain\n username.append(DOMAIN)\n username = \"@\".join(username)\n\n mention_user = handle_remote_webfinger(username)\n if not mention_user:\n # we can ignore users we don't know about\n continue\n yield (match.group(), mention_user)\n\n\ndef format_links(content):\n \"\"\"detect and format links\"\"\"\n return re.sub(\n r'([^(href=\")]|^|\\()(https?:\\/\\/(%s([\\w\\.\\-_\\/+&\\?=:;,@#])*))' % regex.DOMAIN,\n r'\\g<1><a href=\"\\g<2>\">\\g<3></a>',\n content,\n )\n\n\ndef to_markdown(content):\n \"\"\"catch links and convert to markdown\"\"\"\n content = markdown(content)\n content = format_links(content)\n # sanitize resulting html\n sanitizer = InputHtmlParser()\n sanitizer.feed(content)\n return sanitizer.get_output()\n", "path": "bookwyrm/views/status.py"}]} | 2,068 | 169 |
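The one-character-class change in the bookwyrm row above can be checked in isolation. The snippet below is a standalone sketch that uses a simplified stand-in for bookwyrm's `regex.DOMAIN` (the real domain pattern is not reproduced here); it only demonstrates how allowing `@` and `#` in the trailing character class lets the full profile URL be captured instead of stopping at the host:

```python
import re

url = "see https://mastodon.social/@username for details"
domain = r"[\w.-]+"  # simplified stand-in for bookwyrm's regex.DOMAIN

old_link = re.compile(
    r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,])*))' % domain
)
new_link = re.compile(
    r'([^(href=")]|^|\()(https?:\/\/(%s([\w\.\-_\/+&\?=:;,@#])*))' % domain
)

# The old character class stops at '@', so only the scheme and host match.
print(old_link.search(url).group(2))  # https://mastodon.social/
# With '@' and '#' allowed, the handle part of the URL is captured as well.
print(new_link.search(url).group(2))  # https://mastodon.social/@username
```

Fragment identifiers such as `#section` are covered by the same change, which is presumably why `#` was added alongside `@`.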
gh_patches_debug_4651 | rasdani/github-patches | git_diff | svthalia__concrexit-1782 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Shift product sales are not calculated correctly
### Describe the bug
For some reason, the product sales for shifts in the sales app are not calculated properly:
### How to reproduce
1. Check staging.thalia.nu, shift 1
2.
<img width="453" alt="image" src="https://user-images.githubusercontent.com/7915741/123234193-06af2500-d4db-11eb-99b9-a8be74602c1a.png">
3. It should be waaaaay more as there are 200+ orders
### Expected behaviour
Correct calculation
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/sales/models/shift.py`
Content:
```
1 from django.core.exceptions import ValidationError
2 from django.db import models
3 from django.db.models import (
4 Sum,
5 Q,
6 Count,
7 )
8 from django.db.models.expressions import RawSQL
9 from django.utils import timezone
10 from django.utils.translation import gettext_lazy as _
11 from queryable_properties.managers import QueryablePropertiesManager
12 from queryable_properties.properties import (
13 RangeCheckProperty,
14 queryable_property,
15 )
16
17 from activemembers.models import MemberGroup
18 from sales.models.product import ProductList
19
20
21 class Shift(models.Model):
22 class Meta:
23 permissions = [
24 ("override_manager", _("Can access all shifts as manager")),
25 ]
26
27 objects = QueryablePropertiesManager()
28
29 start = models.DateTimeField(verbose_name=_("start"), blank=False, null=False,)
30 end = models.DateTimeField(
31 verbose_name=_("end"),
32 blank=False,
33 null=False,
34 help_text=_(
35 "The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift."
36 ),
37 )
38
39 title = models.CharField(
40 verbose_name=_("title"), blank=True, null=True, max_length=100
41 )
42
43 product_list = models.ForeignKey(
44 ProductList,
45 verbose_name=_("product list"),
46 blank=False,
47 null=False,
48 on_delete=models.PROTECT,
49 )
50
51 managers = models.ManyToManyField(
52 MemberGroup, verbose_name=_("managers"), related_name="manager_shifts"
53 )
54
55 locked = models.BooleanField(
56 verbose_name=_("locked"),
57 blank=False,
58 null=False,
59 default=False,
60 help_text=_(
61 "Prevent orders being changed or created for this shift. This will also clean up all unpaid orders in this shift."
62 ),
63 )
64
65 def clean(self):
66 super().clean()
67 errors = {}
68
69 if self.orders.filter(created_at__lt=self.start):
70 errors.update(
71 {
72 "start": _(
73 "There are already orders created in this shift before this start time."
74 )
75 }
76 )
77
78 if self.end and self.start and self.end <= self.start:
79 errors.update({"end": _("End cannot be before start.")})
80
81 if errors:
82 raise ValidationError(errors)
83
84 def save(
85 self, force_insert=False, force_update=False, using=None, update_fields=None
86 ):
87 if self.locked:
88 self.orders.filter(
89 (Q(payment__isnull=True) & Q(total_amount__gt=0))
90 | Q(order_items__isnull=True)
91 ).delete()
92
93 return super(Shift, self).save(force_insert, force_update, using, update_fields)
94
95 active = RangeCheckProperty("start", "end", timezone.now)
96
97 @queryable_property(annotation_based=True)
98 @classmethod
99 def total_revenue(cls):
100 return RawSQL(
101 """(SELECT CAST(COALESCE(SUM("__orders"."total__"), 0) AS NUMERIC) AS "shift_revenue__"
102 FROM (
103 SELECT "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount", "sales_order"."payment_id", CAST(SUM("sales_orderitem"."total") AS NUMERIC) AS "subtotal__", CAST((SUM("sales_orderitem"."total") - COALESCE("sales_order"."discount", 0)) AS NUMERIC) AS "total__", SUM("sales_orderitem"."amount") AS "num_items__"
104 FROM "sales_order" LEFT JOIN "sales_orderitem" ON "sales_orderitem"."order_id" = "sales_order"."id"
105 GROUP BY "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount"
106 ) AS "__orders"
107 WHERE "__orders"."shift_id"="sales_shift"."id"
108 )""",
109 [],
110 )
111
112 @queryable_property(annotation_based=True)
113 @classmethod
114 def total_revenue_paid(cls):
115 return RawSQL(
116 """(SELECT CAST(COALESCE(SUM("__orders"."total__"), 0) AS NUMERIC) AS "shift_revenue__"
117 FROM (
118 SELECT "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount", "sales_order"."payment_id", CAST(SUM("sales_orderitem"."total") AS NUMERIC) AS "subtotal__", CAST((SUM("sales_orderitem"."total") - COALESCE("sales_order"."discount", 0)) AS NUMERIC) AS "total__", SUM("sales_orderitem"."amount") AS "num_items__"
119 FROM "sales_order" LEFT JOIN "sales_orderitem" ON "sales_orderitem"."order_id" = "sales_order"."id"
120 GROUP BY "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount"
121 ) AS "__orders"
122 WHERE "__orders"."shift_id"="sales_shift"."id"
123 AND ("__orders"."payment_id" IS NOT NULL OR ("__orders"."payment_id" IS NULL AND "__orders"."total__"=0))
124 )""",
125 [],
126 )
127
128 @queryable_property(annotation_based=True)
129 @classmethod
130 def num_orders(cls):
131 return Count("orders")
132
133 @queryable_property(annotation_based=True)
134 @classmethod
135 def num_orders_paid(cls):
136 return RawSQL(
137 """(SELECT COUNT(*) AS "num_orders__"
138 FROM (
139 SELECT "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount", "sales_order"."payment_id", CAST(SUM("sales_orderitem"."total") AS NUMERIC) AS "subtotal__", CAST((SUM("sales_orderitem"."total") - COALESCE("sales_order"."discount", 0)) AS NUMERIC) AS "total__", SUM("sales_orderitem"."amount") AS "num_items__"
140 FROM "sales_order" LEFT JOIN "sales_orderitem" ON "sales_orderitem"."order_id" = "sales_order"."id"
141 GROUP BY "sales_order"."id", "sales_order"."shift_id", "sales_order"."discount"
142 ) AS "__orders"
143 WHERE "__orders"."shift_id"="sales_shift"."id"
144 AND ("__orders"."payment_id" IS NOT NULL OR ("__orders"."payment_id" IS NULL AND "__orders"."total__"=0))
145 )""",
146 [],
147 )
148
149 @property
150 def product_sales(self):
151 qs = (
152 self.orders.exclude(order_items__isnull=True)
153 .values("order_items__product")
154 .distinct()
155 .annotate(sold=Sum("order_items__amount"))
156 )
157 return {
158 item[0]: item[1]
159 for item in qs.values_list("order_items__product__product__name", "sold")
160 }
161
162 def __str__(self):
163 if self.title and self.title != "":
164 return f"Shift {self.pk} - {self.title}"
165 return f"Shift {self.pk}"
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/sales/models/shift.py b/website/sales/models/shift.py
--- a/website/sales/models/shift.py
+++ b/website/sales/models/shift.py
@@ -151,8 +151,8 @@
qs = (
self.orders.exclude(order_items__isnull=True)
.values("order_items__product")
- .distinct()
.annotate(sold=Sum("order_items__amount"))
+ .order_by()
)
return {
item[0]: item[1]
| {"golden_diff": "diff --git a/website/sales/models/shift.py b/website/sales/models/shift.py\n--- a/website/sales/models/shift.py\n+++ b/website/sales/models/shift.py\n@@ -151,8 +151,8 @@\n qs = (\n self.orders.exclude(order_items__isnull=True)\n .values(\"order_items__product\")\n- .distinct()\n .annotate(sold=Sum(\"order_items__amount\"))\n+ .order_by()\n )\n return {\n item[0]: item[1]\n", "issue": "Shift product sales are not calculated correctly\n### Describe the bug\r\nFor some reason, the product sales for shifts in the sales app are not calculated properly:\r\n\r\n### How to reproduce\r\n1. Check staging.thalia.nu, shift 1\r\n2. \r\n<img width=\"453\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7915741/123234193-06af2500-d4db-11eb-99b9-a8be74602c1a.png\">\r\n3. It should be waaaaay more as there are 200+ orders\r\n\r\n### Expected behaviour\r\nCorrect calculation\r\n\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import (\n Sum,\n Q,\n Count,\n)\nfrom django.db.models.expressions import RawSQL\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import (\n RangeCheckProperty,\n queryable_property,\n)\n\nfrom activemembers.models import MemberGroup\nfrom sales.models.product import ProductList\n\n\nclass Shift(models.Model):\n class Meta:\n permissions = [\n (\"override_manager\", _(\"Can access all shifts as manager\")),\n ]\n\n objects = QueryablePropertiesManager()\n\n start = models.DateTimeField(verbose_name=_(\"start\"), blank=False, null=False,)\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift.\"\n ),\n )\n\n title = models.CharField(\n verbose_name=_(\"title\"), blank=True, null=True, max_length=100\n )\n\n product_list = models.ForeignKey(\n ProductList,\n verbose_name=_(\"product list\"),\n blank=False,\n null=False,\n on_delete=models.PROTECT,\n )\n\n managers = models.ManyToManyField(\n MemberGroup, verbose_name=_(\"managers\"), related_name=\"manager_shifts\"\n )\n\n locked = models.BooleanField(\n verbose_name=_(\"locked\"),\n blank=False,\n null=False,\n default=False,\n help_text=_(\n \"Prevent orders being changed or created for this shift. 
This will also clean up all unpaid orders in this shift.\"\n ),\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.orders.filter(created_at__lt=self.start):\n errors.update(\n {\n \"start\": _(\n \"There are already orders created in this shift before this start time.\"\n )\n }\n )\n\n if self.end and self.start and self.end <= self.start:\n errors.update({\"end\": _(\"End cannot be before start.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.locked:\n self.orders.filter(\n (Q(payment__isnull=True) & Q(total_amount__gt=0))\n | Q(order_items__isnull=True)\n ).delete()\n\n return super(Shift, self).save(force_insert, force_update, using, update_fields)\n\n active = RangeCheckProperty(\"start\", \"end\", timezone.now)\n\n @queryable_property(annotation_based=True)\n @classmethod\n def total_revenue(cls):\n return RawSQL(\n \"\"\"(SELECT CAST(COALESCE(SUM(\"__orders\".\"total__\"), 0) AS NUMERIC) AS \"shift_revenue__\"\n FROM (\n SELECT \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\", \"sales_order\".\"payment_id\", CAST(SUM(\"sales_orderitem\".\"total\") AS NUMERIC) AS \"subtotal__\", CAST((SUM(\"sales_orderitem\".\"total\") - COALESCE(\"sales_order\".\"discount\", 0)) AS NUMERIC) AS \"total__\", SUM(\"sales_orderitem\".\"amount\") AS \"num_items__\"\n FROM \"sales_order\" LEFT JOIN \"sales_orderitem\" ON \"sales_orderitem\".\"order_id\" = \"sales_order\".\"id\"\n GROUP BY \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\"\n ) AS \"__orders\"\n WHERE \"__orders\".\"shift_id\"=\"sales_shift\".\"id\"\n )\"\"\",\n [],\n )\n\n @queryable_property(annotation_based=True)\n @classmethod\n def total_revenue_paid(cls):\n return RawSQL(\n \"\"\"(SELECT CAST(COALESCE(SUM(\"__orders\".\"total__\"), 0) AS NUMERIC) AS \"shift_revenue__\"\n FROM (\n SELECT \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\", \"sales_order\".\"payment_id\", CAST(SUM(\"sales_orderitem\".\"total\") AS NUMERIC) AS \"subtotal__\", CAST((SUM(\"sales_orderitem\".\"total\") - COALESCE(\"sales_order\".\"discount\", 0)) AS NUMERIC) AS \"total__\", SUM(\"sales_orderitem\".\"amount\") AS \"num_items__\"\n FROM \"sales_order\" LEFT JOIN \"sales_orderitem\" ON \"sales_orderitem\".\"order_id\" = \"sales_order\".\"id\"\n GROUP BY \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\"\n ) AS \"__orders\"\n WHERE \"__orders\".\"shift_id\"=\"sales_shift\".\"id\"\n AND (\"__orders\".\"payment_id\" IS NOT NULL OR (\"__orders\".\"payment_id\" IS NULL AND \"__orders\".\"total__\"=0))\n )\"\"\",\n [],\n )\n\n @queryable_property(annotation_based=True)\n @classmethod\n def num_orders(cls):\n return Count(\"orders\")\n\n @queryable_property(annotation_based=True)\n @classmethod\n def num_orders_paid(cls):\n return RawSQL(\n \"\"\"(SELECT COUNT(*) AS \"num_orders__\"\n FROM (\n SELECT \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\", \"sales_order\".\"payment_id\", CAST(SUM(\"sales_orderitem\".\"total\") AS NUMERIC) AS \"subtotal__\", CAST((SUM(\"sales_orderitem\".\"total\") - COALESCE(\"sales_order\".\"discount\", 0)) AS NUMERIC) AS \"total__\", SUM(\"sales_orderitem\".\"amount\") AS \"num_items__\"\n FROM \"sales_order\" LEFT JOIN \"sales_orderitem\" ON \"sales_orderitem\".\"order_id\" = \"sales_order\".\"id\"\n GROUP BY \"sales_order\".\"id\", 
\"sales_order\".\"shift_id\", \"sales_order\".\"discount\"\n ) AS \"__orders\"\n WHERE \"__orders\".\"shift_id\"=\"sales_shift\".\"id\"\n AND (\"__orders\".\"payment_id\" IS NOT NULL OR (\"__orders\".\"payment_id\" IS NULL AND \"__orders\".\"total__\"=0))\n )\"\"\",\n [],\n )\n\n @property\n def product_sales(self):\n qs = (\n self.orders.exclude(order_items__isnull=True)\n .values(\"order_items__product\")\n .distinct()\n .annotate(sold=Sum(\"order_items__amount\"))\n )\n return {\n item[0]: item[1]\n for item in qs.values_list(\"order_items__product__product__name\", \"sold\")\n }\n\n def __str__(self):\n if self.title and self.title != \"\":\n return f\"Shift {self.pk} - {self.title}\"\n return f\"Shift {self.pk}\"\n", "path": "website/sales/models/shift.py"}], "after_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import (\n Sum,\n Q,\n Count,\n)\nfrom django.db.models.expressions import RawSQL\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import (\n RangeCheckProperty,\n queryable_property,\n)\n\nfrom activemembers.models import MemberGroup\nfrom sales.models.product import ProductList\n\n\nclass Shift(models.Model):\n class Meta:\n permissions = [\n (\"override_manager\", _(\"Can access all shifts as manager\")),\n ]\n\n objects = QueryablePropertiesManager()\n\n start = models.DateTimeField(verbose_name=_(\"start\"), blank=False, null=False,)\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift.\"\n ),\n )\n\n title = models.CharField(\n verbose_name=_(\"title\"), blank=True, null=True, max_length=100\n )\n\n product_list = models.ForeignKey(\n ProductList,\n verbose_name=_(\"product list\"),\n blank=False,\n null=False,\n on_delete=models.PROTECT,\n )\n\n managers = models.ManyToManyField(\n MemberGroup, verbose_name=_(\"managers\"), related_name=\"manager_shifts\"\n )\n\n locked = models.BooleanField(\n verbose_name=_(\"locked\"),\n blank=False,\n null=False,\n default=False,\n help_text=_(\n \"Prevent orders being changed or created for this shift. 
This will also clean up all unpaid orders in this shift.\"\n ),\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.orders.filter(created_at__lt=self.start):\n errors.update(\n {\n \"start\": _(\n \"There are already orders created in this shift before this start time.\"\n )\n }\n )\n\n if self.end and self.start and self.end <= self.start:\n errors.update({\"end\": _(\"End cannot be before start.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.locked:\n self.orders.filter(\n (Q(payment__isnull=True) & Q(total_amount__gt=0))\n | Q(order_items__isnull=True)\n ).delete()\n\n return super(Shift, self).save(force_insert, force_update, using, update_fields)\n\n active = RangeCheckProperty(\"start\", \"end\", timezone.now)\n\n @queryable_property(annotation_based=True)\n @classmethod\n def total_revenue(cls):\n return RawSQL(\n \"\"\"(SELECT CAST(COALESCE(SUM(\"__orders\".\"total__\"), 0) AS NUMERIC) AS \"shift_revenue__\"\n FROM (\n SELECT \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\", \"sales_order\".\"payment_id\", CAST(SUM(\"sales_orderitem\".\"total\") AS NUMERIC) AS \"subtotal__\", CAST((SUM(\"sales_orderitem\".\"total\") - COALESCE(\"sales_order\".\"discount\", 0)) AS NUMERIC) AS \"total__\", SUM(\"sales_orderitem\".\"amount\") AS \"num_items__\"\n FROM \"sales_order\" LEFT JOIN \"sales_orderitem\" ON \"sales_orderitem\".\"order_id\" = \"sales_order\".\"id\"\n GROUP BY \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\"\n ) AS \"__orders\"\n WHERE \"__orders\".\"shift_id\"=\"sales_shift\".\"id\"\n )\"\"\",\n [],\n )\n\n @queryable_property(annotation_based=True)\n @classmethod\n def total_revenue_paid(cls):\n return RawSQL(\n \"\"\"(SELECT CAST(COALESCE(SUM(\"__orders\".\"total__\"), 0) AS NUMERIC) AS \"shift_revenue__\"\n FROM (\n SELECT \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\", \"sales_order\".\"payment_id\", CAST(SUM(\"sales_orderitem\".\"total\") AS NUMERIC) AS \"subtotal__\", CAST((SUM(\"sales_orderitem\".\"total\") - COALESCE(\"sales_order\".\"discount\", 0)) AS NUMERIC) AS \"total__\", SUM(\"sales_orderitem\".\"amount\") AS \"num_items__\"\n FROM \"sales_order\" LEFT JOIN \"sales_orderitem\" ON \"sales_orderitem\".\"order_id\" = \"sales_order\".\"id\"\n GROUP BY \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\"\n ) AS \"__orders\"\n WHERE \"__orders\".\"shift_id\"=\"sales_shift\".\"id\"\n AND (\"__orders\".\"payment_id\" IS NOT NULL OR (\"__orders\".\"payment_id\" IS NULL AND \"__orders\".\"total__\"=0))\n )\"\"\",\n [],\n )\n\n @queryable_property(annotation_based=True)\n @classmethod\n def num_orders(cls):\n return Count(\"orders\")\n\n @queryable_property(annotation_based=True)\n @classmethod\n def num_orders_paid(cls):\n return RawSQL(\n \"\"\"(SELECT COUNT(*) AS \"num_orders__\"\n FROM (\n SELECT \"sales_order\".\"id\", \"sales_order\".\"shift_id\", \"sales_order\".\"discount\", \"sales_order\".\"payment_id\", CAST(SUM(\"sales_orderitem\".\"total\") AS NUMERIC) AS \"subtotal__\", CAST((SUM(\"sales_orderitem\".\"total\") - COALESCE(\"sales_order\".\"discount\", 0)) AS NUMERIC) AS \"total__\", SUM(\"sales_orderitem\".\"amount\") AS \"num_items__\"\n FROM \"sales_order\" LEFT JOIN \"sales_orderitem\" ON \"sales_orderitem\".\"order_id\" = \"sales_order\".\"id\"\n GROUP BY \"sales_order\".\"id\", 
\"sales_order\".\"shift_id\", \"sales_order\".\"discount\"\n ) AS \"__orders\"\n WHERE \"__orders\".\"shift_id\"=\"sales_shift\".\"id\"\n AND (\"__orders\".\"payment_id\" IS NOT NULL OR (\"__orders\".\"payment_id\" IS NULL AND \"__orders\".\"total__\"=0))\n )\"\"\",\n [],\n )\n\n @property\n def product_sales(self):\n qs = (\n self.orders.exclude(order_items__isnull=True)\n .values(\"order_items__product\")\n .annotate(sold=Sum(\"order_items__amount\"))\n .order_by()\n )\n return {\n item[0]: item[1]\n for item in qs.values_list(\"order_items__product__product__name\", \"sold\")\n }\n\n def __str__(self):\n if self.title and self.title != \"\":\n return f\"Shift {self.pk} - {self.title}\"\n return f\"Shift {self.pk}\"\n", "path": "website/sales/models/shift.py"}]} | 2,263 | 120 |
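The `product_sales` fix in the concrexit row above (dropping `.distinct()` and appending `.order_by()`) leans on a documented Django behaviour: when `values()` is combined with `annotate()`, any ordering that is in effect, including the model's default `Meta.ordering`, is pulled into the grouping, which can split one product's total across many small groups. The script below is a hypothetical, self-contained reproduction of that effect; it assumes Django is installed, and `Order`/`OrderItem` are made-up stand-ins rather than the concrexit models:

```python
import django
from django.conf import settings

settings.configure(
    DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": ":memory:"}},
    INSTALLED_APPS=[],
)
django.setup()

# imported after setup so the app registry and database connection are ready
from django.db import connection, models
from django.db.models import Sum


class Order(models.Model):
    class Meta:
        app_label = "demo"
        ordering = ["-id"]  # a default ordering, as many real models declare


class OrderItem(models.Model):
    class Meta:
        app_label = "demo"

    order = models.ForeignKey(Order, related_name="order_items", on_delete=models.CASCADE)
    product = models.CharField(max_length=50)
    amount = models.IntegerField()


with connection.schema_editor() as editor:
    editor.create_model(Order)
    editor.create_model(OrderItem)

for _ in range(3):
    order = Order.objects.create()
    OrderItem.objects.create(order=order, product="beer", amount=2)

# With the default ordering in effect, the ordering column can leak into the
# GROUP BY, so each order may form its own group (several rows with sold=2).
split = Order.objects.values("order_items__product").annotate(sold=Sum("order_items__amount"))

# Clearing the ordering groups purely by product: a single row with sold=6.
grouped = (
    Order.objects.values("order_items__product")
    .annotate(sold=Sum("order_items__amount"))
    .order_by()
)

print(list(split))
print(list(grouped))
```

Whether `.distinct()` also contributed to the wrong totals is harder to show in a toy example; the ordering interaction alone already explains per-product counts that are far lower than the number of orders.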
gh_patches_debug_22640 | rasdani/github-patches | git_diff | netket__netket-1086 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TensorBoardLog with MPI
Not sure this can be solved in netket, but when using TensorBoardLog with MPI, I'm getting errors.
MWE (mpitest.py):
```python
import netket as nk
logger = nk.logging.TensorBoardLog("test_mpi_log")
```
When you run the above with
```console
mpirun -np 4 python ./mpitest.py
```
You'll see messages like:
```console
KeyError: 'test_mpi_log'
FileExistsError: [Errno 17] File exists: 'test_mpi_log'
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/localjw/lib/python3.8/site-packages/tensorboardX/record_writer.py", line 47, in directory_check
factory = REGISTERED_FACTORIES[prefix]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netket/logging/tensorboard.py`
Content:
```
1 # Copyright 2021 The NetKet Authors - All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from numbers import Number
16
17 from netket.utils import deprecated
18
19
20 def tree_log(tree, root, data):
21 """
22 Maps all elements in tree, recursively calling tree_log with a new root string,
23 and when it reaches leaves pushes (string, leave) tuples to data.
24
25 Args:
26 tree: a pytree where the leaf nodes contain data
27 root: the root of the tags used to log to tensorboard
28 data: a container modified in place
29
30 """
31 if tree is None:
32 return
33 elif isinstance(tree, list):
34 for (i, val) in enumerate(tree):
35 tree_log(val, root + f"/{i}", data)
36
37 elif isinstance(tree, list) and hasattr(tree, "_fields"):
38 for key in tree._fields:
39 tree_log(getattr(tree, key), root + f"/{key}", data)
40
41 elif isinstance(tree, tuple):
42 for (i, val) in enumerate(tree):
43 tree_log(val, root + f"/{i}", data)
44
45 elif isinstance(tree, dict):
46 for key, value in tree.items():
47 tree_log(value, root + f"/{key}", data) # noqa: F722
48
49 elif hasattr(tree, "to_compound"):
50 tree_log(tree.to_compound()[1], root, data) # noqa: F722
51
52 elif hasattr(tree, "to_dict"):
53 tree_log(tree.to_dict(), root, data) # noqa: F722
54
55 elif isinstance(tree, complex):
56 tree_log(tree.real, root + "/re", data) # noqa: F722
57 tree_log(tree.imag, root + "/im", data) # noqa: F722
58
59 else:
60 data.append((root, tree))
61
62
63 class TensorBoardLog:
64 """
65 Creates a tensorboard logger using tensorboardX's summarywriter.
66 Refer to its documentation for further details
67
68 https://tensorboardx.readthedocs.io/en/latest/tensorboard.html
69
70 TensorBoardX must be installed.
71
72 Args:
73 logdir (string): Save directory location. Default is
74 runs/**CURRENT_DATETIME_HOSTNAME**, which changes after each run.
75 Use hierarchical folder structure to compare
76 between runs easily. e.g. pass in 'runs/exp1', 'runs/exp2', etc.
77 for each new experiment to compare across them.
78 comment (string): Comment logdir suffix appended to the default
79 ``logdir``. If ``logdir`` is assigned, this argument has no effect.
80 purge_step (int):
81 When logging crashes at step :math:`T+X` and restarts at step :math:`T`,
82 any events whose global_step larger or equal to :math:`T` will be
83 purged and hidden from TensorBoard.
84 Note that crashed and resumed experiments should have the same ``logdir``.
85 max_queue (int): Size of the queue for pending events and
86 summaries before one of the 'add' calls forces a flush to disk.
87 Default is ten items.
88 flush_secs (int): How often, in seconds, to flush the
89 pending events and summaries to disk. Default is every two minutes.
90 filename_suffix (string): Suffix added to all event filenames in
91 the logdir directory. More details on filename construction in
92 tensorboard.summary.writer.event_file_writer.EventFileWriter.
93 write_to_disk (boolean):
94 If pass `False`, TensorBoardLog will not write to disk.
95 Examples:
96 Logging optimisation to tensorboard.
97
98 >>> import pytest; pytest.skip("skip automated test of this docstring")
99 >>>
100 >>> import netket as nk
101 >>> # create a summary writer with automatically generated folder name.
102 >>> writer = nk.logging.TensorBoardLog()
103 >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.local/
104 >>> # create a summary writer using the specified folder name.
105 >>> writer = nk.logging.TensorBoardLog("my_experiment")
106 >>> # folder location: my_experiment
107 >>> # create a summary writer with comment appended.
108 >>> writer = nk.logging.TensorBoardLog(comment="LR_0.1_BATCH_16")
109 >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/
110 """
111
112 def __init__(
113 self,
114 *args,
115 **kwargs,
116 ):
117 from tensorboardX import SummaryWriter
118
119 self._writer = SummaryWriter(*args, **kwargs)
120
121 self._old_step = 0
122
123 def __call__(self, step, item, machine):
124
125 data = []
126 tree_log(item, "", data)
127
128 for key, val in data:
129 if isinstance(val, Number):
130 self._writer.add_scalar(key[1:], val, step)
131
132 self._writer.flush()
133 self._old_step = step
134
135 def _flush_log(self):
136 self._writer.flush()
137
138 def _flush_params(self, _):
139 return None
140
141 def flush(self, machine=None):
142 """
143 Writes to file the content of this logger.
144
145 :param machine: optionally also writes the parameters of the machine.
146 """
147 self._flush_log()
148
149 if machine is not None:
150 self._flush_params(machine)
151
152
153 # TODO: deprecate in 3.1
154 @deprecated(
155 "TBLog has been renamed to `TensorBoardLog` and will be removed in the next"
156 "minor release. Please update your usages."
157 )
158 def TBLog(*args, **kwargs):
159 return TensorBoardLog(*args, **kwargs)
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netket/logging/tensorboard.py b/netket/logging/tensorboard.py
--- a/netket/logging/tensorboard.py
+++ b/netket/logging/tensorboard.py
@@ -114,13 +114,24 @@
*args,
**kwargs,
):
- from tensorboardX import SummaryWriter
+ self._init_args = args
+ """Store the args for the lazily initialized SummaryWriter's constructor."""
+ self._init_kwargs = kwargs
+ """Store the kwargs for the lazily initialized SummaryWriter's constructor."""
- self._writer = SummaryWriter(*args, **kwargs)
+ self._writer = None
+ """Lazily initialized summarywriter constructor"""
self._old_step = 0
+ def _init_tensoboard(self):
+ from tensorboardX import SummaryWriter
+
+ self._writer = SummaryWriter(*self._init_args, **self._init_kwargs)
+
def __call__(self, step, item, machine):
+ if self._writer is None:
+ self._init_tensoboard()
data = []
tree_log(item, "", data)
@@ -133,7 +144,8 @@
self._old_step = step
def _flush_log(self):
- self._writer.flush()
+ if self._writer is not None:
+ self._writer.flush()
def _flush_params(self, _):
return None
| {"golden_diff": "diff --git a/netket/logging/tensorboard.py b/netket/logging/tensorboard.py\n--- a/netket/logging/tensorboard.py\n+++ b/netket/logging/tensorboard.py\n@@ -114,13 +114,24 @@\n *args,\n **kwargs,\n ):\n- from tensorboardX import SummaryWriter\n+ self._init_args = args\n+ \"\"\"Store the args for the lazily initialized SummaryWriter's constructor.\"\"\"\n+ self._init_kwargs = kwargs\n+ \"\"\"Store the kwargs for the lazily initialized SummaryWriter's constructor.\"\"\"\n \n- self._writer = SummaryWriter(*args, **kwargs)\n+ self._writer = None\n+ \"\"\"Lazily initialized summarywriter constructor\"\"\"\n \n self._old_step = 0\n \n+ def _init_tensoboard(self):\n+ from tensorboardX import SummaryWriter\n+\n+ self._writer = SummaryWriter(*self._init_args, **self._init_kwargs)\n+\n def __call__(self, step, item, machine):\n+ if self._writer is None:\n+ self._init_tensoboard()\n \n data = []\n tree_log(item, \"\", data)\n@@ -133,7 +144,8 @@\n self._old_step = step\n \n def _flush_log(self):\n- self._writer.flush()\n+ if self._writer is not None:\n+ self._writer.flush()\n \n def _flush_params(self, _):\n return None\n", "issue": "TensorBoardLog with MPI\nNot sure this can be solved in netket, but when using TensorBoardLog with MPI, I'm getting errors.\r\n\r\nMWE (mpitest.py):\r\n```python\r\nimport netket as nk\r\nlogger = nk.logging.TensorBoardLog(\"test_mpi_log\")\r\n```\r\n\r\nWhen you run the above with\r\n```console\r\nmpirun -np 4 python ./mpitest.py\r\n```\r\n\r\nYou'll see messages like:\r\n```console\r\nKeyError: 'test_mpi_log'\r\n\r\nFileExistsError: [Errno 17] File exists: 'test_mpi_log'\r\nTraceback (most recent call last):\r\n File \"/usr/local/anaconda3/envs/localjw/lib/python3.8/site-packages/tensorboardX/record_writer.py\", line 47, in directory_check\r\n factory = REGISTERED_FACTORIES[prefix]\r\n```\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2021 The NetKet Authors - All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom numbers import Number\n\nfrom netket.utils import deprecated\n\n\ndef tree_log(tree, root, data):\n \"\"\"\n Maps all elements in tree, recursively calling tree_log with a new root string,\n and when it reaches leaves pushes (string, leave) tuples to data.\n\n Args:\n tree: a pytree where the leaf nodes contain data\n root: the root of the tags used to log to tensorboard\n data: a container modified in place\n\n \"\"\"\n if tree is None:\n return\n elif isinstance(tree, list):\n for (i, val) in enumerate(tree):\n tree_log(val, root + f\"/{i}\", data)\n\n elif isinstance(tree, list) and hasattr(tree, \"_fields\"):\n for key in tree._fields:\n tree_log(getattr(tree, key), root + f\"/{key}\", data)\n\n elif isinstance(tree, tuple):\n for (i, val) in enumerate(tree):\n tree_log(val, root + f\"/{i}\", data)\n\n elif isinstance(tree, dict):\n for key, value in tree.items():\n tree_log(value, root + f\"/{key}\", data) # noqa: F722\n\n elif hasattr(tree, \"to_compound\"):\n tree_log(tree.to_compound()[1], root, 
data) # noqa: F722\n\n elif hasattr(tree, \"to_dict\"):\n tree_log(tree.to_dict(), root, data) # noqa: F722\n\n elif isinstance(tree, complex):\n tree_log(tree.real, root + \"/re\", data) # noqa: F722\n tree_log(tree.imag, root + \"/im\", data) # noqa: F722\n\n else:\n data.append((root, tree))\n\n\nclass TensorBoardLog:\n \"\"\"\n Creates a tensorboard logger using tensorboardX's summarywriter.\n Refer to its documentation for further details\n\n https://tensorboardx.readthedocs.io/en/latest/tensorboard.html\n\n TensorBoardX must be installed.\n\n Args:\n logdir (string): Save directory location. Default is\n runs/**CURRENT_DATETIME_HOSTNAME**, which changes after each run.\n Use hierarchical folder structure to compare\n between runs easily. e.g. pass in 'runs/exp1', 'runs/exp2', etc.\n for each new experiment to compare across them.\n comment (string): Comment logdir suffix appended to the default\n ``logdir``. If ``logdir`` is assigned, this argument has no effect.\n purge_step (int):\n When logging crashes at step :math:`T+X` and restarts at step :math:`T`,\n any events whose global_step larger or equal to :math:`T` will be\n purged and hidden from TensorBoard.\n Note that crashed and resumed experiments should have the same ``logdir``.\n max_queue (int): Size of the queue for pending events and\n summaries before one of the 'add' calls forces a flush to disk.\n Default is ten items.\n flush_secs (int): How often, in seconds, to flush the\n pending events and summaries to disk. Default is every two minutes.\n filename_suffix (string): Suffix added to all event filenames in\n the logdir directory. More details on filename construction in\n tensorboard.summary.writer.event_file_writer.EventFileWriter.\n write_to_disk (boolean):\n If pass `False`, TensorBoardLog will not write to disk.\n Examples:\n Logging optimisation to tensorboard.\n\n >>> import pytest; pytest.skip(\"skip automated test of this docstring\")\n >>>\n >>> import netket as nk\n >>> # create a summary writer with automatically generated folder name.\n >>> writer = nk.logging.TensorBoardLog()\n >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.local/\n >>> # create a summary writer using the specified folder name.\n >>> writer = nk.logging.TensorBoardLog(\"my_experiment\")\n >>> # folder location: my_experiment\n >>> # create a summary writer with comment appended.\n >>> writer = nk.logging.TensorBoardLog(comment=\"LR_0.1_BATCH_16\")\n >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/\n \"\"\"\n\n def __init__(\n self,\n *args,\n **kwargs,\n ):\n from tensorboardX import SummaryWriter\n\n self._writer = SummaryWriter(*args, **kwargs)\n\n self._old_step = 0\n\n def __call__(self, step, item, machine):\n\n data = []\n tree_log(item, \"\", data)\n\n for key, val in data:\n if isinstance(val, Number):\n self._writer.add_scalar(key[1:], val, step)\n\n self._writer.flush()\n self._old_step = step\n\n def _flush_log(self):\n self._writer.flush()\n\n def _flush_params(self, _):\n return None\n\n def flush(self, machine=None):\n \"\"\"\n Writes to file the content of this logger.\n\n :param machine: optionally also writes the parameters of the machine.\n \"\"\"\n self._flush_log()\n\n if machine is not None:\n self._flush_params(machine)\n\n\n# TODO: deprecate in 3.1\n@deprecated(\n \"TBLog has been renamed to `TensorBoardLog` and will be removed in the next\"\n \"minor release. 
Please update your usages.\"\n)\ndef TBLog(*args, **kwargs):\n return TensorBoardLog(*args, **kwargs)\n", "path": "netket/logging/tensorboard.py"}], "after_files": [{"content": "# Copyright 2021 The NetKet Authors - All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom numbers import Number\n\nfrom netket.utils import deprecated\n\n\ndef tree_log(tree, root, data):\n \"\"\"\n Maps all elements in tree, recursively calling tree_log with a new root string,\n and when it reaches leaves pushes (string, leave) tuples to data.\n\n Args:\n tree: a pytree where the leaf nodes contain data\n root: the root of the tags used to log to tensorboard\n data: a container modified in place\n\n \"\"\"\n if tree is None:\n return\n elif isinstance(tree, list):\n for (i, val) in enumerate(tree):\n tree_log(val, root + f\"/{i}\", data)\n\n elif isinstance(tree, list) and hasattr(tree, \"_fields\"):\n for key in tree._fields:\n tree_log(getattr(tree, key), root + f\"/{key}\", data)\n\n elif isinstance(tree, tuple):\n for (i, val) in enumerate(tree):\n tree_log(val, root + f\"/{i}\", data)\n\n elif isinstance(tree, dict):\n for key, value in tree.items():\n tree_log(value, root + f\"/{key}\", data) # noqa: F722\n\n elif hasattr(tree, \"to_compound\"):\n tree_log(tree.to_compound()[1], root, data) # noqa: F722\n\n elif hasattr(tree, \"to_dict\"):\n tree_log(tree.to_dict(), root, data) # noqa: F722\n\n elif isinstance(tree, complex):\n tree_log(tree.real, root + \"/re\", data) # noqa: F722\n tree_log(tree.imag, root + \"/im\", data) # noqa: F722\n\n else:\n data.append((root, tree))\n\n\nclass TensorBoardLog:\n \"\"\"\n Creates a tensorboard logger using tensorboardX's summarywriter.\n Refer to its documentation for further details\n\n https://tensorboardx.readthedocs.io/en/latest/tensorboard.html\n\n TensorBoardX must be installed.\n\n Args:\n logdir (string): Save directory location. Default is\n runs/**CURRENT_DATETIME_HOSTNAME**, which changes after each run.\n Use hierarchical folder structure to compare\n between runs easily. e.g. pass in 'runs/exp1', 'runs/exp2', etc.\n for each new experiment to compare across them.\n comment (string): Comment logdir suffix appended to the default\n ``logdir``. If ``logdir`` is assigned, this argument has no effect.\n purge_step (int):\n When logging crashes at step :math:`T+X` and restarts at step :math:`T`,\n any events whose global_step larger or equal to :math:`T` will be\n purged and hidden from TensorBoard.\n Note that crashed and resumed experiments should have the same ``logdir``.\n max_queue (int): Size of the queue for pending events and\n summaries before one of the 'add' calls forces a flush to disk.\n Default is ten items.\n flush_secs (int): How often, in seconds, to flush the\n pending events and summaries to disk. Default is every two minutes.\n filename_suffix (string): Suffix added to all event filenames in\n the logdir directory. 
More details on filename construction in\n tensorboard.summary.writer.event_file_writer.EventFileWriter.\n write_to_disk (boolean):\n If pass `False`, TensorBoardLog will not write to disk.\n Examples:\n Logging optimisation to tensorboard.\n\n >>> import pytest; pytest.skip(\"skip automated test of this docstring\")\n >>>\n >>> import netket as nk\n >>> # create a summary writer with automatically generated folder name.\n >>> writer = nk.logging.TensorBoardLog()\n >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.local/\n >>> # create a summary writer using the specified folder name.\n >>> writer = nk.logging.TensorBoardLog(\"my_experiment\")\n >>> # folder location: my_experiment\n >>> # create a summary writer with comment appended.\n >>> writer = nk.logging.TensorBoardLog(comment=\"LR_0.1_BATCH_16\")\n >>> # folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/\n \"\"\"\n\n def __init__(\n self,\n *args,\n **kwargs,\n ):\n self._init_args = args\n \"\"\"Store the args for the lazily initialized SummaryWriter's constructor.\"\"\"\n self._init_kwargs = kwargs\n \"\"\"Store the kwargs for the lazily initialized SummaryWriter's constructor.\"\"\"\n\n self._writer = None\n \"\"\"Lazily initialized summarywriter constructor\"\"\"\n\n self._old_step = 0\n\n def _init_tensoboard(self):\n from tensorboardX import SummaryWriter\n\n self._writer = SummaryWriter(*self._init_args, **self._init_kwargs)\n\n def __call__(self, step, item, machine):\n if self._writer is None:\n self._init_tensoboard()\n\n data = []\n tree_log(item, \"\", data)\n\n for key, val in data:\n if isinstance(val, Number):\n self._writer.add_scalar(key[1:], val, step)\n\n self._writer.flush()\n self._old_step = step\n\n def _flush_log(self):\n if self._writer is not None:\n self._writer.flush()\n\n def _flush_params(self, _):\n return None\n\n def flush(self, machine=None):\n \"\"\"\n Writes to file the content of this logger.\n\n :param machine: optionally also writes the parameters of the machine.\n \"\"\"\n self._flush_log()\n\n if machine is not None:\n self._flush_params(machine)\n\n\n# TODO: deprecate in 3.1\n@deprecated(\n \"TBLog has been renamed to `TensorBoardLog` and will be removed in the next\"\n \"minor release. Please update your usages.\"\n)\ndef TBLog(*args, **kwargs):\n return TensorBoardLog(*args, **kwargs)\n", "path": "netket/logging/tensorboard.py"}]} | 2,180 | 325 |
gh_patches_debug_11884 | rasdani/github-patches | git_diff | redis__redis-py-396 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SentinelConnectionPool. Error when working under uwsgi
It raises following exception:
```
redis.sentinel in get_master_address
AttributeError: 'int' object has no attribute 'discover_master'
```
The reason is `ConnectionPool._checkpid()` calls `__init__()` ([connection.py#L397](https://github.com/andymccurdy/redis-py/blob/7210906be09b969e5833de5a967178c4d6989f14/redis/connection.py#L397)) with
`self.connection_class, self.max_connections`
instead
`self.service_name, self.sentinel_manager`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redis/sentinel.py`
Content:
```
1 import random
2
3 from redis.client import StrictRedis
4 from redis.connection import ConnectionPool, Connection
5 from redis.exceptions import ConnectionError, ResponseError
6 from redis._compat import xrange, nativestr
7
8
9 class MasterNotFoundError(ConnectionError):
10 pass
11
12
13 class SlaveNotFoundError(ConnectionError):
14 pass
15
16
17 class SentinelManagedConnection(Connection):
18 def __init__(self, **kwargs):
19 self.connection_pool = kwargs.pop('connection_pool')
20 super(SentinelManagedConnection, self).__init__(**kwargs)
21
22 def connect_to(self, address):
23 self.host, self.port = address
24 super(SentinelManagedConnection, self).connect()
25 if self.connection_pool.check_connection:
26 self.send_command('PING')
27 if nativestr(self.read_response()) != 'PONG':
28 raise ConnectionError('PING failed')
29
30 def connect(self):
31 if self._sock:
32 return # already connected
33 if self.connection_pool.is_master:
34 self.connect_to(self.connection_pool.get_master_address())
35 else:
36 for slave in self.connection_pool.rotate_slaves():
37 try:
38 return self.connect_to(slave)
39 except ConnectionError:
40 continue
41 raise SlaveNotFoundError # Never be here
42
43
44 class SentinelConnectionPool(ConnectionPool):
45 """
46 Sentinel backed connection pool.
47
48 If ``check_connection`` flag is set to True, SentinelManagedConnection
49 sends a PING command right after establishing the connection.
50 """
51
52 def __init__(self, service_name, sentinel_manager, **kwargs):
53 kwargs['connection_class'] = kwargs.get(
54 'connection_class', SentinelManagedConnection)
55 self.is_master = kwargs.pop('is_master', True)
56 self.check_connection = kwargs.pop('check_connection', False)
57 super(SentinelConnectionPool, self).__init__(**kwargs)
58 self.connection_kwargs['connection_pool'] = self
59 self.service_name = service_name
60 self.sentinel_manager = sentinel_manager
61 self.master_address = None
62 self.slave_rr_counter = None
63
64 def get_master_address(self):
65 master_address = self.sentinel_manager.discover_master(
66 self.service_name)
67 if not self.is_master:
68 pass
69 elif self.master_address is None:
70 self.master_address = master_address
71 elif master_address != self.master_address:
72 self.disconnect() # Master address changed
73 return master_address
74
75 def rotate_slaves(self):
76 "Round-robin slave balancer"
77 slaves = self.sentinel_manager.discover_slaves(self.service_name)
78 if slaves:
79 if self.slave_rr_counter is None:
80 self.slave_rr_counter = random.randint(0, len(slaves) - 1)
81 for _ in xrange(len(slaves)):
82 self.slave_rr_counter = (
83 self.slave_rr_counter + 1) % len(slaves)
84 slave = slaves[self.slave_rr_counter]
85 yield slave
86 # Fallback to the master connection
87 try:
88 yield self.get_master_address()
89 except MasterNotFoundError:
90 pass
91 raise SlaveNotFoundError('No slave found for %r' % (self.service_name))
92
93
94 class Sentinel(object):
95 """
96 Redis Sentinel cluster client
97
98 >>> from redis.sentinel import Sentinel
99 >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
100 >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
101 >>> master.set('foo', 'bar')
102 >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
103 >>> slave.get('foo')
104 'bar'
105
106 ``sentinels`` is a list of sentinel nodes. Each node is represented by
107 a pair (hostname, port).
108
109 Use ``socket_timeout`` to specify a timeout for sentinel clients.
110 It's recommended to use short timeouts.
111
112 Use ``min_other_sentinels`` to filter out sentinels with not enough peers.
113 """
114
115 def __init__(self, sentinels, password=None, socket_timeout=None,
116 min_other_sentinels=0):
117 self.sentinels = [StrictRedis(hostname, port, password=password,
118 socket_timeout=socket_timeout)
119 for hostname, port in sentinels]
120 self.min_other_sentinels = min_other_sentinels
121
122 def check_master_state(self, state, service_name):
123 if not state['is_master'] or state['is_sdown'] or state['is_odown']:
124 return False
125 # Check if our sentinel doesn't see other nodes
126 if state['num-other-sentinels'] < self.min_other_sentinels:
127 return False
128 return True
129
130 def discover_master(self, service_name):
131 """
132 Asks sentinel servers for the Redis master's address corresponding
133 to the service labeled ``service_name``.
134
135 Returns a pair (address, port) or raises MasterNotFoundError if no
136 master is found.
137 """
138 for sentinel_no, sentinel in enumerate(self.sentinels):
139 try:
140 masters = sentinel.sentinel_masters()
141 except ConnectionError:
142 continue
143 state = masters.get(service_name)
144 if state and self.check_master_state(state, service_name):
145 # Put this sentinel at the top of the list
146 self.sentinels[0], self.sentinels[sentinel_no] = (
147 sentinel, self.sentinels[0])
148 return state['ip'], state['port']
149 raise MasterNotFoundError("No master found for %r" % (service_name,))
150
151 def filter_slaves(self, slaves):
152 "Remove slaves that are in an ODOWN or SDOWN state"
153 slaves_alive = []
154 for slave in slaves:
155 if slave['is_odown'] or slave['is_sdown']:
156 continue
157 slaves_alive.append((slave['ip'], slave['port']))
158 return slaves_alive
159
160 def discover_slaves(self, service_name):
161 "Returns a list of alive slaves for service ``service_name``"
162 for sentinel in self.sentinels:
163 try:
164 slaves = sentinel.sentinel_slaves(service_name)
165 except (ConnectionError, ResponseError):
166 continue
167 slaves = self.filter_slaves(slaves)
168 if slaves:
169 return slaves
170 return []
171
172 def master_for(self, service_name, redis_class=StrictRedis,
173 connection_pool_class=SentinelConnectionPool, **kwargs):
174 """
175 Returns a redis client instance for the ``service_name`` master.
176
177 A SentinelConnectionPool class is used to retrive the master's
178 address before establishing a new connection.
179
180 NOTE: If the master's address has changed, any cached connections to
181 the old master are closed.
182
183 By default clients will be a redis.StrictRedis instance. Specify a
184 different class to the ``redis_class`` argument if you desire
185 something different.
186
187 The ``connection_pool_class`` specifies the connection pool to use.
188 The SentinelConnectionPool will be used by default.
189
190 All other arguments are passed directly to the SentinelConnectionPool.
191 """
192 kwargs['is_master'] = True
193 return redis_class(connection_pool=connection_pool_class(
194 service_name, self, **kwargs))
195
196 def slave_for(self, service_name, redis_class=StrictRedis,
197 connection_pool_class=SentinelConnectionPool, **kwargs):
198 """
199 Returns redis client instance for the ``service_name`` slave(s).
200
201 A SentinelConnectionPool class is used to retrive the slave's
202 address before establishing a new connection.
203
204 By default clients will be a redis.StrictRedis instance. Specify a
205 different class to the ``redis_class`` argument if you desire
206 something different.
207
208 The ``connection_pool_class`` specifies the connection pool to use.
209 The SentinelConnectionPool will be used by default.
210
211 All other arguments are passed directly to the SentinelConnectionPool.
212 """
213 kwargs['is_master'] = False
214 return redis_class(connection_pool=connection_pool_class(
215 service_name, self, **kwargs))
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redis/sentinel.py b/redis/sentinel.py
--- a/redis/sentinel.py
+++ b/redis/sentinel.py
@@ -1,3 +1,4 @@
+import os
import random
from redis.client import StrictRedis
@@ -90,6 +91,14 @@
pass
raise SlaveNotFoundError('No slave found for %r' % (self.service_name))
+ def _checkpid(self):
+ if self.pid != os.getpid():
+ self.disconnect()
+ self.__init__(self.service_name, self.sentinel_manager,
+ connection_class=self.connection_class,
+ max_connections=self.max_connections,
+ **self.connection_kwargs)
+
class Sentinel(object):
"""
| {"golden_diff": "diff --git a/redis/sentinel.py b/redis/sentinel.py\n--- a/redis/sentinel.py\n+++ b/redis/sentinel.py\n@@ -1,3 +1,4 @@\n+import os\n import random\n \n from redis.client import StrictRedis\n@@ -90,6 +91,14 @@\n pass\n raise SlaveNotFoundError('No slave found for %r' % (self.service_name))\n \n+ def _checkpid(self):\n+ if self.pid != os.getpid():\n+ self.disconnect()\n+ self.__init__(self.service_name, self.sentinel_manager,\n+ connection_class=self.connection_class,\n+ max_connections=self.max_connections,\n+ **self.connection_kwargs)\n+\n \n class Sentinel(object):\n \"\"\"\n", "issue": "SentinelConnectionPool. Error when working under uwsgi\nIt raises following exception:\n\n```\nredis.sentinel in get_master_address\nAttributeError: 'int' object has no attribute 'discover_master'\n```\n\nThe reason is `ConnectionPool._checkpid()` calls `__init__()` ([connection.py#L397](https://github.com/andymccurdy/redis-py/blob/7210906be09b969e5833de5a967178c4d6989f14/redis/connection.py#L397)) with \n`self.connection_class, self.max_connections`\ninstead\n`self.service_name, self.sentinel_manager`.\n\n", "before_files": [{"content": "import random\n\nfrom redis.client import StrictRedis\nfrom redis.connection import ConnectionPool, Connection\nfrom redis.exceptions import ConnectionError, ResponseError\nfrom redis._compat import xrange, nativestr\n\n\nclass MasterNotFoundError(ConnectionError):\n pass\n\n\nclass SlaveNotFoundError(ConnectionError):\n pass\n\n\nclass SentinelManagedConnection(Connection):\n def __init__(self, **kwargs):\n self.connection_pool = kwargs.pop('connection_pool')\n super(SentinelManagedConnection, self).__init__(**kwargs)\n\n def connect_to(self, address):\n self.host, self.port = address\n super(SentinelManagedConnection, self).connect()\n if self.connection_pool.check_connection:\n self.send_command('PING')\n if nativestr(self.read_response()) != 'PONG':\n raise ConnectionError('PING failed')\n\n def connect(self):\n if self._sock:\n return # already connected\n if self.connection_pool.is_master:\n self.connect_to(self.connection_pool.get_master_address())\n else:\n for slave in self.connection_pool.rotate_slaves():\n try:\n return self.connect_to(slave)\n except ConnectionError:\n continue\n raise SlaveNotFoundError # Never be here\n\n\nclass SentinelConnectionPool(ConnectionPool):\n \"\"\"\n Sentinel backed connection pool.\n\n If ``check_connection`` flag is set to True, SentinelManagedConnection\n sends a PING command right after establishing the connection.\n \"\"\"\n\n def __init__(self, service_name, sentinel_manager, **kwargs):\n kwargs['connection_class'] = kwargs.get(\n 'connection_class', SentinelManagedConnection)\n self.is_master = kwargs.pop('is_master', True)\n self.check_connection = kwargs.pop('check_connection', False)\n super(SentinelConnectionPool, self).__init__(**kwargs)\n self.connection_kwargs['connection_pool'] = self\n self.service_name = service_name\n self.sentinel_manager = sentinel_manager\n self.master_address = None\n self.slave_rr_counter = None\n\n def get_master_address(self):\n master_address = self.sentinel_manager.discover_master(\n self.service_name)\n if not self.is_master:\n pass\n elif self.master_address is None:\n self.master_address = master_address\n elif master_address != self.master_address:\n self.disconnect() # Master address changed\n return master_address\n\n def rotate_slaves(self):\n \"Round-robin slave balancer\"\n slaves = self.sentinel_manager.discover_slaves(self.service_name)\n if slaves:\n if 
self.slave_rr_counter is None:\n self.slave_rr_counter = random.randint(0, len(slaves) - 1)\n for _ in xrange(len(slaves)):\n self.slave_rr_counter = (\n self.slave_rr_counter + 1) % len(slaves)\n slave = slaves[self.slave_rr_counter]\n yield slave\n # Fallback to the master connection\n try:\n yield self.get_master_address()\n except MasterNotFoundError:\n pass\n raise SlaveNotFoundError('No slave found for %r' % (self.service_name))\n\n\nclass Sentinel(object):\n \"\"\"\n Redis Sentinel cluster client\n\n >>> from redis.sentinel import Sentinel\n >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)\n >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)\n >>> master.set('foo', 'bar')\n >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)\n >>> slave.get('foo')\n 'bar'\n\n ``sentinels`` is a list of sentinel nodes. Each node is represented by\n a pair (hostname, port).\n\n Use ``socket_timeout`` to specify a timeout for sentinel clients.\n It's recommended to use short timeouts.\n\n Use ``min_other_sentinels`` to filter out sentinels with not enough peers.\n \"\"\"\n\n def __init__(self, sentinels, password=None, socket_timeout=None,\n min_other_sentinels=0):\n self.sentinels = [StrictRedis(hostname, port, password=password,\n socket_timeout=socket_timeout)\n for hostname, port in sentinels]\n self.min_other_sentinels = min_other_sentinels\n\n def check_master_state(self, state, service_name):\n if not state['is_master'] or state['is_sdown'] or state['is_odown']:\n return False\n # Check if our sentinel doesn't see other nodes\n if state['num-other-sentinels'] < self.min_other_sentinels:\n return False\n return True\n\n def discover_master(self, service_name):\n \"\"\"\n Asks sentinel servers for the Redis master's address corresponding\n to the service labeled ``service_name``.\n\n Returns a pair (address, port) or raises MasterNotFoundError if no\n master is found.\n \"\"\"\n for sentinel_no, sentinel in enumerate(self.sentinels):\n try:\n masters = sentinel.sentinel_masters()\n except ConnectionError:\n continue\n state = masters.get(service_name)\n if state and self.check_master_state(state, service_name):\n # Put this sentinel at the top of the list\n self.sentinels[0], self.sentinels[sentinel_no] = (\n sentinel, self.sentinels[0])\n return state['ip'], state['port']\n raise MasterNotFoundError(\"No master found for %r\" % (service_name,))\n\n def filter_slaves(self, slaves):\n \"Remove slaves that are in an ODOWN or SDOWN state\"\n slaves_alive = []\n for slave in slaves:\n if slave['is_odown'] or slave['is_sdown']:\n continue\n slaves_alive.append((slave['ip'], slave['port']))\n return slaves_alive\n\n def discover_slaves(self, service_name):\n \"Returns a list of alive slaves for service ``service_name``\"\n for sentinel in self.sentinels:\n try:\n slaves = sentinel.sentinel_slaves(service_name)\n except (ConnectionError, ResponseError):\n continue\n slaves = self.filter_slaves(slaves)\n if slaves:\n return slaves\n return []\n\n def master_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, **kwargs):\n \"\"\"\n Returns a redis client instance for the ``service_name`` master.\n\n A SentinelConnectionPool class is used to retrive the master's\n address before establishing a new connection.\n\n NOTE: If the master's address has changed, any cached connections to\n the old master are closed.\n\n By default clients will be a redis.StrictRedis instance. 
Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other arguments are passed directly to the SentinelConnectionPool.\n \"\"\"\n kwargs['is_master'] = True\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **kwargs))\n\n def slave_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, **kwargs):\n \"\"\"\n Returns redis client instance for the ``service_name`` slave(s).\n\n A SentinelConnectionPool class is used to retrive the slave's\n address before establishing a new connection.\n\n By default clients will be a redis.StrictRedis instance. Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other arguments are passed directly to the SentinelConnectionPool.\n \"\"\"\n kwargs['is_master'] = False\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **kwargs))\n", "path": "redis/sentinel.py"}], "after_files": [{"content": "import os\nimport random\n\nfrom redis.client import StrictRedis\nfrom redis.connection import ConnectionPool, Connection\nfrom redis.exceptions import ConnectionError, ResponseError\nfrom redis._compat import xrange, nativestr\n\n\nclass MasterNotFoundError(ConnectionError):\n pass\n\n\nclass SlaveNotFoundError(ConnectionError):\n pass\n\n\nclass SentinelManagedConnection(Connection):\n def __init__(self, **kwargs):\n self.connection_pool = kwargs.pop('connection_pool')\n super(SentinelManagedConnection, self).__init__(**kwargs)\n\n def connect_to(self, address):\n self.host, self.port = address\n super(SentinelManagedConnection, self).connect()\n if self.connection_pool.check_connection:\n self.send_command('PING')\n if nativestr(self.read_response()) != 'PONG':\n raise ConnectionError('PING failed')\n\n def connect(self):\n if self._sock:\n return # already connected\n if self.connection_pool.is_master:\n self.connect_to(self.connection_pool.get_master_address())\n else:\n for slave in self.connection_pool.rotate_slaves():\n try:\n return self.connect_to(slave)\n except ConnectionError:\n continue\n raise SlaveNotFoundError # Never be here\n\n\nclass SentinelConnectionPool(ConnectionPool):\n \"\"\"\n Sentinel backed connection pool.\n\n If ``check_connection`` flag is set to True, SentinelManagedConnection\n sends a PING command right after establishing the connection.\n \"\"\"\n\n def __init__(self, service_name, sentinel_manager, **kwargs):\n kwargs['connection_class'] = kwargs.get(\n 'connection_class', SentinelManagedConnection)\n self.is_master = kwargs.pop('is_master', True)\n self.check_connection = kwargs.pop('check_connection', False)\n super(SentinelConnectionPool, self).__init__(**kwargs)\n self.connection_kwargs['connection_pool'] = self\n self.service_name = service_name\n self.sentinel_manager = sentinel_manager\n self.master_address = None\n self.slave_rr_counter = None\n\n def get_master_address(self):\n master_address = self.sentinel_manager.discover_master(\n self.service_name)\n if not self.is_master:\n pass\n elif self.master_address is None:\n self.master_address = master_address\n elif master_address != self.master_address:\n self.disconnect() # Master address changed\n return 
master_address\n\n def rotate_slaves(self):\n \"Round-robin slave balancer\"\n slaves = self.sentinel_manager.discover_slaves(self.service_name)\n if slaves:\n if self.slave_rr_counter is None:\n self.slave_rr_counter = random.randint(0, len(slaves) - 1)\n for _ in xrange(len(slaves)):\n self.slave_rr_counter = (\n self.slave_rr_counter + 1) % len(slaves)\n slave = slaves[self.slave_rr_counter]\n yield slave\n # Fallback to the master connection\n try:\n yield self.get_master_address()\n except MasterNotFoundError:\n pass\n raise SlaveNotFoundError('No slave found for %r' % (self.service_name))\n\n def _checkpid(self):\n if self.pid != os.getpid():\n self.disconnect()\n self.__init__(self.service_name, self.sentinel_manager,\n connection_class=self.connection_class,\n max_connections=self.max_connections,\n **self.connection_kwargs)\n\n\nclass Sentinel(object):\n \"\"\"\n Redis Sentinel cluster client\n\n >>> from redis.sentinel import Sentinel\n >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)\n >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)\n >>> master.set('foo', 'bar')\n >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)\n >>> slave.get('foo')\n 'bar'\n\n ``sentinels`` is a list of sentinel nodes. Each node is represented by\n a pair (hostname, port).\n\n Use ``socket_timeout`` to specify a timeout for sentinel clients.\n It's recommended to use short timeouts.\n\n Use ``min_other_sentinels`` to filter out sentinels with not enough peers.\n \"\"\"\n\n def __init__(self, sentinels, password=None, socket_timeout=None,\n min_other_sentinels=0):\n self.sentinels = [StrictRedis(hostname, port, password=password,\n socket_timeout=socket_timeout)\n for hostname, port in sentinels]\n self.min_other_sentinels = min_other_sentinels\n\n def check_master_state(self, state, service_name):\n if not state['is_master'] or state['is_sdown'] or state['is_odown']:\n return False\n # Check if our sentinel doesn't see other nodes\n if state['num-other-sentinels'] < self.min_other_sentinels:\n return False\n return True\n\n def discover_master(self, service_name):\n \"\"\"\n Asks sentinel servers for the Redis master's address corresponding\n to the service labeled ``service_name``.\n\n Returns a pair (address, port) or raises MasterNotFoundError if no\n master is found.\n \"\"\"\n for sentinel_no, sentinel in enumerate(self.sentinels):\n try:\n masters = sentinel.sentinel_masters()\n except ConnectionError:\n continue\n state = masters.get(service_name)\n if state and self.check_master_state(state, service_name):\n # Put this sentinel at the top of the list\n self.sentinels[0], self.sentinels[sentinel_no] = (\n sentinel, self.sentinels[0])\n return state['ip'], state['port']\n raise MasterNotFoundError(\"No master found for %r\" % (service_name,))\n\n def filter_slaves(self, slaves):\n \"Remove slaves that are in an ODOWN or SDOWN state\"\n slaves_alive = []\n for slave in slaves:\n if slave['is_odown'] or slave['is_sdown']:\n continue\n slaves_alive.append((slave['ip'], slave['port']))\n return slaves_alive\n\n def discover_slaves(self, service_name):\n \"Returns a list of alive slaves for service ``service_name``\"\n for sentinel in self.sentinels:\n try:\n slaves = sentinel.sentinel_slaves(service_name)\n except (ConnectionError, ResponseError):\n continue\n slaves = self.filter_slaves(slaves)\n if slaves:\n return slaves\n return []\n\n def master_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, 
**kwargs):\n \"\"\"\n Returns a redis client instance for the ``service_name`` master.\n\n A SentinelConnectionPool class is used to retrive the master's\n address before establishing a new connection.\n\n NOTE: If the master's address has changed, any cached connections to\n the old master are closed.\n\n By default clients will be a redis.StrictRedis instance. Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other arguments are passed directly to the SentinelConnectionPool.\n \"\"\"\n kwargs['is_master'] = True\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **kwargs))\n\n def slave_for(self, service_name, redis_class=StrictRedis,\n connection_pool_class=SentinelConnectionPool, **kwargs):\n \"\"\"\n Returns redis client instance for the ``service_name`` slave(s).\n\n A SentinelConnectionPool class is used to retrive the slave's\n address before establishing a new connection.\n\n By default clients will be a redis.StrictRedis instance. Specify a\n different class to the ``redis_class`` argument if you desire\n something different.\n\n The ``connection_pool_class`` specifies the connection pool to use.\n The SentinelConnectionPool will be used by default.\n\n All other arguments are passed directly to the SentinelConnectionPool.\n \"\"\"\n kwargs['is_master'] = False\n return redis_class(connection_pool=connection_pool_class(\n service_name, self, **kwargs))\n", "path": "redis/sentinel.py"}]} | 2,650 | 164 |
gh_patches_debug_35677 | rasdani/github-patches | git_diff | medtagger__MedTagger-519 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Log in user after registration
## Current Behavior
User needs to log in after first registration.
## Expected Behavior
User should be logged into MedTagger right after filling registration form.
## Steps to Reproduce the Problem
1. Register new user.
2. You will be redirected to the login page.
3. Type your login once again...
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/medtagger/api/auth/business.py`
Content:
```
1 """Module responsible for business logic in all Auth endpoint."""
2 from medtagger.api import InvalidArgumentsException
3 from medtagger.api.security import hash_password, verify_user_password, generate_auth_token
4 from medtagger.database.models import User
5 from medtagger.repositories import roles as RolesRepository, users as UsersRepository
6
7
8 def create_user(email: str, password: str, first_name: str, last_name: str) -> int:
9 """Create user with the given user information. Password is being hashed.
10
11 :param email: user email in string format
12 :param password: user password in string format
13 :param first_name: user first name in string format
14 :param last_name: user last name in string format
15
16 :return: id of the new user
17 """
18 user = UsersRepository.get_user_by_email(email)
19 if user:
20 raise InvalidArgumentsException('User with this email already exists')
21 password_hash = hash_password(password)
22 new_user = User(email, password_hash, first_name, last_name)
23 role = RolesRepository.get_role_with_name('volunteer')
24 if not role:
25 raise InvalidArgumentsException('Role does not exist.')
26 new_user.roles.append(role)
27 return UsersRepository.add_new_user(new_user)
28
29
30 def sign_in_user(email: str, password: str) -> str:
31 """Sign in user using given username and password.
32
33 :param email: user email in string format
34 :param password: user password in string format
35
36 :return: authentication token
37 """
38 user = UsersRepository.get_user_by_email(email)
39 if not user:
40 raise InvalidArgumentsException('User does not exist.')
41 if not verify_user_password(user, password):
42 raise InvalidArgumentsException('Password does not match.')
43 return generate_auth_token(user)
44
```
Path: `backend/medtagger/api/auth/service.py`
Content:
```
1 """Module responsible for definition of Auth service."""
2 from typing import Any
3
4 from flask import request
5 from flask_restplus import Resource
6
7 from medtagger.api import api
8 from medtagger.api.auth.business import create_user, sign_in_user
9 from medtagger.api.auth import serializers
10
11 auth_ns = api.namespace('auth', 'Auth methods')
12
13
14 @auth_ns.route('/register')
15 class Register(Resource):
16 """Register user endpoint."""
17
18 @staticmethod
19 @api.expect(serializers.new_user)
20 @api.doc(responses={201: 'User created', 400: 'Invalid arguments'})
21 def post() -> Any:
22 """Register the user."""
23 user = request.json
24 user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])
25 return {'id': user_id}, 201
26
27
28 @auth_ns.route('/sign-in')
29 class SignIn(Resource):
30 """Sign in endpoint."""
31
32 @staticmethod
33 @api.expect(serializers.sign_in)
34 @api.doc(responses={200: 'Signed in', 400: 'User does not exist or wrong password was provided'})
35 def post() -> Any:
36 """Sign in the user."""
37 sign_in = request.json
38 token = sign_in_user(sign_in['email'], sign_in['password'])
39 return {"token": token}, 200
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/medtagger/api/auth/business.py b/backend/medtagger/api/auth/business.py
--- a/backend/medtagger/api/auth/business.py
+++ b/backend/medtagger/api/auth/business.py
@@ -1,11 +1,12 @@
"""Module responsible for business logic in all Auth endpoint."""
+from typing import Tuple
from medtagger.api import InvalidArgumentsException
from medtagger.api.security import hash_password, verify_user_password, generate_auth_token
from medtagger.database.models import User
from medtagger.repositories import roles as RolesRepository, users as UsersRepository
-def create_user(email: str, password: str, first_name: str, last_name: str) -> int:
+def create_user(email: str, password: str, first_name: str, last_name: str) -> Tuple[int, str]:
"""Create user with the given user information. Password is being hashed.
:param email: user email in string format
@@ -13,7 +14,7 @@
:param first_name: user first name in string format
:param last_name: user last name in string format
- :return: id of the new user
+ :return: tuple with user id and authentication token
"""
user = UsersRepository.get_user_by_email(email)
if user:
@@ -24,7 +25,9 @@
if not role:
raise InvalidArgumentsException('Role does not exist.')
new_user.roles.append(role)
- return UsersRepository.add_new_user(new_user)
+ user_id = UsersRepository.add_new_user(new_user)
+ user_token = generate_auth_token(new_user)
+ return user_id, user_token
def sign_in_user(email: str, password: str) -> str:
diff --git a/backend/medtagger/api/auth/service.py b/backend/medtagger/api/auth/service.py
--- a/backend/medtagger/api/auth/service.py
+++ b/backend/medtagger/api/auth/service.py
@@ -21,8 +21,8 @@
def post() -> Any:
"""Register the user."""
user = request.json
- user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])
- return {'id': user_id}, 201
+ user_id, user_token = create_user(user['email'], user['password'], user['firstName'], user['lastName'])
+ return {'id': user_id, 'token': user_token}, 201
@auth_ns.route('/sign-in')
| {"golden_diff": "diff --git a/backend/medtagger/api/auth/business.py b/backend/medtagger/api/auth/business.py\n--- a/backend/medtagger/api/auth/business.py\n+++ b/backend/medtagger/api/auth/business.py\n@@ -1,11 +1,12 @@\n \"\"\"Module responsible for business logic in all Auth endpoint.\"\"\"\n+from typing import Tuple\n from medtagger.api import InvalidArgumentsException\n from medtagger.api.security import hash_password, verify_user_password, generate_auth_token\n from medtagger.database.models import User\n from medtagger.repositories import roles as RolesRepository, users as UsersRepository\n \n \n-def create_user(email: str, password: str, first_name: str, last_name: str) -> int:\n+def create_user(email: str, password: str, first_name: str, last_name: str) -> Tuple[int, str]:\n \"\"\"Create user with the given user information. Password is being hashed.\n \n :param email: user email in string format\n@@ -13,7 +14,7 @@\n :param first_name: user first name in string format\n :param last_name: user last name in string format\n \n- :return: id of the new user\n+ :return: tuple with user id and authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n@@ -24,7 +25,9 @@\n if not role:\n raise InvalidArgumentsException('Role does not exist.')\n new_user.roles.append(role)\n- return UsersRepository.add_new_user(new_user)\n+ user_id = UsersRepository.add_new_user(new_user)\n+ user_token = generate_auth_token(new_user)\n+ return user_id, user_token\n \n \n def sign_in_user(email: str, password: str) -> str:\ndiff --git a/backend/medtagger/api/auth/service.py b/backend/medtagger/api/auth/service.py\n--- a/backend/medtagger/api/auth/service.py\n+++ b/backend/medtagger/api/auth/service.py\n@@ -21,8 +21,8 @@\n def post() -> Any:\n \"\"\"Register the user.\"\"\"\n user = request.json\n- user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n- return {'id': user_id}, 201\n+ user_id, user_token = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n+ return {'id': user_id, 'token': user_token}, 201\n \n \n @auth_ns.route('/sign-in')\n", "issue": "Log in user after registration\n## Current Behavior\r\n\r\nUser needs to log in after first registration.\r\n\r\n## Expected Behavior\r\n\r\nUser should be logged into MedTagger right after filling registration form.\r\n\r\n## Steps to Reproduce the Problem\r\n\r\n 1. Register new user.\r\n 2. You will be redirected to the login page.\r\n 3. Type your login once again...\r\n\n", "before_files": [{"content": "\"\"\"Module responsible for business logic in all Auth endpoint.\"\"\"\nfrom medtagger.api import InvalidArgumentsException\nfrom medtagger.api.security import hash_password, verify_user_password, generate_auth_token\nfrom medtagger.database.models import User\nfrom medtagger.repositories import roles as RolesRepository, users as UsersRepository\n\n\ndef create_user(email: str, password: str, first_name: str, last_name: str) -> int:\n \"\"\"Create user with the given user information. 
Password is being hashed.\n\n :param email: user email in string format\n :param password: user password in string format\n :param first_name: user first name in string format\n :param last_name: user last name in string format\n\n :return: id of the new user\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n raise InvalidArgumentsException('User with this email already exists')\n password_hash = hash_password(password)\n new_user = User(email, password_hash, first_name, last_name)\n role = RolesRepository.get_role_with_name('volunteer')\n if not role:\n raise InvalidArgumentsException('Role does not exist.')\n new_user.roles.append(role)\n return UsersRepository.add_new_user(new_user)\n\n\ndef sign_in_user(email: str, password: str) -> str:\n \"\"\"Sign in user using given username and password.\n\n :param email: user email in string format\n :param password: user password in string format\n\n :return: authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if not user:\n raise InvalidArgumentsException('User does not exist.')\n if not verify_user_password(user, password):\n raise InvalidArgumentsException('Password does not match.')\n return generate_auth_token(user)\n", "path": "backend/medtagger/api/auth/business.py"}, {"content": "\"\"\"Module responsible for definition of Auth service.\"\"\"\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\n\nfrom medtagger.api import api\nfrom medtagger.api.auth.business import create_user, sign_in_user\nfrom medtagger.api.auth import serializers\n\nauth_ns = api.namespace('auth', 'Auth methods')\n\n\n@auth_ns.route('/register')\nclass Register(Resource):\n \"\"\"Register user endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.new_user)\n @api.doc(responses={201: 'User created', 400: 'Invalid arguments'})\n def post() -> Any:\n \"\"\"Register the user.\"\"\"\n user = request.json\n user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n return {'id': user_id}, 201\n\n\n@auth_ns.route('/sign-in')\nclass SignIn(Resource):\n \"\"\"Sign in endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.sign_in)\n @api.doc(responses={200: 'Signed in', 400: 'User does not exist or wrong password was provided'})\n def post() -> Any:\n \"\"\"Sign in the user.\"\"\"\n sign_in = request.json\n token = sign_in_user(sign_in['email'], sign_in['password'])\n return {\"token\": token}, 200\n", "path": "backend/medtagger/api/auth/service.py"}], "after_files": [{"content": "\"\"\"Module responsible for business logic in all Auth endpoint.\"\"\"\nfrom typing import Tuple\nfrom medtagger.api import InvalidArgumentsException\nfrom medtagger.api.security import hash_password, verify_user_password, generate_auth_token\nfrom medtagger.database.models import User\nfrom medtagger.repositories import roles as RolesRepository, users as UsersRepository\n\n\ndef create_user(email: str, password: str, first_name: str, last_name: str) -> Tuple[int, str]:\n \"\"\"Create user with the given user information. 
Password is being hashed.\n\n :param email: user email in string format\n :param password: user password in string format\n :param first_name: user first name in string format\n :param last_name: user last name in string format\n\n :return: tuple with user id and authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n raise InvalidArgumentsException('User with this email already exists')\n password_hash = hash_password(password)\n new_user = User(email, password_hash, first_name, last_name)\n role = RolesRepository.get_role_with_name('volunteer')\n if not role:\n raise InvalidArgumentsException('Role does not exist.')\n new_user.roles.append(role)\n user_id = UsersRepository.add_new_user(new_user)\n user_token = generate_auth_token(new_user)\n return user_id, user_token\n\n\ndef sign_in_user(email: str, password: str) -> str:\n \"\"\"Sign in user using given username and password.\n\n :param email: user email in string format\n :param password: user password in string format\n\n :return: authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if not user:\n raise InvalidArgumentsException('User does not exist.')\n if not verify_user_password(user, password):\n raise InvalidArgumentsException('Password does not match.')\n return generate_auth_token(user)\n", "path": "backend/medtagger/api/auth/business.py"}, {"content": "\"\"\"Module responsible for definition of Auth service.\"\"\"\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\n\nfrom medtagger.api import api\nfrom medtagger.api.auth.business import create_user, sign_in_user\nfrom medtagger.api.auth import serializers\n\nauth_ns = api.namespace('auth', 'Auth methods')\n\n\n@auth_ns.route('/register')\nclass Register(Resource):\n \"\"\"Register user endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.new_user)\n @api.doc(responses={201: 'User created', 400: 'Invalid arguments'})\n def post() -> Any:\n \"\"\"Register the user.\"\"\"\n user = request.json\n user_id, user_token = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n return {'id': user_id, 'token': user_token}, 201\n\n\n@auth_ns.route('/sign-in')\nclass SignIn(Resource):\n \"\"\"Sign in endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.sign_in)\n @api.doc(responses={200: 'Signed in', 400: 'User does not exist or wrong password was provided'})\n def post() -> Any:\n \"\"\"Sign in the user.\"\"\"\n sign_in = request.json\n token = sign_in_user(sign_in['email'], sign_in['password'])\n return {\"token\": token}, 200\n", "path": "backend/medtagger/api/auth/service.py"}]} | 1,188 | 555 |
gh_patches_debug_28812 | rasdani/github-patches | git_diff | sonic-net__sonic-mgmt-4612 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[sensors] `sensor_facts` library is not Python 3 compatible
**Description**
The `sensor_facts` ansible library treats the output of `Popen.communicate()` as a string but it is now a bytes array in Python 3.
See here for details: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate
**Steps to reproduce the issue:**
1. Run the `test_sensors` test
**Describe the results you received:**
Test fails during collection of sensor facts from ansible with `TypeError: expected bytes-like object, not str`
**Describe the results you expected:**
Test succeeds.
**Note: This failure only occurs on images where Python 3 is the default interpreter such as the bullseye builds**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ansible/library/sonic_release.py`
Content:
```
1 #!/usr/bin/python
2 import subprocess
3 from ansible.module_utils.basic import *
4
5 DOCUMENTATION = '''
6 ---
7 module: sonic_release
8 version_added: "0.1"
9 author: Ashok Daparthi ([email protected])
10 short_description: Retrive os release facts from device
11 description:
12 - Retrieve sonic release facts for a device, the facts will be
13 inserted to the ansible_facts key.
14 '''
15
16 EXAMPLES = '''
17 # Gather sonic release facts
18 - name: Gather sonic release
19 sonic_release:
20
21 '''
22 def main():
23
24 module = AnsibleModule(argument_spec=dict())
25 """
26 Gets the SONiC OS version that is running on this device.
27 """
28 sonic_release = None
29 sonic_qos_db_fv_reference_with_table = false
30 try:
31 process = subprocess.Popen(['sonic-cfggen', '-y', '/etc/sonic/sonic_version.yml', '-v', 'release'],
32 stdout=subprocess.PIPE, stdin=subprocess.PIPE)
33 self.stdout, stderr = process.communicate()
34 ret_code = process.returncode
35 except Exception as e:
36 module.fail_json(msg=str(e))
37 else:
38 if ret_code != 0:
39 module.fail_json(msg=stderr)
40 else:
41 sonic_release = self.stdout.split('.')[0].strip()
42 """
43 Check for QOS DB format for Field Value refered with tables or not.
44 """
45 old_format_release_list = ["201811", "201911", "202012", "202106"]
46 if any(release == sonic_release for release in old_format_release_list):
47 sonic_qos_db_fv_reference_with_table = true
48
49 module.exit_json(ansible_facts={'sonic_release': sonic_release, 'sonic_qos_db_fv_reference_with_table': sonic_qos_db_fv_reference_with_table})
50
51 if __name__ == '__main__':
52 main()
53
```
Path: `ansible/library/sensors_facts.py`
Content:
```
1 #!/usr/bin/python
2 import re
3 import subprocess
4 from ansible.module_utils.basic import *
5
6 DOCUMENTATION = '''
7 ---
8 module: sensors_facts
9 version_added: "0.2"
10 author: Pavel Shirshov ([email protected])
11 short_description: Retrieve sensors facts for a device. Set alarm if there is hardware alarm
12 description:
13 - Checks are defined in ansible variables. Argument for the module is 'checks' with dictionary with parameters.
14 - Retrieved facts will be inserted to the 'sensors' key.
15 - Retrieved raw values will be inserted to the 'raw' key.
16 - Recognized alarms will be inserted to the 'alarms' key.
17 - 'alarm' key will be set to True if the device has any alarm situation.
18 - If there's only one PSU on the device, 'warning' is set to True and 'warnings' have a message about it.
19 - sensors data: group_vars/sonic/sku-sensors/data.yml
20 '''
21
22 EXAMPLES = '''
23 # Gather sensors facts
24 - name: Gather sensors
25 sensors_facts: checks={{ sensors['Force10-S6000'] }}
26 - name: Output of sensors information
27 debug: var=vars['sensors']
28
29 '''
30
31 # Example of the source data
32 '''
33 acpitz-virtual-0
34 temp1:
35 temp1_input: 26.800
36 temp1_crit: 127.000
37 temp2:
38 temp2_input: 26.800
39 temp2_crit: 118.000
40 '''
41
42 class SensorsModule(object):
43 def __init__(self):
44 self.module = AnsibleModule(
45 argument_spec=dict(
46 checks=dict(required=True, type='dict'),
47 )
48 )
49
50 self.checks = self.module.params['checks']
51
52 self.stdout = None
53 self.skip_devices = set()
54 self.skip_sensors_attr = set()
55 self.raw = {}
56 self.alarms = {}
57 self.warnings = []
58 self.os_version = self._get_os_version()
59 self.facts = {
60 'raw': self.raw,
61 'alarms': self.alarms,
62 'warnings': self.warnings,
63 'alarm': False,
64 'warning': False,
65 }
66
67 return
68
69 def _get_os_version(self):
70 """
71 Gets the SONiC OS version that is running on this device.
72 """
73 os_version = None
74 try:
75 process = subprocess.Popen(['sonic-cfggen', '-y', '/etc/sonic/sonic_version.yml', '-v', 'build_version'],
76 stdout=subprocess.PIPE, stdin=subprocess.PIPE)
77 self.stdout, stderr = process.communicate()
78 ret_code = process.returncode
79 except Exception as e:
80 self.module.fail_json(msg=str(e))
81 else:
82 if ret_code != 0:
83 self.module.fail_json(msg=stderr)
84 else:
85 os_version = self.stdout.split('.')[0].strip()
86
87 return os_version
88
89 def run(self):
90 '''
91 Main method of the class
92 '''
93 self.collect_sensors()
94 self.parse_sensors()
95 self.psu_check()
96 self.sensor_skip_check()
97 self.check_alarms()
98 self.module.exit_json(ansible_facts={'sensors': self.facts})
99
100 return
101
102 def collect_sensors(self):
103 '''
104 Collect sensors by reading output of 'sensors' utility
105 '''
106 try:
107 process = subprocess.Popen(['sensors', '-A', '-u'], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
108 self.stdout, stderr = process.communicate()
109 ret_code = process.returncode
110 except Exception as e:
111 self.module.fail_json(msg=str(e))
112 else:
113 if ret_code != 0:
114 self.module.fail_json(msg=stderr)
115
116 return
117
118 def parse_sensors(self):
119 '''
120 Parse 'sensors' utility output into the dictionary self.raw
121 '''
122
123 # Return true if the row is an empty line
124 is_empty = lambda row: row == ''
125
126 # Return true if the row is a row which represent device
127 # ('acpitz-virtual-0' in the example above)
128 is_device = lambda row: row[0] != ' ' and row[-1] != ':' and ':' not in row
129
130 # Return true if the row is a row which represent a subsystem of the device
131 # ('temp1:' in the example above)
132 is_subsystem = lambda row: row[0] != ' ' and row[-1] == ':'
133
134 # Return true if the row is a row which represent a sensor value
135 # ('temp1_input: 26.800' in the example above)
136 is_sensor = lambda row: row[0] == ' ' and row[-1] != ':' and ':' in row
137
138 device = None
139 subsystem = None
140 for row in self.stdout.splitlines():
141 if is_empty(row):
142 continue
143 elif is_device(row):
144 device = {}
145 self.raw[row] = device
146 elif is_subsystem(row):
147 subsystem = {}
148 device[row[:-1]] = subsystem
149 elif is_sensor(row):
150 key, value = row.split(':')
151 subsystem[key.strip()] = value.strip()
152
153 return
154
155 def psu_check(self):
156 '''
157 Check that both PSU are presented on the remote system.
158 if it's not true, we set up self.skip_devices set with devices,
159 which should be skipped during checks
160 '''
161
162 for dev, attrs in self.checks['psu_skips'].items():
163 if dev not in self.raw:
164 for idev in attrs['skip_list']:
165 self.skip_devices.add(idev)
166 self.facts['warning'] = True
167 self.warnings.append("PSU #%s [%s] is absent" % (attrs['number'], attrs['side']))
168
169 return
170
171 def sensor_skip_check(self):
172 """
173 Check that if some sensors need to be skipped on the DUT.
174 If the image version on DUT match, we set up self.skip_sensors_attr set,
175 which should be skipped during checks
176 """
177
178 for version, attrs in self.checks['sensor_skip_per_version'].items():
179 if version == self.os_version:
180 for attr in attrs['skip_list']:
181 self.skip_sensors_attr.add(attr)
182 self.warnings.append("sensor attributes [%s] is absent in version %s" % (attr, self.os_version))
183 self.facts['warning'] = True
184
185 return
186
187 def get_raw_value(self, path):
188 '''
189 Get value in raw output in the path 'path'
190 Note: Expected path contains two types of value(string and regex)
191 Regular expression is wrapped with backslash '\'
192 '''
193 keys = path.split('/')
194
195 cur_values = self.raw
196 res = None
197 for key in keys:
198 if '\\' not in key:
199 pattern = re.compile(re.escape(key))
200 else:
201 pattern = re.compile(key.replace('\\', ''))
202 for cur_value in cur_values.keys():
203 res = re.match(pattern, cur_value)
204 if res is not None:
205 cur_values = cur_values[res.group()]
206 break
207 if res is None:
208 return None
209
210 return cur_values
211
212 def skip_the_value(self, path):
213 """
214 Check if the path has been added to self.skip_devices of self.skip_sensors_attr.
215 If yes, this path shall be skipped in the alarm check.
216 """
217 if path.split('/')[0] in self.skip_devices or path in self.skip_sensors_attr:
218 return True
219 return False
220
221 def check_alarms(self):
222 """
223 Calculate alarm situation using the lists
224 """
225 # check alarm lists
226 for hw_part, alarm_list in self.checks['alarms'].items():
227 reasons = '%s_reasons' % hw_part
228 self.alarms[hw_part] = False
229 self.alarms[reasons] = []
230 for path in alarm_list:
231 if self.skip_the_value(path):
232 continue
233 value = self.get_raw_value(path)
234 if value is None:
235 self.alarms[hw_part] = True
236 self.facts['alarm'] = True
237 self.alarms[reasons].append('Path %s is not exist' % path)
238 elif value != '0.000':
239 self.alarms[hw_part] = True
240 self.facts['alarm'] = True
241 self.alarms[reasons].append('Alarm on %s' % path)
242
243 # check compare lists
244 for hw_part, compare_list in self.checks['compares'].items():
245 reasons = '%s_reasons' % hw_part
246 for (path_input, path_max) in compare_list:
247 if self.skip_the_value(path_input):
248 continue
249 value_input = self.get_raw_value(path_input)
250 value_max = self.get_raw_value(path_max)
251 if value_input is None:
252 self.alarms[hw_part] = True
253 self.facts['alarm'] = True
254 self.alarms[reasons].append('Path %s is not exist' % path_input)
255 elif value_max is None:
256 self.alarms[hw_part] = True
257 self.facts['alarm'] = True
258 self.alarms[reasons].append('Path %s is not exist' % path_max)
259 elif float(value_input) >= float(value_max) :
260 self.alarms[hw_part] = True
261 self.facts['alarm'] = True
262 self.alarms[reasons].append('Alarm on %s' % path_input)
263
264 # check not zero lists
265 for hw_part, not_zero_list in self.checks['non_zero'].items():
266 reasons = '%s_reasons' % hw_part
267 for path in not_zero_list:
268 if self.skip_the_value(path):
269 continue
270 value = self.get_raw_value(path)
271 if value is None:
272 self.alarms[hw_part] = True
273 self.facts['alarm'] = True
274 self.alarms[reasons].append('Path %s is not exist' % path)
275 elif value == '0.000':
276 self.alarms[hw_part] = True
277 self.facts['alarm'] = True
278 self.alarms[reasons].append('Alarm on %s' % path)
279
280 return
281
282 def main():
283 sensors = SensorsModule()
284 sensors.run()
285
286 return
287
288 if __name__ == '__main__':
289 main()
290
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ansible/library/sensors_facts.py b/ansible/library/sensors_facts.py
--- a/ansible/library/sensors_facts.py
+++ b/ansible/library/sensors_facts.py
@@ -82,7 +82,7 @@
if ret_code != 0:
self.module.fail_json(msg=stderr)
else:
- os_version = self.stdout.split('.')[0].strip()
+ os_version = self.stdout.decode('utf-8').split('.')[0].strip()
return os_version
@@ -106,6 +106,8 @@
try:
process = subprocess.Popen(['sensors', '-A', '-u'], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
self.stdout, stderr = process.communicate()
+ self.stdout = self.stdout.decode('utf-8')
+ stderr = stderr.decode('utf-8')
ret_code = process.returncode
except Exception as e:
self.module.fail_json(msg=str(e))
diff --git a/ansible/library/sonic_release.py b/ansible/library/sonic_release.py
--- a/ansible/library/sonic_release.py
+++ b/ansible/library/sonic_release.py
@@ -31,6 +31,8 @@
process = subprocess.Popen(['sonic-cfggen', '-y', '/etc/sonic/sonic_version.yml', '-v', 'release'],
stdout=subprocess.PIPE, stdin=subprocess.PIPE)
self.stdout, stderr = process.communicate()
+ self.stdout = self.stdout.decode('utf-8')
+ stderr = stderr.decode('utf-8')
ret_code = process.returncode
except Exception as e:
module.fail_json(msg=str(e))
| {"golden_diff": "diff --git a/ansible/library/sensors_facts.py b/ansible/library/sensors_facts.py\n--- a/ansible/library/sensors_facts.py\n+++ b/ansible/library/sensors_facts.py\n@@ -82,7 +82,7 @@\n if ret_code != 0:\n self.module.fail_json(msg=stderr)\n else:\n- os_version = self.stdout.split('.')[0].strip()\n+ os_version = self.stdout.decode('utf-8').split('.')[0].strip()\n \n return os_version\n \n@@ -106,6 +106,8 @@\n try:\n process = subprocess.Popen(['sensors', '-A', '-u'], stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n self.stdout, stderr = process.communicate()\n+ self.stdout = self.stdout.decode('utf-8')\n+ stderr = stderr.decode('utf-8')\n ret_code = process.returncode\n except Exception as e:\n self.module.fail_json(msg=str(e))\ndiff --git a/ansible/library/sonic_release.py b/ansible/library/sonic_release.py\n--- a/ansible/library/sonic_release.py\n+++ b/ansible/library/sonic_release.py\n@@ -31,6 +31,8 @@\n process = subprocess.Popen(['sonic-cfggen', '-y', '/etc/sonic/sonic_version.yml', '-v', 'release'],\n stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n self.stdout, stderr = process.communicate()\n+ self.stdout = self.stdout.decode('utf-8')\n+ stderr = stderr.decode('utf-8')\n ret_code = process.returncode\n except Exception as e:\n module.fail_json(msg=str(e))\n", "issue": "[sensors] `sensor_facts` library is not Python 3 compatible \n**Description**\r\nThe `sensor_facts` ansible library treats the output of `Popen.communicate()` as a string but it is now a bytes array in Python 3.\r\n\r\nSee here for details: https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate\r\n\r\n**Steps to reproduce the issue:**\r\n1. Run the `test_sensors` test\r\n\r\n**Describe the results you received:**\r\nTest fails during collection of sensor facts from ansible with `TypeError: expected bytes-like object, not str`\r\n\r\n**Describe the results you expected:**\r\nTest succeeds. 
\r\n\r\n**Note: This failure only occurs on images where Python 3 is the default interpreter such as the bullseye builds**\r\n\n", "before_files": [{"content": "#!/usr/bin/python\nimport subprocess\nfrom ansible.module_utils.basic import *\n\nDOCUMENTATION = '''\n---\nmodule: sonic_release\nversion_added: \"0.1\"\nauthor: Ashok Daparthi ([email protected])\nshort_description: Retrive os release facts from device\ndescription:\n - Retrieve sonic release facts for a device, the facts will be\n inserted to the ansible_facts key.\n'''\n\nEXAMPLES = '''\n# Gather sonic release facts\n - name: Gather sonic release\n sonic_release:\n\n'''\ndef main():\n\n module = AnsibleModule(argument_spec=dict())\n \"\"\"\n Gets the SONiC OS version that is running on this device.\n \"\"\"\n sonic_release = None\n sonic_qos_db_fv_reference_with_table = false\n try:\n process = subprocess.Popen(['sonic-cfggen', '-y', '/etc/sonic/sonic_version.yml', '-v', 'release'],\n stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n self.stdout, stderr = process.communicate()\n ret_code = process.returncode\n except Exception as e:\n module.fail_json(msg=str(e))\n else:\n if ret_code != 0:\n module.fail_json(msg=stderr)\n else:\n sonic_release = self.stdout.split('.')[0].strip()\n \"\"\"\n Check for QOS DB format for Field Value refered with tables or not.\n \"\"\"\n old_format_release_list = [\"201811\", \"201911\", \"202012\", \"202106\"]\n if any(release == sonic_release for release in old_format_release_list):\n sonic_qos_db_fv_reference_with_table = true\n\n module.exit_json(ansible_facts={'sonic_release': sonic_release, 'sonic_qos_db_fv_reference_with_table': sonic_qos_db_fv_reference_with_table})\n\nif __name__ == '__main__':\n main()\n", "path": "ansible/library/sonic_release.py"}, {"content": "#!/usr/bin/python\nimport re\nimport subprocess\nfrom ansible.module_utils.basic import *\n\nDOCUMENTATION = '''\n---\nmodule: sensors_facts\nversion_added: \"0.2\"\nauthor: Pavel Shirshov ([email protected])\nshort_description: Retrieve sensors facts for a device. Set alarm if there is hardware alarm\ndescription:\n - Checks are defined in ansible variables. 
Argument for the module is 'checks' with dictionary with parameters.\n - Retrieved facts will be inserted to the 'sensors' key.\n - Retrieved raw values will be inserted to the 'raw' key.\n - Recognized alarms will be inserted to the 'alarms' key.\n - 'alarm' key will be set to True if the device has any alarm situation.\n - If there's only one PSU on the device, 'warning' is set to True and 'warnings' have a message about it.\n - sensors data: group_vars/sonic/sku-sensors/data.yml\n'''\n\nEXAMPLES = '''\n# Gather sensors facts\n - name: Gather sensors\n sensors_facts: checks={{ sensors['Force10-S6000'] }}\n - name: Output of sensors information\n debug: var=vars['sensors']\n\n'''\n\n# Example of the source data\n'''\nacpitz-virtual-0\ntemp1:\n temp1_input: 26.800\n temp1_crit: 127.000\ntemp2:\n temp2_input: 26.800\n temp2_crit: 118.000\n'''\n\nclass SensorsModule(object):\n def __init__(self):\n self.module = AnsibleModule(\n argument_spec=dict(\n checks=dict(required=True, type='dict'),\n )\n )\n\n self.checks = self.module.params['checks']\n\n self.stdout = None\n self.skip_devices = set()\n self.skip_sensors_attr = set()\n self.raw = {}\n self.alarms = {}\n self.warnings = []\n self.os_version = self._get_os_version()\n self.facts = {\n 'raw': self.raw,\n 'alarms': self.alarms,\n 'warnings': self.warnings,\n 'alarm': False,\n 'warning': False,\n }\n\n return\n\n def _get_os_version(self):\n \"\"\"\n Gets the SONiC OS version that is running on this device.\n \"\"\"\n os_version = None\n try:\n process = subprocess.Popen(['sonic-cfggen', '-y', '/etc/sonic/sonic_version.yml', '-v', 'build_version'],\n stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n self.stdout, stderr = process.communicate()\n ret_code = process.returncode\n except Exception as e:\n self.module.fail_json(msg=str(e))\n else:\n if ret_code != 0:\n self.module.fail_json(msg=stderr)\n else:\n os_version = self.stdout.split('.')[0].strip()\n\n return os_version\n\n def run(self):\n '''\n Main method of the class\n '''\n self.collect_sensors()\n self.parse_sensors()\n self.psu_check()\n self.sensor_skip_check()\n self.check_alarms()\n self.module.exit_json(ansible_facts={'sensors': self.facts})\n\n return\n\n def collect_sensors(self):\n '''\n Collect sensors by reading output of 'sensors' utility\n '''\n try:\n process = subprocess.Popen(['sensors', '-A', '-u'], stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n self.stdout, stderr = process.communicate()\n ret_code = process.returncode\n except Exception as e:\n self.module.fail_json(msg=str(e))\n else:\n if ret_code != 0:\n self.module.fail_json(msg=stderr)\n\n return\n\n def parse_sensors(self):\n '''\n Parse 'sensors' utility output into the dictionary self.raw\n '''\n\n # Return true if the row is an empty line\n is_empty = lambda row: row == ''\n\n # Return true if the row is a row which represent device\n # ('acpitz-virtual-0' in the example above)\n is_device = lambda row: row[0] != ' ' and row[-1] != ':' and ':' not in row\n\n # Return true if the row is a row which represent a subsystem of the device\n # ('temp1:' in the example above)\n is_subsystem = lambda row: row[0] != ' ' and row[-1] == ':'\n\n # Return true if the row is a row which represent a sensor value\n # ('temp1_input: 26.800' in the example above)\n is_sensor = lambda row: row[0] == ' ' and row[-1] != ':' and ':' in row\n\n device = None\n subsystem = None\n for row in self.stdout.splitlines():\n if is_empty(row):\n continue\n elif is_device(row):\n device = {}\n self.raw[row] = device\n elif 
is_subsystem(row):\n subsystem = {}\n device[row[:-1]] = subsystem\n elif is_sensor(row):\n key, value = row.split(':')\n subsystem[key.strip()] = value.strip()\n\n return\n\n def psu_check(self):\n '''\n Check that both PSU are presented on the remote system.\n if it's not true, we set up self.skip_devices set with devices,\n which should be skipped during checks\n '''\n\n for dev, attrs in self.checks['psu_skips'].items():\n if dev not in self.raw:\n for idev in attrs['skip_list']:\n self.skip_devices.add(idev)\n self.facts['warning'] = True\n self.warnings.append(\"PSU #%s [%s] is absent\" % (attrs['number'], attrs['side']))\n\n return\n\n def sensor_skip_check(self):\n \"\"\"\n Check that if some sensors need to be skipped on the DUT.\n If the image version on DUT match, we set up self.skip_sensors_attr set,\n which should be skipped during checks\n \"\"\"\n\n for version, attrs in self.checks['sensor_skip_per_version'].items():\n if version == self.os_version:\n for attr in attrs['skip_list']:\n self.skip_sensors_attr.add(attr)\n self.warnings.append(\"sensor attributes [%s] is absent in version %s\" % (attr, self.os_version))\n self.facts['warning'] = True\n\n return\n\n def get_raw_value(self, path):\n '''\n Get value in raw output in the path 'path'\n Note: Expected path contains two types of value(string and regex)\n Regular expression is wrapped with backslash '\\'\n '''\n keys = path.split('/')\n\n cur_values = self.raw\n res = None\n for key in keys:\n if '\\\\' not in key:\n pattern = re.compile(re.escape(key))\n else:\n pattern = re.compile(key.replace('\\\\', ''))\n for cur_value in cur_values.keys():\n res = re.match(pattern, cur_value)\n if res is not None:\n cur_values = cur_values[res.group()]\n break\n if res is None:\n return None\n\n return cur_values\n\n def skip_the_value(self, path):\n \"\"\"\n Check if the path has been added to self.skip_devices of self.skip_sensors_attr.\n If yes, this path shall be skipped in the alarm check.\n \"\"\"\n if path.split('/')[0] in self.skip_devices or path in self.skip_sensors_attr:\n return True\n return False\n\n def check_alarms(self):\n \"\"\"\n Calculate alarm situation using the lists\n \"\"\"\n # check alarm lists\n for hw_part, alarm_list in self.checks['alarms'].items():\n reasons = '%s_reasons' % hw_part\n self.alarms[hw_part] = False\n self.alarms[reasons] = []\n for path in alarm_list:\n if self.skip_the_value(path):\n continue\n value = self.get_raw_value(path)\n if value is None:\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Path %s is not exist' % path)\n elif value != '0.000':\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Alarm on %s' % path)\n\n # check compare lists\n for hw_part, compare_list in self.checks['compares'].items():\n reasons = '%s_reasons' % hw_part\n for (path_input, path_max) in compare_list:\n if self.skip_the_value(path_input):\n continue\n value_input = self.get_raw_value(path_input)\n value_max = self.get_raw_value(path_max)\n if value_input is None:\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Path %s is not exist' % path_input)\n elif value_max is None:\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Path %s is not exist' % path_max)\n elif float(value_input) >= float(value_max) :\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Alarm on %s' % path_input)\n\n # check not zero 
lists\n for hw_part, not_zero_list in self.checks['non_zero'].items():\n reasons = '%s_reasons' % hw_part\n for path in not_zero_list:\n if self.skip_the_value(path):\n continue\n value = self.get_raw_value(path)\n if value is None:\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Path %s is not exist' % path)\n elif value == '0.000':\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Alarm on %s' % path)\n\n return\n\ndef main():\n sensors = SensorsModule()\n sensors.run()\n\n return\n\nif __name__ == '__main__':\n main()\n", "path": "ansible/library/sensors_facts.py"}], "after_files": [{"content": "#!/usr/bin/python\nimport subprocess\nfrom ansible.module_utils.basic import *\n\nDOCUMENTATION = '''\n---\nmodule: sonic_release\nversion_added: \"0.1\"\nauthor: Ashok Daparthi ([email protected])\nshort_description: Retrive os release facts from device\ndescription:\n - Retrieve sonic release facts for a device, the facts will be\n inserted to the ansible_facts key.\n'''\n\nEXAMPLES = '''\n# Gather sonic release facts\n - name: Gather sonic release\n sonic_release:\n\n'''\ndef main():\n\n module = AnsibleModule(argument_spec=dict())\n \"\"\"\n Gets the SONiC OS version that is running on this device.\n \"\"\"\n sonic_release = None\n sonic_qos_db_fv_reference_with_table = false\n try:\n process = subprocess.Popen(['sonic-cfggen', '-y', '/etc/sonic/sonic_version.yml', '-v', 'release'],\n stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n self.stdout, stderr = process.communicate()\n self.stdout = self.stdout.decode('utf-8')\n stderr = stderr.decode('utf-8')\n ret_code = process.returncode\n except Exception as e:\n module.fail_json(msg=str(e))\n else:\n if ret_code != 0:\n module.fail_json(msg=stderr)\n else:\n sonic_release = self.stdout.split('.')[0].strip()\n \"\"\"\n Check for QOS DB format for Field Value refered with tables or not.\n \"\"\"\n old_format_release_list = [\"201811\", \"201911\", \"202012\", \"202106\"]\n if any(release == sonic_release for release in old_format_release_list):\n sonic_qos_db_fv_reference_with_table = true\n\n module.exit_json(ansible_facts={'sonic_release': sonic_release, 'sonic_qos_db_fv_reference_with_table': sonic_qos_db_fv_reference_with_table})\n\nif __name__ == '__main__':\n main()\n", "path": "ansible/library/sonic_release.py"}, {"content": "#!/usr/bin/python\nimport re\nimport subprocess\nfrom ansible.module_utils.basic import *\n\nDOCUMENTATION = '''\n---\nmodule: sensors_facts\nversion_added: \"0.2\"\nauthor: Pavel Shirshov ([email protected])\nshort_description: Retrieve sensors facts for a device. Set alarm if there is hardware alarm\ndescription:\n - Checks are defined in ansible variables. 
Argument for the module is 'checks' with dictionary with parameters.\n - Retrieved facts will be inserted to the 'sensors' key.\n - Retrieved raw values will be inserted to the 'raw' key.\n - Recognized alarms will be inserted to the 'alarms' key.\n - 'alarm' key will be set to True if the device has any alarm situation.\n - If there's only one PSU on the device, 'warning' is set to True and 'warnings' have a message about it.\n - sensors data: group_vars/sonic/sku-sensors/data.yml\n'''\n\nEXAMPLES = '''\n# Gather sensors facts\n - name: Gather sensors\n sensors_facts: checks={{ sensors['Force10-S6000'] }}\n - name: Output of sensors information\n debug: var=vars['sensors']\n\n'''\n\n# Example of the source data\n'''\nacpitz-virtual-0\ntemp1:\n temp1_input: 26.800\n temp1_crit: 127.000\ntemp2:\n temp2_input: 26.800\n temp2_crit: 118.000\n'''\n\nclass SensorsModule(object):\n def __init__(self):\n self.module = AnsibleModule(\n argument_spec=dict(\n checks=dict(required=True, type='dict'),\n )\n )\n\n self.checks = self.module.params['checks']\n\n self.stdout = None\n self.skip_devices = set()\n self.skip_sensors_attr = set()\n self.raw = {}\n self.alarms = {}\n self.warnings = []\n self.os_version = self._get_os_version()\n self.facts = {\n 'raw': self.raw,\n 'alarms': self.alarms,\n 'warnings': self.warnings,\n 'alarm': False,\n 'warning': False,\n }\n\n return\n\n def _get_os_version(self):\n \"\"\"\n Gets the SONiC OS version that is running on this device.\n \"\"\"\n os_version = None\n try:\n process = subprocess.Popen(['sonic-cfggen', '-y', '/etc/sonic/sonic_version.yml', '-v', 'build_version'],\n stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n self.stdout, stderr = process.communicate()\n ret_code = process.returncode\n except Exception as e:\n self.module.fail_json(msg=str(e))\n else:\n if ret_code != 0:\n self.module.fail_json(msg=stderr)\n else:\n os_version = self.stdout.decode('utf-8').split('.')[0].strip()\n\n return os_version\n\n def run(self):\n '''\n Main method of the class\n '''\n self.collect_sensors()\n self.parse_sensors()\n self.psu_check()\n self.sensor_skip_check()\n self.check_alarms()\n self.module.exit_json(ansible_facts={'sensors': self.facts})\n\n return\n\n def collect_sensors(self):\n '''\n Collect sensors by reading output of 'sensors' utility\n '''\n try:\n process = subprocess.Popen(['sensors', '-A', '-u'], stdout=subprocess.PIPE, stdin=subprocess.PIPE)\n self.stdout, stderr = process.communicate()\n self.stdout = self.stdout.decode('utf-8')\n stderr = stderr.decode('utf-8')\n ret_code = process.returncode\n except Exception as e:\n self.module.fail_json(msg=str(e))\n else:\n if ret_code != 0:\n self.module.fail_json(msg=stderr)\n\n return\n\n def parse_sensors(self):\n '''\n Parse 'sensors' utility output into the dictionary self.raw\n '''\n\n # Return true if the row is an empty line\n is_empty = lambda row: row == ''\n\n # Return true if the row is a row which represent device\n # ('acpitz-virtual-0' in the example above)\n is_device = lambda row: row[0] != ' ' and row[-1] != ':' and ':' not in row\n\n # Return true if the row is a row which represent a subsystem of the device\n # ('temp1:' in the example above)\n is_subsystem = lambda row: row[0] != ' ' and row[-1] == ':'\n\n # Return true if the row is a row which represent a sensor value\n # ('temp1_input: 26.800' in the example above)\n is_sensor = lambda row: row[0] == ' ' and row[-1] != ':' and ':' in row\n\n device = None\n subsystem = None\n for row in self.stdout.splitlines():\n if 
is_empty(row):\n continue\n elif is_device(row):\n device = {}\n self.raw[row] = device\n elif is_subsystem(row):\n subsystem = {}\n device[row[:-1]] = subsystem\n elif is_sensor(row):\n key, value = row.split(':')\n subsystem[key.strip()] = value.strip()\n\n return\n\n def psu_check(self):\n '''\n Check that both PSU are presented on the remote system.\n if it's not true, we set up self.skip_devices set with devices,\n which should be skipped during checks\n '''\n\n for dev, attrs in self.checks['psu_skips'].items():\n if dev not in self.raw:\n for idev in attrs['skip_list']:\n self.skip_devices.add(idev)\n self.facts['warning'] = True\n self.warnings.append(\"PSU #%s [%s] is absent\" % (attrs['number'], attrs['side']))\n\n return\n\n def sensor_skip_check(self):\n \"\"\"\n Check that if some sensors need to be skipped on the DUT.\n If the image version on DUT match, we set up self.skip_sensors_attr set,\n which should be skipped during checks\n \"\"\"\n\n for version, attrs in self.checks['sensor_skip_per_version'].items():\n if version == self.os_version:\n for attr in attrs['skip_list']:\n self.skip_sensors_attr.add(attr)\n self.warnings.append(\"sensor attributes [%s] is absent in version %s\" % (attr, self.os_version))\n self.facts['warning'] = True\n\n return\n\n def get_raw_value(self, path):\n '''\n Get value in raw output in the path 'path'\n Note: Expected path contains two types of value(string and regex)\n Regular expression is wrapped with backslash '\\'\n '''\n keys = path.split('/')\n\n cur_values = self.raw\n res = None\n for key in keys:\n if '\\\\' not in key:\n pattern = re.compile(re.escape(key))\n else:\n pattern = re.compile(key.replace('\\\\', ''))\n for cur_value in cur_values.keys():\n res = re.match(pattern, cur_value)\n if res is not None:\n cur_values = cur_values[res.group()]\n break\n if res is None:\n return None\n\n return cur_values\n\n def skip_the_value(self, path):\n \"\"\"\n Check if the path has been added to self.skip_devices of self.skip_sensors_attr.\n If yes, this path shall be skipped in the alarm check.\n \"\"\"\n if path.split('/')[0] in self.skip_devices or path in self.skip_sensors_attr:\n return True\n return False\n\n def check_alarms(self):\n \"\"\"\n Calculate alarm situation using the lists\n \"\"\"\n # check alarm lists\n for hw_part, alarm_list in self.checks['alarms'].items():\n reasons = '%s_reasons' % hw_part\n self.alarms[hw_part] = False\n self.alarms[reasons] = []\n for path in alarm_list:\n if self.skip_the_value(path):\n continue\n value = self.get_raw_value(path)\n if value is None:\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Path %s is not exist' % path)\n elif value != '0.000':\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Alarm on %s' % path)\n\n # check compare lists\n for hw_part, compare_list in self.checks['compares'].items():\n reasons = '%s_reasons' % hw_part\n for (path_input, path_max) in compare_list:\n if self.skip_the_value(path_input):\n continue\n value_input = self.get_raw_value(path_input)\n value_max = self.get_raw_value(path_max)\n if value_input is None:\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Path %s is not exist' % path_input)\n elif value_max is None:\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Path %s is not exist' % path_max)\n elif float(value_input) >= float(value_max) :\n self.alarms[hw_part] = True\n 
self.facts['alarm'] = True\n self.alarms[reasons].append('Alarm on %s' % path_input)\n\n # check not zero lists\n for hw_part, not_zero_list in self.checks['non_zero'].items():\n reasons = '%s_reasons' % hw_part\n for path in not_zero_list:\n if self.skip_the_value(path):\n continue\n value = self.get_raw_value(path)\n if value is None:\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Path %s is not exist' % path)\n elif value == '0.000':\n self.alarms[hw_part] = True\n self.facts['alarm'] = True\n self.alarms[reasons].append('Alarm on %s' % path)\n\n return\n\ndef main():\n sensors = SensorsModule()\n sensors.run()\n\n return\n\nif __name__ == '__main__':\n main()\n", "path": "ansible/library/sensors_facts.py"}]} | 4,012 | 364 |
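The fix above turns on a Python 2 to Python 3 behavioral change: under Python 3, `Popen.communicate()` returns `bytes`, so string operations such as `.split('.')` on the captured output fail until it is decoded. A minimal sketch of the decode-before-parsing pattern the patch applies (the wrapper function and its name are illustrative, not part of the module):

```python
import subprocess

def run_and_decode(cmd):
    # communicate() yields bytes under Python 3; decode before any string handling
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()
    return process.returncode, stdout.decode("utf-8"), stderr.decode("utf-8")

# e.g. parsing the dotted version string, as _get_os_version does in the file above
rc, out, err = run_and_decode(
    ["sonic-cfggen", "-y", "/etc/sonic/sonic_version.yml", "-v", "build_version"]
)
if rc == 0:
    os_version = out.split(".")[0].strip()
```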
gh_patches_debug_23773 | rasdani/github-patches | git_diff | mirumee__ariadne-481 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unexpected Snake Case for Acronyms
The snake case conversion of the `snake_case_fallback_resolvers` yields unexpected results for words with multiple uppercase letters in a row, e.g.
- `getHTTPResponse` is converted to `get_h_t_t_p_response`, or
 - `externalID` is converted to `external_i_d`.
These are unlikely names for python attributes and I would expect the resolver to look for `get_http_response` / `external_id` instead.
Possible implementations for the camel to snake case conversions are discussed here: https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ariadne/utils.py`
Content:
```
1 import asyncio
2 from functools import wraps
3 from typing import Optional, Union, Callable, Dict, Any
4
5 from graphql import GraphQLError, parse
6
7
8 def convert_camel_case_to_snake(graphql_name: str) -> str:
9 python_name = ""
10 for i, c in enumerate(graphql_name.lower()):
11 if (
12 i > 0
13 and (
14 all(
15 (
16 c != graphql_name[i],
17 graphql_name[i - 1] != "_",
18 graphql_name[i - 1] == python_name[-1],
19 )
20 )
21 )
22 or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))
23 ):
24 python_name += "_"
25 python_name += c
26 return python_name
27
28
29 def gql(value: str) -> str:
30 parse(value)
31 return value
32
33
34 def unwrap_graphql_error(
35 error: Union[GraphQLError, Optional[Exception]]
36 ) -> Optional[Exception]:
37 if isinstance(error, GraphQLError):
38 return unwrap_graphql_error(error.original_error)
39 return error
40
41
42 def convert_kwargs_to_snake_case(func: Callable) -> Callable:
43 def convert_to_snake_case(d: Dict) -> Dict:
44 converted: Dict = {}
45 for k, v in d.items():
46 if isinstance(v, dict):
47 v = convert_to_snake_case(v)
48 if isinstance(v, list):
49 v = [convert_to_snake_case(i) if isinstance(i, dict) else i for i in v]
50 converted[convert_camel_case_to_snake(k)] = v
51 return converted
52
53 if asyncio.iscoroutinefunction(func):
54
55 @wraps(func)
56 async def async_wrapper(*args: Any, **kwargs: Any) -> Any:
57 return await func(*args, **convert_to_snake_case(kwargs))
58
59 return async_wrapper
60
61 @wraps(func)
62 def wrapper(*args: Any, **kwargs: Any) -> Any:
63 return func(*args, **convert_to_snake_case(kwargs))
64
65 return wrapper
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ariadne/utils.py b/ariadne/utils.py
--- a/ariadne/utils.py
+++ b/ariadne/utils.py
@@ -6,20 +6,29 @@
def convert_camel_case_to_snake(graphql_name: str) -> str:
+ # pylint: disable=too-many-boolean-expressions
+ max_index = len(graphql_name) - 1
+ lowered_name = graphql_name.lower()
+
python_name = ""
- for i, c in enumerate(graphql_name.lower()):
- if (
- i > 0
- and (
- all(
- (
- c != graphql_name[i],
- graphql_name[i - 1] != "_",
- graphql_name[i - 1] == python_name[-1],
- )
- )
+ for i, c in enumerate(lowered_name):
+ if i > 0 and (
+ # testWord -> test_word
+ (
+ c != graphql_name[i]
+ and graphql_name[i - 1] != "_"
+ and graphql_name[i - 1] == python_name[-1]
+ )
+ # TESTWord -> test_word
+ or (
+ i < max_index
+ and graphql_name[i] != lowered_name[i]
+ and graphql_name[i + 1] == lowered_name[i + 1]
)
- or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))
+ # test134 -> test_134
+ or (c.isdigit() and not graphql_name[i - 1].isdigit())
+ # 134test -> 134_test
+ or (not c.isdigit() and graphql_name[i - 1].isdigit())
):
python_name += "_"
python_name += c
| {"golden_diff": "diff --git a/ariadne/utils.py b/ariadne/utils.py\n--- a/ariadne/utils.py\n+++ b/ariadne/utils.py\n@@ -6,20 +6,29 @@\n \n \n def convert_camel_case_to_snake(graphql_name: str) -> str:\n+ # pylint: disable=too-many-boolean-expressions\n+ max_index = len(graphql_name) - 1\n+ lowered_name = graphql_name.lower()\n+\n python_name = \"\"\n- for i, c in enumerate(graphql_name.lower()):\n- if (\n- i > 0\n- and (\n- all(\n- (\n- c != graphql_name[i],\n- graphql_name[i - 1] != \"_\",\n- graphql_name[i - 1] == python_name[-1],\n- )\n- )\n+ for i, c in enumerate(lowered_name):\n+ if i > 0 and (\n+ # testWord -> test_word\n+ (\n+ c != graphql_name[i]\n+ and graphql_name[i - 1] != \"_\"\n+ and graphql_name[i - 1] == python_name[-1]\n+ )\n+ # TESTWord -> test_word\n+ or (\n+ i < max_index\n+ and graphql_name[i] != lowered_name[i]\n+ and graphql_name[i + 1] == lowered_name[i + 1]\n )\n- or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))\n+ # test134 -> test_134\n+ or (c.isdigit() and not graphql_name[i - 1].isdigit())\n+ # 134test -> 134_test\n+ or (not c.isdigit() and graphql_name[i - 1].isdigit())\n ):\n python_name += \"_\"\n python_name += c\n", "issue": "Unexpected Snake Case for Acronyms\nThe snake case conversion of the `snake_case_fallback_resolvers` yields unexpected results for words with multiple uppercase letters in a row, e.g.\r\n - `getHTTPResponse` is converted to `get_h_t_t_p_response`, or\r\n - `externalID` is converted to \"external_i_d`. \r\n\r\nThese are unlikely names for python attributes and I would expect the resolver to look for `get_http_response` / `external_id` instead.\r\n\r\nPossible implementations for the camel to snake case conversions are discussed here: https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case\nUnexpected Snake Case for Acronyms\nThe snake case conversion of the `snake_case_fallback_resolvers` yields unexpected results for words with multiple uppercase letters in a row, e.g.\r\n - `getHTTPResponse` is converted to `get_h_t_t_p_response`, or\r\n - `externalID` is converted to \"external_i_d`. 
\r\n\r\nThese are unlikely names for python attributes and I would expect the resolver to look for `get_http_response` / `external_id` instead.\r\n\r\nPossible implementations for the camel to snake case conversions are discussed here: https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case\n", "before_files": [{"content": "import asyncio\nfrom functools import wraps\nfrom typing import Optional, Union, Callable, Dict, Any\n\nfrom graphql import GraphQLError, parse\n\n\ndef convert_camel_case_to_snake(graphql_name: str) -> str:\n python_name = \"\"\n for i, c in enumerate(graphql_name.lower()):\n if (\n i > 0\n and (\n all(\n (\n c != graphql_name[i],\n graphql_name[i - 1] != \"_\",\n graphql_name[i - 1] == python_name[-1],\n )\n )\n )\n or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))\n ):\n python_name += \"_\"\n python_name += c\n return python_name\n\n\ndef gql(value: str) -> str:\n parse(value)\n return value\n\n\ndef unwrap_graphql_error(\n error: Union[GraphQLError, Optional[Exception]]\n) -> Optional[Exception]:\n if isinstance(error, GraphQLError):\n return unwrap_graphql_error(error.original_error)\n return error\n\n\ndef convert_kwargs_to_snake_case(func: Callable) -> Callable:\n def convert_to_snake_case(d: Dict) -> Dict:\n converted: Dict = {}\n for k, v in d.items():\n if isinstance(v, dict):\n v = convert_to_snake_case(v)\n if isinstance(v, list):\n v = [convert_to_snake_case(i) if isinstance(i, dict) else i for i in v]\n converted[convert_camel_case_to_snake(k)] = v\n return converted\n\n if asyncio.iscoroutinefunction(func):\n\n @wraps(func)\n async def async_wrapper(*args: Any, **kwargs: Any) -> Any:\n return await func(*args, **convert_to_snake_case(kwargs))\n\n return async_wrapper\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n return func(*args, **convert_to_snake_case(kwargs))\n\n return wrapper\n", "path": "ariadne/utils.py"}], "after_files": [{"content": "import asyncio\nfrom functools import wraps\nfrom typing import Optional, Union, Callable, Dict, Any\n\nfrom graphql import GraphQLError, parse\n\n\ndef convert_camel_case_to_snake(graphql_name: str) -> str:\n # pylint: disable=too-many-boolean-expressions\n max_index = len(graphql_name) - 1\n lowered_name = graphql_name.lower()\n\n python_name = \"\"\n for i, c in enumerate(lowered_name):\n if i > 0 and (\n # testWord -> test_word\n (\n c != graphql_name[i]\n and graphql_name[i - 1] != \"_\"\n and graphql_name[i - 1] == python_name[-1]\n )\n # TESTWord -> test_word\n or (\n i < max_index\n and graphql_name[i] != lowered_name[i]\n and graphql_name[i + 1] == lowered_name[i + 1]\n )\n # test134 -> test_134\n or (c.isdigit() and not graphql_name[i - 1].isdigit())\n # 134test -> 134_test\n or (not c.isdigit() and graphql_name[i - 1].isdigit())\n ):\n python_name += \"_\"\n python_name += c\n return python_name\n\n\ndef gql(value: str) -> str:\n parse(value)\n return value\n\n\ndef unwrap_graphql_error(\n error: Union[GraphQLError, Optional[Exception]]\n) -> Optional[Exception]:\n if isinstance(error, GraphQLError):\n return unwrap_graphql_error(error.original_error)\n return error\n\n\ndef convert_kwargs_to_snake_case(func: Callable) -> Callable:\n def convert_to_snake_case(d: Dict) -> Dict:\n converted: Dict = {}\n for k, v in d.items():\n if isinstance(v, dict):\n v = convert_to_snake_case(v)\n if isinstance(v, list):\n v = [convert_to_snake_case(i) if isinstance(i, dict) else i for i in v]\n converted[convert_camel_case_to_snake(k)] = 
v\n return converted\n\n if asyncio.iscoroutinefunction(func):\n\n @wraps(func)\n async def async_wrapper(*args: Any, **kwargs: Any) -> Any:\n return await func(*args, **convert_to_snake_case(kwargs))\n\n return async_wrapper\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n return func(*args, **convert_to_snake_case(kwargs))\n\n return wrapper\n", "path": "ariadne/utils.py"}]} | 1,101 | 409 |
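For comparison, the regex-based conversion discussed in the StackOverflow thread linked in the issue handles acronym runs directly. This is a sketch of that alternative, not the character-loop implementation Ariadne adopted in the diff above:

```python
import re

def camel_to_snake(name: str) -> str:
    # split an acronym from a following capitalized word: "HTTPResponse" -> "HTTP_Response"
    name = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    # split a lowercase letter or digit from a following uppercase letter: "getHTTP" -> "get_HTTP"
    name = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", name)
    return name.lower()

assert camel_to_snake("getHTTPResponse") == "get_http_response"
assert camel_to_snake("externalID") == "external_id"
```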
gh_patches_debug_10822 | rasdani/github-patches | git_diff | translate__pootle-5932 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
IntegrityError: (1062, "Duplicate entry 'xxx-stats' for key > 'pootle_revision_content_type_id_xxx_uniq'")
This error has been spotted in the wild.
From code review it's hard to see how it's happening, and I ~~haven't~~ managed to reproduce - *by clicking very fast on editor buttons*
~~It may be related to update_stores~~
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_revision/utils.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import uuid
10
11 from django.contrib.contenttypes.models import ContentType
12 from django.utils.functional import cached_property
13
14 from pootle.core.url_helpers import split_pootle_path
15 from pootle_app.models import Directory
16
17 from .models import Revision
18
19
20 class RevisionContext(object):
21
22 def __init__(self, context):
23 self.context = context
24
25 @cached_property
26 def content_type_id(self):
27 return ContentType.objects.get_for_model(
28 self.context._meta.model).id
29
30 @property
31 def revision_context(self):
32 return self.context.revisions
33
34 def get(self, key=None):
35 """get a revision from db or set one if not set"""
36 if not self.revision_context:
37 return ""
38 return self.revision_context.filter(
39 key=key).values_list("value", flat=True).first() or ""
40
41 def set(self, keys=None, value=None):
42 """get a revision from db or set one if not set"""
43 self.revision_context.filter(key__in=keys).delete()
44 if value:
45 revisions = []
46 for k in keys:
47 revisions.append(
48 Revision(
49 content_type_id=self.content_type_id,
50 object_id=self.context.pk,
51 key=k,
52 value=value))
53 Revision.objects.bulk_create(revisions)
54
55
56 class DirectoryRevision(RevisionContext):
57 pass
58
59
60 class LanguageRevision(RevisionContext):
61
62 @property
63 def revision_context(self):
64 return self.context.directory.revisions
65
66
67 class ProjectRevision(RevisionContext):
68 pass
69
70
71 class ProjectResourceRevision(RevisionContext):
72
73 @property
74 def revision_context(self):
75 first_child = self.context.children.first()
76 if not first_child:
77 return
78 return Directory.objects.get(
79 pootle_path="/projects/%s/"
80 % split_pootle_path(first_child.pootle_path)[1]).revisions
81
82
83 class ProjectSetRevision(RevisionContext):
84
85 @property
86 def revision_context(self):
87 first_project = self.context.children.first()
88 if not first_project:
89 return
90 return first_project.directory.parent.revisions
91
92
93 class TPRevision(RevisionContext):
94 pass
95
96
97 class RevisionUpdater(object):
98
99 def __init__(self, context=None, object_list=None):
100 self.context = context
101 self.object_list = object_list
102
103 @property
104 def object_list_paths(self):
105 return set(
106 self.object_list.values_list(
107 self.related_pootle_path,
108 flat=True))
109
110 @property
111 def all_pootle_paths(self):
112 if self.context and not self.object_list:
113 return set([self.context_path])
114 elif self.object_list:
115 parents = self.object_list_paths
116 if self.context:
117 parents.add(self.context_path)
118 return parents
119 return []
120
121 @property
122 def parents(self):
123 """calculate unit parents for cache update"""
124 return Directory.objects.filter(
125 pootle_path__in=self.get_parent_paths(self.all_pootle_paths))
126
127 def get_parent_paths(self, pootle_paths):
128 paths = set(["/projects/"])
129 for pootle_path in pootle_paths:
130 lang_code, proj_code, dir_path, __ = split_pootle_path(pootle_path)
131 paths.add("/projects/%s/" % proj_code)
132 paths.add("/%s/" % lang_code)
133 paths.add("/%s/%s/" % (lang_code, proj_code))
134 dir_path_parts = dir_path.split("/")
135 for i, name in enumerate(dir_path_parts):
136 if not name:
137 continue
138 paths.add(
139 "/%s/%s/%s/"
140 % (lang_code,
141 proj_code,
142 "/".join(dir_path_parts[:i + 1])))
143 return paths
144
145 @property
146 def new_revision(self):
147 return uuid.uuid4().hex
148
149 @cached_property
150 def content_type_id(self):
151 return ContentType.objects.get_for_model(Directory).id
152
153 def get_revisions(self, parents, keys=None):
154 return Revision.objects.filter(
155 content_type_id=self.content_type_id,
156 key__in=keys or [""],
157 object_id__in=parents)
158
159 def create_revisions(self, parents, keys=None):
160 new_revision = self.new_revision
161 for parent in parents:
162 for key in keys or [""]:
163 yield Revision(
164 content_type_id=self.content_type_id,
165 object_id=parent,
166 key=key,
167 value=new_revision)
168
169 def update(self, keys=None):
170 parents = list(self.parents.values_list("id", flat=True))
171 revisions = self.get_revisions(parents, keys=keys)
172 revisions.delete()
173 Revision.objects.bulk_create(
174 self.create_revisions(parents, keys=keys))
175
176
177 class UnitRevisionUpdater(RevisionUpdater):
178 related_pootle_path = "store__parent__pootle_path"
179
180 @property
181 def context_path(self):
182 return self.context.store.parent.pootle_path
183
184
185 class StoreRevisionUpdater(RevisionUpdater):
186 related_pootle_path = "parent__pootle_path"
187
188 @property
189 def context_path(self):
190 return self.context.parent.pootle_path
191
192
193 class DirectoryRevisionUpdater(RevisionUpdater):
194 related_pootle_path = "pootle_path"
195
196 @property
197 def context_path(self):
198 return self.context.pootle_path
199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_revision/utils.py b/pootle/apps/pootle_revision/utils.py
--- a/pootle/apps/pootle_revision/utils.py
+++ b/pootle/apps/pootle_revision/utils.py
@@ -169,7 +169,10 @@
def update(self, keys=None):
parents = list(self.parents.values_list("id", flat=True))
revisions = self.get_revisions(parents, keys=keys)
- revisions.delete()
+ # manually get the list of ids and delete those to prevent
+ # django race condition
+ revision_ids = list(revisions.values_list("id", flat=True))
+ revisions.filter(id__in=revision_ids).delete()
Revision.objects.bulk_create(
self.create_revisions(parents, keys=keys))
| {"golden_diff": "diff --git a/pootle/apps/pootle_revision/utils.py b/pootle/apps/pootle_revision/utils.py\n--- a/pootle/apps/pootle_revision/utils.py\n+++ b/pootle/apps/pootle_revision/utils.py\n@@ -169,7 +169,10 @@\n def update(self, keys=None):\n parents = list(self.parents.values_list(\"id\", flat=True))\n revisions = self.get_revisions(parents, keys=keys)\n- revisions.delete()\n+ # manually get the list of ids and delete those to prevent\n+ # django race condition\n+ revision_ids = list(revisions.values_list(\"id\", flat=True))\n+ revisions.filter(id__in=revision_ids).delete()\n Revision.objects.bulk_create(\n self.create_revisions(parents, keys=keys))\n", "issue": "IntegrityError: (1062, \"Duplicate entry 'xxx-stats' for key > 'pootle_revision_content_type_id_xxx_uniq'\")\nThis error has been spotted in the wild.\r\n\r\nFrom code review its hard to see how its happening, and i ~~havent~~ managed to reproduce - *by clicking very fast on editor buttons*\r\n\r\n~~It may be related to update_stores~~\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport uuid\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.url_helpers import split_pootle_path\nfrom pootle_app.models import Directory\n\nfrom .models import Revision\n\n\nclass RevisionContext(object):\n\n def __init__(self, context):\n self.context = context\n\n @cached_property\n def content_type_id(self):\n return ContentType.objects.get_for_model(\n self.context._meta.model).id\n\n @property\n def revision_context(self):\n return self.context.revisions\n\n def get(self, key=None):\n \"\"\"get a revision from db or set one if not set\"\"\"\n if not self.revision_context:\n return \"\"\n return self.revision_context.filter(\n key=key).values_list(\"value\", flat=True).first() or \"\"\n\n def set(self, keys=None, value=None):\n \"\"\"get a revision from db or set one if not set\"\"\"\n self.revision_context.filter(key__in=keys).delete()\n if value:\n revisions = []\n for k in keys:\n revisions.append(\n Revision(\n content_type_id=self.content_type_id,\n object_id=self.context.pk,\n key=k,\n value=value))\n Revision.objects.bulk_create(revisions)\n\n\nclass DirectoryRevision(RevisionContext):\n pass\n\n\nclass LanguageRevision(RevisionContext):\n\n @property\n def revision_context(self):\n return self.context.directory.revisions\n\n\nclass ProjectRevision(RevisionContext):\n pass\n\n\nclass ProjectResourceRevision(RevisionContext):\n\n @property\n def revision_context(self):\n first_child = self.context.children.first()\n if not first_child:\n return\n return Directory.objects.get(\n pootle_path=\"/projects/%s/\"\n % split_pootle_path(first_child.pootle_path)[1]).revisions\n\n\nclass ProjectSetRevision(RevisionContext):\n\n @property\n def revision_context(self):\n first_project = self.context.children.first()\n if not first_project:\n return\n return first_project.directory.parent.revisions\n\n\nclass TPRevision(RevisionContext):\n pass\n\n\nclass RevisionUpdater(object):\n\n def __init__(self, context=None, object_list=None):\n self.context = context\n self.object_list = object_list\n\n @property\n def object_list_paths(self):\n return set(\n self.object_list.values_list(\n 
self.related_pootle_path,\n flat=True))\n\n @property\n def all_pootle_paths(self):\n if self.context and not self.object_list:\n return set([self.context_path])\n elif self.object_list:\n parents = self.object_list_paths\n if self.context:\n parents.add(self.context_path)\n return parents\n return []\n\n @property\n def parents(self):\n \"\"\"calculate unit parents for cache update\"\"\"\n return Directory.objects.filter(\n pootle_path__in=self.get_parent_paths(self.all_pootle_paths))\n\n def get_parent_paths(self, pootle_paths):\n paths = set([\"/projects/\"])\n for pootle_path in pootle_paths:\n lang_code, proj_code, dir_path, __ = split_pootle_path(pootle_path)\n paths.add(\"/projects/%s/\" % proj_code)\n paths.add(\"/%s/\" % lang_code)\n paths.add(\"/%s/%s/\" % (lang_code, proj_code))\n dir_path_parts = dir_path.split(\"/\")\n for i, name in enumerate(dir_path_parts):\n if not name:\n continue\n paths.add(\n \"/%s/%s/%s/\"\n % (lang_code,\n proj_code,\n \"/\".join(dir_path_parts[:i + 1])))\n return paths\n\n @property\n def new_revision(self):\n return uuid.uuid4().hex\n\n @cached_property\n def content_type_id(self):\n return ContentType.objects.get_for_model(Directory).id\n\n def get_revisions(self, parents, keys=None):\n return Revision.objects.filter(\n content_type_id=self.content_type_id,\n key__in=keys or [\"\"],\n object_id__in=parents)\n\n def create_revisions(self, parents, keys=None):\n new_revision = self.new_revision\n for parent in parents:\n for key in keys or [\"\"]:\n yield Revision(\n content_type_id=self.content_type_id,\n object_id=parent,\n key=key,\n value=new_revision)\n\n def update(self, keys=None):\n parents = list(self.parents.values_list(\"id\", flat=True))\n revisions = self.get_revisions(parents, keys=keys)\n revisions.delete()\n Revision.objects.bulk_create(\n self.create_revisions(parents, keys=keys))\n\n\nclass UnitRevisionUpdater(RevisionUpdater):\n related_pootle_path = \"store__parent__pootle_path\"\n\n @property\n def context_path(self):\n return self.context.store.parent.pootle_path\n\n\nclass StoreRevisionUpdater(RevisionUpdater):\n related_pootle_path = \"parent__pootle_path\"\n\n @property\n def context_path(self):\n return self.context.parent.pootle_path\n\n\nclass DirectoryRevisionUpdater(RevisionUpdater):\n related_pootle_path = \"pootle_path\"\n\n @property\n def context_path(self):\n return self.context.pootle_path\n", "path": "pootle/apps/pootle_revision/utils.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport uuid\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.url_helpers import split_pootle_path\nfrom pootle_app.models import Directory\n\nfrom .models import Revision\n\n\nclass RevisionContext(object):\n\n def __init__(self, context):\n self.context = context\n\n @cached_property\n def content_type_id(self):\n return ContentType.objects.get_for_model(\n self.context._meta.model).id\n\n @property\n def revision_context(self):\n return self.context.revisions\n\n def get(self, key=None):\n \"\"\"get a revision from db or set one if not set\"\"\"\n if not self.revision_context:\n return \"\"\n return self.revision_context.filter(\n key=key).values_list(\"value\", flat=True).first() or \"\"\n\n def set(self, keys=None, value=None):\n \"\"\"get a revision from db or set one if not set\"\"\"\n self.revision_context.filter(key__in=keys).delete()\n if value:\n revisions = []\n for k in keys:\n revisions.append(\n Revision(\n content_type_id=self.content_type_id,\n object_id=self.context.pk,\n key=k,\n value=value))\n Revision.objects.bulk_create(revisions)\n\n\nclass DirectoryRevision(RevisionContext):\n pass\n\n\nclass LanguageRevision(RevisionContext):\n\n @property\n def revision_context(self):\n return self.context.directory.revisions\n\n\nclass ProjectRevision(RevisionContext):\n pass\n\n\nclass ProjectResourceRevision(RevisionContext):\n\n @property\n def revision_context(self):\n first_child = self.context.children.first()\n if not first_child:\n return\n return Directory.objects.get(\n pootle_path=\"/projects/%s/\"\n % split_pootle_path(first_child.pootle_path)[1]).revisions\n\n\nclass ProjectSetRevision(RevisionContext):\n\n @property\n def revision_context(self):\n first_project = self.context.children.first()\n if not first_project:\n return\n return first_project.directory.parent.revisions\n\n\nclass TPRevision(RevisionContext):\n pass\n\n\nclass RevisionUpdater(object):\n\n def __init__(self, context=None, object_list=None):\n self.context = context\n self.object_list = object_list\n\n @property\n def object_list_paths(self):\n return set(\n self.object_list.values_list(\n self.related_pootle_path,\n flat=True))\n\n @property\n def all_pootle_paths(self):\n if self.context and not self.object_list:\n return set([self.context_path])\n elif self.object_list:\n parents = self.object_list_paths\n if self.context:\n parents.add(self.context_path)\n return parents\n return []\n\n @property\n def parents(self):\n \"\"\"calculate unit parents for cache update\"\"\"\n return Directory.objects.filter(\n pootle_path__in=self.get_parent_paths(self.all_pootle_paths))\n\n def get_parent_paths(self, pootle_paths):\n paths = set([\"/projects/\"])\n for pootle_path in pootle_paths:\n lang_code, proj_code, dir_path, __ = split_pootle_path(pootle_path)\n paths.add(\"/projects/%s/\" % proj_code)\n paths.add(\"/%s/\" % lang_code)\n paths.add(\"/%s/%s/\" % (lang_code, proj_code))\n dir_path_parts = dir_path.split(\"/\")\n for i, name in enumerate(dir_path_parts):\n if not name:\n continue\n paths.add(\n \"/%s/%s/%s/\"\n % (lang_code,\n proj_code,\n \"/\".join(dir_path_parts[:i + 1])))\n return paths\n\n @property\n def new_revision(self):\n return uuid.uuid4().hex\n\n @cached_property\n def content_type_id(self):\n return ContentType.objects.get_for_model(Directory).id\n\n def get_revisions(self, 
parents, keys=None):\n return Revision.objects.filter(\n content_type_id=self.content_type_id,\n key__in=keys or [\"\"],\n object_id__in=parents)\n\n def create_revisions(self, parents, keys=None):\n new_revision = self.new_revision\n for parent in parents:\n for key in keys or [\"\"]:\n yield Revision(\n content_type_id=self.content_type_id,\n object_id=parent,\n key=key,\n value=new_revision)\n\n def update(self, keys=None):\n parents = list(self.parents.values_list(\"id\", flat=True))\n revisions = self.get_revisions(parents, keys=keys)\n # manually get the list of ids and delete those to prevent\n # django race condition\n revision_ids = list(revisions.values_list(\"id\", flat=True))\n revisions.filter(id__in=revision_ids).delete()\n Revision.objects.bulk_create(\n self.create_revisions(parents, keys=keys))\n\n\nclass UnitRevisionUpdater(RevisionUpdater):\n related_pootle_path = \"store__parent__pootle_path\"\n\n @property\n def context_path(self):\n return self.context.store.parent.pootle_path\n\n\nclass StoreRevisionUpdater(RevisionUpdater):\n related_pootle_path = \"parent__pootle_path\"\n\n @property\n def context_path(self):\n return self.context.parent.pootle_path\n\n\nclass DirectoryRevisionUpdater(RevisionUpdater):\n related_pootle_path = \"pootle_path\"\n\n @property\n def context_path(self):\n return self.context.pootle_path\n", "path": "pootle/apps/pootle_revision/utils.py"}]} | 2,061 | 177 |
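The duplicate-entry error is a race in the delete-then-recreate sequence: two concurrent updaters can both remove the same `(content_type, object_id, key)` revisions and then both bulk-insert them, tripping the unique constraint. The patch narrows that window by evaluating the primary keys before deleting; a commented restatement of the changed lines (comments are mine, not from the project):

```python
# inside RevisionUpdater.update(), per the diff above
revisions = self.get_revisions(parents, keys=keys)
# evaluate the queryset now so the DELETE targets a fixed list of primary keys
revision_ids = list(revisions.values_list("id", flat=True))
revisions.filter(id__in=revision_ids).delete()
# only then recreate the rows; concurrent updaters hitting the same keys is
# what produced the duplicate-entry IntegrityError reported in the issue
Revision.objects.bulk_create(self.create_revisions(parents, keys=keys))
```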
gh_patches_debug_17526 | rasdani/github-patches | git_diff | nonebot__nonebot2-1968 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: run_sync ignores context variables
### Operating System
Windows
### Python Version
3.11.0
### NoneBot Version
2.0.0rc4
### Adapter
-
### Protocol Client
-
### Problem Description
[run_sync](https://github.com/nonebot/nonebot2/blob/e98d28f3b4fdda2504ecc07318563ce202464b96/nonebot/utils.py#L114) ignores context variables, which can cause exceptions such as https://github.com/nonebot/nonebot2/issues/1966
### Steps to Reproduce
-
### Expected Result
Use ```copy_context``` and then ```ctx.run``` inside the executor
### Screenshots or Logs
-
--- END ISSUE ---
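The expected fix described above is to capture the caller's `contextvars` context and run the synchronous call inside it on the executor thread. A sketch of that variant of `run_sync` (a minimal illustration of the requested approach, not necessarily the exact patch that was merged):

```python
import asyncio
from contextvars import copy_context
from functools import partial, wraps

def run_sync(call):
    @wraps(call)
    async def _wrapper(*args, **kwargs):
        loop = asyncio.get_running_loop()
        pfunc = partial(call, *args, **kwargs)
        context = copy_context()  # snapshot the caller's ContextVar values
        # context.run(pfunc) executes the sync callable inside that snapshot
        return await loop.run_in_executor(None, partial(context.run, pfunc))
    return _wrapper
```

With this, a `ContextVar` set by the caller before awaiting the wrapped function remains visible inside it.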
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nonebot/utils.py`
Content:
```
1 """本模块包含了 NoneBot 的一些工具函数
2
3 FrontMatter:
4 sidebar_position: 8
5 description: nonebot.utils 模块
6 """
7
8 import re
9 import json
10 import asyncio
11 import inspect
12 import importlib
13 import dataclasses
14 from pathlib import Path
15 from functools import wraps, partial
16 from contextlib import asynccontextmanager
17 from typing_extensions import ParamSpec, get_args, get_origin
18 from typing import (
19 Any,
20 Type,
21 Tuple,
22 Union,
23 TypeVar,
24 Callable,
25 Optional,
26 Coroutine,
27 AsyncGenerator,
28 ContextManager,
29 overload,
30 )
31
32 from pydantic.typing import is_union, is_none_type
33
34 from nonebot.log import logger
35 from nonebot.typing import overrides
36
37 P = ParamSpec("P")
38 R = TypeVar("R")
39 T = TypeVar("T")
40 K = TypeVar("K")
41 V = TypeVar("V")
42
43
44 def escape_tag(s: str) -> str:
45 """用于记录带颜色日志时转义 `<tag>` 类型特殊标签
46
47 参考: [loguru color 标签](https://loguru.readthedocs.io/en/stable/api/logger.html#color)
48
49 参数:
50 s: 需要转义的字符串
51 """
52 return re.sub(r"</?((?:[fb]g\s)?[^<>\s]*)>", r"\\\g<0>", s)
53
54
55 def generic_check_issubclass(
56 cls: Any, class_or_tuple: Union[Type[Any], Tuple[Type[Any], ...]]
57 ) -> bool:
58 """检查 cls 是否是 class_or_tuple 中的一个类型子类。
59
60 特别的,如果 cls 是 `typing.Union` 或 `types.UnionType` 类型,
61 则会检查其中的类型是否是 class_or_tuple 中的一个类型子类。(None 会被忽略)
62 """
63 try:
64 return issubclass(cls, class_or_tuple)
65 except TypeError:
66 origin = get_origin(cls)
67 if is_union(origin):
68 return all(
69 is_none_type(type_) or generic_check_issubclass(type_, class_or_tuple)
70 for type_ in get_args(cls)
71 )
72 elif origin:
73 return issubclass(origin, class_or_tuple)
74 return False
75
76
77 def is_coroutine_callable(call: Callable[..., Any]) -> bool:
78 """检查 call 是否是一个 callable 协程函数"""
79 if inspect.isroutine(call):
80 return inspect.iscoroutinefunction(call)
81 if inspect.isclass(call):
82 return False
83 func_ = getattr(call, "__call__", None)
84 return inspect.iscoroutinefunction(func_)
85
86
87 def is_gen_callable(call: Callable[..., Any]) -> bool:
88 """检查 call 是否是一个生成器函数"""
89 if inspect.isgeneratorfunction(call):
90 return True
91 func_ = getattr(call, "__call__", None)
92 return inspect.isgeneratorfunction(func_)
93
94
95 def is_async_gen_callable(call: Callable[..., Any]) -> bool:
96 """检查 call 是否是一个异步生成器函数"""
97 if inspect.isasyncgenfunction(call):
98 return True
99 func_ = getattr(call, "__call__", None)
100 return inspect.isasyncgenfunction(func_)
101
102
103 def run_sync(call: Callable[P, R]) -> Callable[P, Coroutine[None, None, R]]:
104 """一个用于包装 sync function 为 async function 的装饰器
105
106 参数:
107 call: 被装饰的同步函数
108 """
109
110 @wraps(call)
111 async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
112 loop = asyncio.get_running_loop()
113 pfunc = partial(call, *args, **kwargs)
114 result = await loop.run_in_executor(None, pfunc)
115 return result
116
117 return _wrapper
118
119
120 @asynccontextmanager
121 async def run_sync_ctx_manager(
122 cm: ContextManager[T],
123 ) -> AsyncGenerator[T, None]:
124 """一个用于包装 sync context manager 为 async context manager 的执行函数"""
125 try:
126 yield await run_sync(cm.__enter__)()
127 except Exception as e:
128 ok = await run_sync(cm.__exit__)(type(e), e, None)
129 if not ok:
130 raise e
131 else:
132 await run_sync(cm.__exit__)(None, None, None)
133
134
135 @overload
136 async def run_coro_with_catch(
137 coro: Coroutine[Any, Any, T],
138 exc: Tuple[Type[Exception], ...],
139 ) -> Union[T, None]:
140 ...
141
142
143 @overload
144 async def run_coro_with_catch(
145 coro: Coroutine[Any, Any, T],
146 exc: Tuple[Type[Exception], ...],
147 return_on_err: R,
148 ) -> Union[T, R]:
149 ...
150
151
152 async def run_coro_with_catch(
153 coro: Coroutine[Any, Any, T],
154 exc: Tuple[Type[Exception], ...],
155 return_on_err: Optional[R] = None,
156 ) -> Optional[Union[T, R]]:
157 try:
158 return await coro
159 except exc:
160 return return_on_err
161
162
163 def get_name(obj: Any) -> str:
164 """获取对象的名称"""
165 if inspect.isfunction(obj) or inspect.isclass(obj):
166 return obj.__name__
167 return obj.__class__.__name__
168
169
170 def path_to_module_name(path: Path) -> str:
171 """转换路径为模块名"""
172 rel_path = path.resolve().relative_to(Path.cwd().resolve())
173 if rel_path.stem == "__init__":
174 return ".".join(rel_path.parts[:-1])
175 else:
176 return ".".join(rel_path.parts[:-1] + (rel_path.stem,))
177
178
179 def resolve_dot_notation(
180 obj_str: str, default_attr: str, default_prefix: Optional[str] = None
181 ) -> Any:
182 """解析并导入点分表示法的对象"""
183 modulename, _, cls = obj_str.partition(":")
184 if default_prefix is not None and modulename.startswith("~"):
185 modulename = default_prefix + modulename[1:]
186 module = importlib.import_module(modulename)
187 if not cls:
188 return getattr(module, default_attr)
189 instance = module
190 for attr_str in cls.split("."):
191 instance = getattr(instance, attr_str)
192 return instance
193
194
195 class DataclassEncoder(json.JSONEncoder):
196 """在JSON序列化 {re}`nonebot.adapters._message.Message` (List[Dataclass]) 时使用的 `JSONEncoder`"""
197
198 @overrides(json.JSONEncoder)
199 def default(self, o):
200 if dataclasses.is_dataclass(o):
201 return {f.name: getattr(o, f.name) for f in dataclasses.fields(o)}
202 return super().default(o)
203
204
205 def logger_wrapper(logger_name: str):
206 """用于打印 adapter 的日志。
207
208 参数:
209 logger_name: adapter 的名称
210
211 返回:
212 日志记录函数
213
214 - level: 日志等级
215 - message: 日志信息
216 - exception: 异常信息
217 """
218
219 def log(level: str, message: str, exception: Optional[Exception] = None):
220 logger.opt(colors=True, exception=exception).log(
221 level, f"<m>{escape_tag(logger_name)}</m> | {message}"
222 )
223
224 return log
225
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nonebot/utils.py b/nonebot/utils.py
--- a/nonebot/utils.py
+++ b/nonebot/utils.py
@@ -12,6 +12,7 @@
import importlib
import dataclasses
from pathlib import Path
+from contextvars import copy_context
from functools import wraps, partial
from contextlib import asynccontextmanager
from typing_extensions import ParamSpec, get_args, get_origin
@@ -111,7 +112,9 @@
async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
loop = asyncio.get_running_loop()
pfunc = partial(call, *args, **kwargs)
- result = await loop.run_in_executor(None, pfunc)
+ context = copy_context()
+ context_run = context.run
+ result = await loop.run_in_executor(None, context_run, pfunc)
return result
return _wrapper
| {"golden_diff": "diff --git a/nonebot/utils.py b/nonebot/utils.py\n--- a/nonebot/utils.py\n+++ b/nonebot/utils.py\n@@ -12,6 +12,7 @@\n import importlib\n import dataclasses\n from pathlib import Path\n+from contextvars import copy_context\n from functools import wraps, partial\n from contextlib import asynccontextmanager\n from typing_extensions import ParamSpec, get_args, get_origin\n@@ -111,7 +112,9 @@\n async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:\n loop = asyncio.get_running_loop()\n pfunc = partial(call, *args, **kwargs)\n- result = await loop.run_in_executor(None, pfunc)\n+ context = copy_context()\n+ context_run = context.run\n+ result = await loop.run_in_executor(None, context_run, pfunc)\n return result\n \n return _wrapper\n", "issue": "Bug: run_sync\u5ffd\u7565\u4e86\u4e0a\u4e0b\u6587\u53d8\u91cf\n### \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Python \u7248\u672c\n\n3.11.0\n\n### NoneBot \u7248\u672c\n\n2.0.0rc4\n\n### \u9002\u914d\u5668\n\n-\n\n### \u534f\u8bae\u7aef\n\n-\n\n### \u63cf\u8ff0\u95ee\u9898\n\n[run_sync](https://github.com/nonebot/nonebot2/blob/e98d28f3b4fdda2504ecc07318563ce202464b96/nonebot/utils.py#L114)\u5ffd\u7565\u4e86\u4e0a\u4e0b\u6587\u53d8\u91cf\uff0c\u53ef\u80fd\u4f1a\u5bfc\u81f4\u5f02\u5e38\uff0c\u6bd4\u5982https://github.com/nonebot/nonebot2/issues/1966\n\n### \u590d\u73b0\u6b65\u9aa4\n\n-\n\n### \u671f\u671b\u7684\u7ed3\u679c\n\n\u4f7f\u7528```copy_context```\u7136\u540e```ctx.run```\u8fdbexecutor\n\n### \u622a\u56fe\u6216\u65e5\u5fd7\n\n-\n", "before_files": [{"content": "\"\"\"\u672c\u6a21\u5757\u5305\u542b\u4e86 NoneBot \u7684\u4e00\u4e9b\u5de5\u5177\u51fd\u6570\n\nFrontMatter:\n sidebar_position: 8\n description: nonebot.utils \u6a21\u5757\n\"\"\"\n\nimport re\nimport json\nimport asyncio\nimport inspect\nimport importlib\nimport dataclasses\nfrom pathlib import Path\nfrom functools import wraps, partial\nfrom contextlib import asynccontextmanager\nfrom typing_extensions import ParamSpec, get_args, get_origin\nfrom typing import (\n Any,\n Type,\n Tuple,\n Union,\n TypeVar,\n Callable,\n Optional,\n Coroutine,\n AsyncGenerator,\n ContextManager,\n overload,\n)\n\nfrom pydantic.typing import is_union, is_none_type\n\nfrom nonebot.log import logger\nfrom nonebot.typing import overrides\n\nP = ParamSpec(\"P\")\nR = TypeVar(\"R\")\nT = TypeVar(\"T\")\nK = TypeVar(\"K\")\nV = TypeVar(\"V\")\n\n\ndef escape_tag(s: str) -> str:\n \"\"\"\u7528\u4e8e\u8bb0\u5f55\u5e26\u989c\u8272\u65e5\u5fd7\u65f6\u8f6c\u4e49 `<tag>` \u7c7b\u578b\u7279\u6b8a\u6807\u7b7e\n\n \u53c2\u8003: [loguru color \u6807\u7b7e](https://loguru.readthedocs.io/en/stable/api/logger.html#color)\n\n \u53c2\u6570:\n s: \u9700\u8981\u8f6c\u4e49\u7684\u5b57\u7b26\u4e32\n \"\"\"\n return re.sub(r\"</?((?:[fb]g\\s)?[^<>\\s]*)>\", r\"\\\\\\g<0>\", s)\n\n\ndef generic_check_issubclass(\n cls: Any, class_or_tuple: Union[Type[Any], Tuple[Type[Any], ...]]\n) -> bool:\n \"\"\"\u68c0\u67e5 cls \u662f\u5426\u662f class_or_tuple \u4e2d\u7684\u4e00\u4e2a\u7c7b\u578b\u5b50\u7c7b\u3002\n\n \u7279\u522b\u7684\uff0c\u5982\u679c cls \u662f `typing.Union` \u6216 `types.UnionType` \u7c7b\u578b\uff0c\n \u5219\u4f1a\u68c0\u67e5\u5176\u4e2d\u7684\u7c7b\u578b\u662f\u5426\u662f class_or_tuple \u4e2d\u7684\u4e00\u4e2a\u7c7b\u578b\u5b50\u7c7b\u3002\uff08None \u4f1a\u88ab\u5ffd\u7565\uff09\n \"\"\"\n try:\n return issubclass(cls, class_or_tuple)\n except TypeError:\n origin = get_origin(cls)\n if is_union(origin):\n return all(\n is_none_type(type_) or generic_check_issubclass(type_, class_or_tuple)\n for 
type_ in get_args(cls)\n )\n elif origin:\n return issubclass(origin, class_or_tuple)\n return False\n\n\ndef is_coroutine_callable(call: Callable[..., Any]) -> bool:\n \"\"\"\u68c0\u67e5 call \u662f\u5426\u662f\u4e00\u4e2a callable \u534f\u7a0b\u51fd\u6570\"\"\"\n if inspect.isroutine(call):\n return inspect.iscoroutinefunction(call)\n if inspect.isclass(call):\n return False\n func_ = getattr(call, \"__call__\", None)\n return inspect.iscoroutinefunction(func_)\n\n\ndef is_gen_callable(call: Callable[..., Any]) -> bool:\n \"\"\"\u68c0\u67e5 call \u662f\u5426\u662f\u4e00\u4e2a\u751f\u6210\u5668\u51fd\u6570\"\"\"\n if inspect.isgeneratorfunction(call):\n return True\n func_ = getattr(call, \"__call__\", None)\n return inspect.isgeneratorfunction(func_)\n\n\ndef is_async_gen_callable(call: Callable[..., Any]) -> bool:\n \"\"\"\u68c0\u67e5 call \u662f\u5426\u662f\u4e00\u4e2a\u5f02\u6b65\u751f\u6210\u5668\u51fd\u6570\"\"\"\n if inspect.isasyncgenfunction(call):\n return True\n func_ = getattr(call, \"__call__\", None)\n return inspect.isasyncgenfunction(func_)\n\n\ndef run_sync(call: Callable[P, R]) -> Callable[P, Coroutine[None, None, R]]:\n \"\"\"\u4e00\u4e2a\u7528\u4e8e\u5305\u88c5 sync function \u4e3a async function \u7684\u88c5\u9970\u5668\n\n \u53c2\u6570:\n call: \u88ab\u88c5\u9970\u7684\u540c\u6b65\u51fd\u6570\n \"\"\"\n\n @wraps(call)\n async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:\n loop = asyncio.get_running_loop()\n pfunc = partial(call, *args, **kwargs)\n result = await loop.run_in_executor(None, pfunc)\n return result\n\n return _wrapper\n\n\n@asynccontextmanager\nasync def run_sync_ctx_manager(\n cm: ContextManager[T],\n) -> AsyncGenerator[T, None]:\n \"\"\"\u4e00\u4e2a\u7528\u4e8e\u5305\u88c5 sync context manager \u4e3a async context manager \u7684\u6267\u884c\u51fd\u6570\"\"\"\n try:\n yield await run_sync(cm.__enter__)()\n except Exception as e:\n ok = await run_sync(cm.__exit__)(type(e), e, None)\n if not ok:\n raise e\n else:\n await run_sync(cm.__exit__)(None, None, None)\n\n\n@overload\nasync def run_coro_with_catch(\n coro: Coroutine[Any, Any, T],\n exc: Tuple[Type[Exception], ...],\n) -> Union[T, None]:\n ...\n\n\n@overload\nasync def run_coro_with_catch(\n coro: Coroutine[Any, Any, T],\n exc: Tuple[Type[Exception], ...],\n return_on_err: R,\n) -> Union[T, R]:\n ...\n\n\nasync def run_coro_with_catch(\n coro: Coroutine[Any, Any, T],\n exc: Tuple[Type[Exception], ...],\n return_on_err: Optional[R] = None,\n) -> Optional[Union[T, R]]:\n try:\n return await coro\n except exc:\n return return_on_err\n\n\ndef get_name(obj: Any) -> str:\n \"\"\"\u83b7\u53d6\u5bf9\u8c61\u7684\u540d\u79f0\"\"\"\n if inspect.isfunction(obj) or inspect.isclass(obj):\n return obj.__name__\n return obj.__class__.__name__\n\n\ndef path_to_module_name(path: Path) -> str:\n \"\"\"\u8f6c\u6362\u8def\u5f84\u4e3a\u6a21\u5757\u540d\"\"\"\n rel_path = path.resolve().relative_to(Path.cwd().resolve())\n if rel_path.stem == \"__init__\":\n return \".\".join(rel_path.parts[:-1])\n else:\n return \".\".join(rel_path.parts[:-1] + (rel_path.stem,))\n\n\ndef resolve_dot_notation(\n obj_str: str, default_attr: str, default_prefix: Optional[str] = None\n) -> Any:\n \"\"\"\u89e3\u6790\u5e76\u5bfc\u5165\u70b9\u5206\u8868\u793a\u6cd5\u7684\u5bf9\u8c61\"\"\"\n modulename, _, cls = obj_str.partition(\":\")\n if default_prefix is not None and modulename.startswith(\"~\"):\n modulename = default_prefix + modulename[1:]\n module = importlib.import_module(modulename)\n if not cls:\n return getattr(module, 
default_attr)\n instance = module\n for attr_str in cls.split(\".\"):\n instance = getattr(instance, attr_str)\n return instance\n\n\nclass DataclassEncoder(json.JSONEncoder):\n \"\"\"\u5728JSON\u5e8f\u5217\u5316 {re}`nonebot.adapters._message.Message` (List[Dataclass]) \u65f6\u4f7f\u7528\u7684 `JSONEncoder`\"\"\"\n\n @overrides(json.JSONEncoder)\n def default(self, o):\n if dataclasses.is_dataclass(o):\n return {f.name: getattr(o, f.name) for f in dataclasses.fields(o)}\n return super().default(o)\n\n\ndef logger_wrapper(logger_name: str):\n \"\"\"\u7528\u4e8e\u6253\u5370 adapter \u7684\u65e5\u5fd7\u3002\n\n \u53c2\u6570:\n logger_name: adapter \u7684\u540d\u79f0\n\n \u8fd4\u56de:\n \u65e5\u5fd7\u8bb0\u5f55\u51fd\u6570\n\n - level: \u65e5\u5fd7\u7b49\u7ea7\n - message: \u65e5\u5fd7\u4fe1\u606f\n - exception: \u5f02\u5e38\u4fe1\u606f\n \"\"\"\n\n def log(level: str, message: str, exception: Optional[Exception] = None):\n logger.opt(colors=True, exception=exception).log(\n level, f\"<m>{escape_tag(logger_name)}</m> | {message}\"\n )\n\n return log\n", "path": "nonebot/utils.py"}], "after_files": [{"content": "\"\"\"\u672c\u6a21\u5757\u5305\u542b\u4e86 NoneBot \u7684\u4e00\u4e9b\u5de5\u5177\u51fd\u6570\n\nFrontMatter:\n sidebar_position: 8\n description: nonebot.utils \u6a21\u5757\n\"\"\"\n\nimport re\nimport json\nimport asyncio\nimport inspect\nimport importlib\nimport dataclasses\nfrom pathlib import Path\nfrom contextvars import copy_context\nfrom functools import wraps, partial\nfrom contextlib import asynccontextmanager\nfrom typing_extensions import ParamSpec, get_args, get_origin\nfrom typing import (\n Any,\n Type,\n Tuple,\n Union,\n TypeVar,\n Callable,\n Optional,\n Coroutine,\n AsyncGenerator,\n ContextManager,\n overload,\n)\n\nfrom pydantic.typing import is_union, is_none_type\n\nfrom nonebot.log import logger\nfrom nonebot.typing import overrides\n\nP = ParamSpec(\"P\")\nR = TypeVar(\"R\")\nT = TypeVar(\"T\")\nK = TypeVar(\"K\")\nV = TypeVar(\"V\")\n\n\ndef escape_tag(s: str) -> str:\n \"\"\"\u7528\u4e8e\u8bb0\u5f55\u5e26\u989c\u8272\u65e5\u5fd7\u65f6\u8f6c\u4e49 `<tag>` \u7c7b\u578b\u7279\u6b8a\u6807\u7b7e\n\n \u53c2\u8003: [loguru color \u6807\u7b7e](https://loguru.readthedocs.io/en/stable/api/logger.html#color)\n\n \u53c2\u6570:\n s: \u9700\u8981\u8f6c\u4e49\u7684\u5b57\u7b26\u4e32\n \"\"\"\n return re.sub(r\"</?((?:[fb]g\\s)?[^<>\\s]*)>\", r\"\\\\\\g<0>\", s)\n\n\ndef generic_check_issubclass(\n cls: Any, class_or_tuple: Union[Type[Any], Tuple[Type[Any], ...]]\n) -> bool:\n \"\"\"\u68c0\u67e5 cls \u662f\u5426\u662f class_or_tuple \u4e2d\u7684\u4e00\u4e2a\u7c7b\u578b\u5b50\u7c7b\u3002\n\n \u7279\u522b\u7684\uff0c\u5982\u679c cls \u662f `typing.Union` \u6216 `types.UnionType` \u7c7b\u578b\uff0c\n \u5219\u4f1a\u68c0\u67e5\u5176\u4e2d\u7684\u7c7b\u578b\u662f\u5426\u662f class_or_tuple \u4e2d\u7684\u4e00\u4e2a\u7c7b\u578b\u5b50\u7c7b\u3002\uff08None \u4f1a\u88ab\u5ffd\u7565\uff09\n \"\"\"\n try:\n return issubclass(cls, class_or_tuple)\n except TypeError:\n origin = get_origin(cls)\n if is_union(origin):\n return all(\n is_none_type(type_) or generic_check_issubclass(type_, class_or_tuple)\n for type_ in get_args(cls)\n )\n elif origin:\n return issubclass(origin, class_or_tuple)\n return False\n\n\ndef is_coroutine_callable(call: Callable[..., Any]) -> bool:\n \"\"\"\u68c0\u67e5 call \u662f\u5426\u662f\u4e00\u4e2a callable \u534f\u7a0b\u51fd\u6570\"\"\"\n if inspect.isroutine(call):\n return inspect.iscoroutinefunction(call)\n if inspect.isclass(call):\n return False\n func_ 
= getattr(call, \"__call__\", None)\n return inspect.iscoroutinefunction(func_)\n\n\ndef is_gen_callable(call: Callable[..., Any]) -> bool:\n \"\"\"\u68c0\u67e5 call \u662f\u5426\u662f\u4e00\u4e2a\u751f\u6210\u5668\u51fd\u6570\"\"\"\n if inspect.isgeneratorfunction(call):\n return True\n func_ = getattr(call, \"__call__\", None)\n return inspect.isgeneratorfunction(func_)\n\n\ndef is_async_gen_callable(call: Callable[..., Any]) -> bool:\n \"\"\"\u68c0\u67e5 call \u662f\u5426\u662f\u4e00\u4e2a\u5f02\u6b65\u751f\u6210\u5668\u51fd\u6570\"\"\"\n if inspect.isasyncgenfunction(call):\n return True\n func_ = getattr(call, \"__call__\", None)\n return inspect.isasyncgenfunction(func_)\n\n\ndef run_sync(call: Callable[P, R]) -> Callable[P, Coroutine[None, None, R]]:\n \"\"\"\u4e00\u4e2a\u7528\u4e8e\u5305\u88c5 sync function \u4e3a async function \u7684\u88c5\u9970\u5668\n\n \u53c2\u6570:\n call: \u88ab\u88c5\u9970\u7684\u540c\u6b65\u51fd\u6570\n \"\"\"\n\n @wraps(call)\n async def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:\n loop = asyncio.get_running_loop()\n pfunc = partial(call, *args, **kwargs)\n context = copy_context()\n context_run = context.run\n result = await loop.run_in_executor(None, context_run, pfunc)\n return result\n\n return _wrapper\n\n\n@asynccontextmanager\nasync def run_sync_ctx_manager(\n cm: ContextManager[T],\n) -> AsyncGenerator[T, None]:\n \"\"\"\u4e00\u4e2a\u7528\u4e8e\u5305\u88c5 sync context manager \u4e3a async context manager \u7684\u6267\u884c\u51fd\u6570\"\"\"\n try:\n yield await run_sync(cm.__enter__)()\n except Exception as e:\n ok = await run_sync(cm.__exit__)(type(e), e, None)\n if not ok:\n raise e\n else:\n await run_sync(cm.__exit__)(None, None, None)\n\n\n@overload\nasync def run_coro_with_catch(\n coro: Coroutine[Any, Any, T],\n exc: Tuple[Type[Exception], ...],\n) -> Union[T, None]:\n ...\n\n\n@overload\nasync def run_coro_with_catch(\n coro: Coroutine[Any, Any, T],\n exc: Tuple[Type[Exception], ...],\n return_on_err: R,\n) -> Union[T, R]:\n ...\n\n\nasync def run_coro_with_catch(\n coro: Coroutine[Any, Any, T],\n exc: Tuple[Type[Exception], ...],\n return_on_err: Optional[R] = None,\n) -> Optional[Union[T, R]]:\n try:\n return await coro\n except exc:\n return return_on_err\n\n\ndef get_name(obj: Any) -> str:\n \"\"\"\u83b7\u53d6\u5bf9\u8c61\u7684\u540d\u79f0\"\"\"\n if inspect.isfunction(obj) or inspect.isclass(obj):\n return obj.__name__\n return obj.__class__.__name__\n\n\ndef path_to_module_name(path: Path) -> str:\n \"\"\"\u8f6c\u6362\u8def\u5f84\u4e3a\u6a21\u5757\u540d\"\"\"\n rel_path = path.resolve().relative_to(Path.cwd().resolve())\n if rel_path.stem == \"__init__\":\n return \".\".join(rel_path.parts[:-1])\n else:\n return \".\".join(rel_path.parts[:-1] + (rel_path.stem,))\n\n\ndef resolve_dot_notation(\n obj_str: str, default_attr: str, default_prefix: Optional[str] = None\n) -> Any:\n \"\"\"\u89e3\u6790\u5e76\u5bfc\u5165\u70b9\u5206\u8868\u793a\u6cd5\u7684\u5bf9\u8c61\"\"\"\n modulename, _, cls = obj_str.partition(\":\")\n if default_prefix is not None and modulename.startswith(\"~\"):\n modulename = default_prefix + modulename[1:]\n module = importlib.import_module(modulename)\n if not cls:\n return getattr(module, default_attr)\n instance = module\n for attr_str in cls.split(\".\"):\n instance = getattr(instance, attr_str)\n return instance\n\n\nclass DataclassEncoder(json.JSONEncoder):\n \"\"\"\u5728JSON\u5e8f\u5217\u5316 {re}`nonebot.adapters._message.Message` (List[Dataclass]) \u65f6\u4f7f\u7528\u7684 `JSONEncoder`\"\"\"\n\n 
@overrides(json.JSONEncoder)\n def default(self, o):\n if dataclasses.is_dataclass(o):\n return {f.name: getattr(o, f.name) for f in dataclasses.fields(o)}\n return super().default(o)\n\n\ndef logger_wrapper(logger_name: str):\n \"\"\"\u7528\u4e8e\u6253\u5370 adapter \u7684\u65e5\u5fd7\u3002\n\n \u53c2\u6570:\n logger_name: adapter \u7684\u540d\u79f0\n\n \u8fd4\u56de:\n \u65e5\u5fd7\u8bb0\u5f55\u51fd\u6570\n\n - level: \u65e5\u5fd7\u7b49\u7ea7\n - message: \u65e5\u5fd7\u4fe1\u606f\n - exception: \u5f02\u5e38\u4fe1\u606f\n \"\"\"\n\n def log(level: str, message: str, exception: Optional[Exception] = None):\n logger.opt(colors=True, exception=exception).log(\n level, f\"<m>{escape_tag(logger_name)}</m> | {message}\"\n )\n\n return log\n", "path": "nonebot/utils.py"}]} | 2,581 | 203 |
gh_patches_debug_14058 | rasdani/github-patches | git_diff | pyg-team__pytorch_geometric-8519 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CycleMotif lacks a label, and therefore does not support GNNExplainer.
### 🐛 Describe the bug
When running ./examples/explain/gnn_explainer_ba_shapes.py, if I replace the dataset:
```
dataset = ExplainerDataset(
graph_generator=BAGraph(num_nodes=300, num_edges=5),
motif_generator='house',
num_motifs=80,
transform=T.Constant(),
)
```
with
```
dataset = ExplainerDataset(
graph_generator=BAGraph(num_nodes=300, num_edges=5),
motif_generator=CycleMotif(num_nodes=6),
num_motifs=80,
transform=T.Constant(),
)
```
There is an error:
```
Traceback (most recent call last):
File "/home/stt/py_github_repo_read/pytorch_geometric/examples/explain/gnn_explainer_ba_shapes.py", line 46, in <module>
out_channels=dataset.num_classes).to(device)
File "/home/stt/py_github_repo_read/pytorch_geometric/torch_geometric/data/in_memory_dataset.py", line 90, in num_classes
return super().num_classes
File "/home/stt/py_github_repo_read/pytorch_geometric/torch_geometric/data/dataset.py", line 173, in num_classes
y = torch.cat([data.y for data in data_list if 'y' in data], dim=0)
RuntimeError: torch.cat(): expected a non-empty list of Tensors
```
The reason lies at line 23 in `./torch_geometric/datasets/motif_generator/cycle.py`:
```
structure = Data(
num_nodes=num_nodes,
edge_index=torch.stack([row, col], dim=0),
# TODO: lack of y label
)
```
It lacks the y label that is present in `./torch_geometric/datasets/motif_generator/house.py`:
```
structure = Data(
num_nodes=5,
edge_index=torch.tensor([
[0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4],
[1, 3, 4, 4, 2, 0, 1, 3, 2, 0, 0, 1],
]),
y=torch.tensor([0, 0, 1, 1, 2]),
)
```
According to the original GNNExplainer repository, the node labels for the cycle motif are all the same. Therefore, we only need to add `y=torch.tensor([0]*num_nodes)`.
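
A minimal sketch of that proposed addition in `cycle.py`, assuming the existing `num_nodes`, `row` and `col` already built by the generator:

```python
# Give every node of the cycle motif the same class label.
structure = Data(
    num_nodes=num_nodes,
    edge_index=torch.stack([row, col], dim=0),
    y=torch.tensor([0] * num_nodes),
)
```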
### Versions
PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.18 | packaged by conda-forge | (main, Aug 30 2023, 03:49:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.113.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   39 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          24
On-line CPU(s) list:             0-23
Vendor ID:                       GenuineIntel
Model name:                      13th Gen Intel(R) Core(TM) i7-13700KF
CPU family:                      6
Model:                           183
Thread(s) per core:              2
Core(s) per socket:              16
Socket(s):                       1
Stepping:                        1
CPU max MHz:                     5400.0000
CPU min MHz:                     800.0000
BogoMIPS:                        6835.20
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                  VT-x
L1d cache:                       640 KiB (16 instances)
L1i cache:                       768 KiB (16 instances)
L2 cache:                        24 MiB (10 instances)
L3 cache:                        30 MiB (1 instance)
NUMA node(s):                    1
NUMA node0 CPU(s):               0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] torch==2.1.0
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] torch 2.1.0 pypi_0 pypi
[conda] torchvision 0.16.0 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch_geometric/datasets/explainer_dataset.py`
Content:
```
1 from typing import Any, Callable, Dict, Optional, Union
2
3 import torch
4
5 from torch_geometric.data import InMemoryDataset
6 from torch_geometric.datasets.graph_generator import GraphGenerator
7 from torch_geometric.datasets.motif_generator import MotifGenerator
8 from torch_geometric.explain import Explanation
9
10
11 class ExplainerDataset(InMemoryDataset):
12 r"""Generates a synthetic dataset for evaluating explainabilty algorithms,
13 as described in the `"GNNExplainer: Generating Explanations for Graph
14 Neural Networks" <https://arxiv.org/abs/1903.03894>`__ paper.
15 The :class:`~torch_geometric.datasets.ExplainerDataset` creates synthetic
16 graphs coming from a
17 :class:`~torch_geometric.datasets.graph_generator.GraphGenerator`, and
18 randomly attaches :obj:`num_motifs` many motifs to it coming from a
19 :class:`~torch_geometric.datasets.graph_generator.MotifGenerator`.
20 Ground-truth node-level and edge-level explainabilty masks are given based
21 on whether nodes and edges are part of a certain motif or not.
22
23 For example, to generate a random Barabasi-Albert (BA) graph with 300
24 nodes, in which we want to randomly attach 80 :obj:`"house"` motifs, write:
25
26 .. code-block:: python
27
28 from torch_geometric.datasets import ExplainerDataset
29 from torch_geometric.datasets.graph_generator import BAGraph
30
31 dataset = ExplainerDataset(
32 graph_generator=BAGraph(num_nodes=300, num_edges=5),
33 motif_generator='house',
34 num_motifs=80,
35 )
36
37 .. note::
38
39 For an example of using :class:`ExplainerDataset`, see
40 `examples/explain/gnn_explainer_ba_shapes.py
41 <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/
42 /explain/gnn_explainer_ba_shapes.py>`_.
43
44 Args:
45 graph_generator (GraphGenerator or str): The graph generator to be
46 used, *e.g.*,
47 :class:`torch.geometric.datasets.graph_generator.BAGraph`
48 (or any string that automatically resolves to it).
49 motif_generator (MotifGenerator): The motif generator to be used,
50 *e.g.*,
51 :class:`torch_geometric.datasets.motif_generator.HouseMotif`
52 (or any string that automatically resolves to it).
53 num_motifs (int): The number of motifs to attach to the graph.
54 num_graphs (int, optional): The number of graphs to generate.
55 (default: :obj:`1`)
56 graph_generator_kwargs (Dict[str, Any], optional): Arguments passed to
57 the respective graph generator module in case it gets automatically
58 resolved. (default: :obj:`None`)
59 motif_generator_kwargs (Dict[str, Any], optional): Arguments passed to
60 the respective motif generator module in case it gets automatically
61 resolved. (default: :obj:`None`)
62 transform (callable, optional): A function/transform that takes in an
63 :obj:`torch_geometric.data.Data` object and returns a transformed
64 version. The data object will be transformed before every access.
65 (default: :obj:`None`)
66 """
67 def __init__(
68 self,
69 graph_generator: Union[GraphGenerator, str],
70 motif_generator: Union[MotifGenerator, str],
71 num_motifs: int,
72 num_graphs: int = 1,
73 graph_generator_kwargs: Optional[Dict[str, Any]] = None,
74 motif_generator_kwargs: Optional[Dict[str, Any]] = None,
75 transform: Optional[Callable] = None,
76 ):
77 super().__init__(root=None, transform=transform)
78
79 if num_motifs <= 0:
80 raise ValueError(f"At least one motif needs to be attached to the "
81 f"graph (got {num_motifs})")
82
83 self.graph_generator = GraphGenerator.resolve(
84 graph_generator,
85 **(graph_generator_kwargs or {}),
86 )
87 self.motif_generator = MotifGenerator.resolve(
88 motif_generator,
89 **(motif_generator_kwargs or {}),
90 )
91 self.num_motifs = num_motifs
92
93 # TODO (matthias) support on-the-fly graph generation.
94 data_list = [self.get_graph() for _ in range(num_graphs)]
95 self.data, self.slices = self.collate(data_list)
96
97 def get_graph(self) -> Explanation:
98 data = self.graph_generator()
99
100 edge_indices = [data.edge_index]
101 num_nodes = data.num_nodes
102 node_masks = [torch.zeros(data.num_nodes)]
103 edge_masks = [torch.zeros(data.num_edges)]
104 ys = [torch.zeros(num_nodes, dtype=torch.long)]
105
106 connecting_nodes = torch.randperm(num_nodes)[:self.num_motifs]
107 for i in connecting_nodes.tolist():
108 motif = self.motif_generator()
109
110 # Add motif to the graph.
111 edge_indices.append(motif.edge_index + num_nodes)
112 node_masks.append(torch.ones(motif.num_nodes))
113 edge_masks.append(torch.ones(motif.num_edges))
114
115 # Add random motif connection to the graph.
116 j = int(torch.randint(0, motif.num_nodes, (1, ))) + num_nodes
117 edge_indices.append(torch.tensor([[i, j], [j, i]]))
118 edge_masks.append(torch.zeros(2))
119
120 if 'y' in motif:
121 ys.append(motif.y + 1 if motif.y.min() == 0 else motif.y)
122
123 num_nodes += motif.num_nodes
124
125 return Explanation(
126 edge_index=torch.cat(edge_indices, dim=1),
127 y=torch.cat(ys, dim=0) if len(ys) > 1 else None,
128 edge_mask=torch.cat(edge_masks, dim=0),
129 node_mask=torch.cat(node_masks, dim=0),
130 )
131
132 def __repr__(self) -> str:
133 return (f'{self.__class__.__name__}({len(self)}, '
134 f'graph_generator={self.graph_generator}, '
135 f'motif_generator={self.motif_generator}, '
136 f'num_motifs={self.num_motifs})')
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torch_geometric/datasets/explainer_dataset.py b/torch_geometric/datasets/explainer_dataset.py
--- a/torch_geometric/datasets/explainer_dataset.py
+++ b/torch_geometric/datasets/explainer_dataset.py
@@ -119,12 +119,14 @@
if 'y' in motif:
ys.append(motif.y + 1 if motif.y.min() == 0 else motif.y)
+ else:
+ ys.append(torch.ones(motif.num_nodes, dtype=torch.long))
num_nodes += motif.num_nodes
return Explanation(
edge_index=torch.cat(edge_indices, dim=1),
- y=torch.cat(ys, dim=0) if len(ys) > 1 else None,
+ y=torch.cat(ys, dim=0),
edge_mask=torch.cat(edge_masks, dim=0),
node_mask=torch.cat(node_masks, dim=0),
)
| {"golden_diff": "diff --git a/torch_geometric/datasets/explainer_dataset.py b/torch_geometric/datasets/explainer_dataset.py\n--- a/torch_geometric/datasets/explainer_dataset.py\n+++ b/torch_geometric/datasets/explainer_dataset.py\n@@ -119,12 +119,14 @@\n \n if 'y' in motif:\n ys.append(motif.y + 1 if motif.y.min() == 0 else motif.y)\n+ else:\n+ ys.append(torch.ones(motif.num_nodes, dtype=torch.long))\n \n num_nodes += motif.num_nodes\n \n return Explanation(\n edge_index=torch.cat(edge_indices, dim=1),\n- y=torch.cat(ys, dim=0) if len(ys) > 1 else None,\n+ y=torch.cat(ys, dim=0),\n edge_mask=torch.cat(edge_masks, dim=0),\n node_mask=torch.cat(node_masks, dim=0),\n )\n", "issue": "CycleMotif lack of label, therefore do not support GNNExplainer.\n### \ud83d\udc1b Describe the bug\n\nwhen running ./examples/explain/gnn_explainer_ba_shapes.py, when I replace the dataset:\r\n```\r\ndataset = ExplainerDataset(\r\n graph_generator=BAGraph(num_nodes=300, num_edges=5),\r\n motif_generator='house',\r\n num_motifs=80,\r\n transform=T.Constant(),\r\n) \r\n```\r\nwith\r\n```\r\ndataset = ExplainerDataset(\r\n graph_generator=BAGraph(num_nodes=300, num_edges=5),\r\n motif_generator=CycleMotif(num_nodes=6), \r\n num_motifs=80,\r\n transform=T.Constant(),\r\n)\r\n```\r\nThere is an error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/stt/py_github_repo_read/pytorch_geometric/examples/explain/gnn_explainer_ba_shapes.py\", line 46, in <module>\r\n out_channels=dataset.num_classes).to(device)\r\n File \"/home/stt/py_github_repo_read/pytorch_geometric/torch_geometric/data/in_memory_dataset.py\", line 90, in num_classes\r\n return super().num_classes\r\n File \"/home/stt/py_github_repo_read/pytorch_geometric/torch_geometric/data/dataset.py\", line 173, in num_classes\r\n y = torch.cat([data.y for data in data_list if 'y' in data], dim=0)\r\nRuntimeError: torch.cat(): expected a non-empty list of Tensors\r\n```\r\nThe reason behind locate at line 23 in `./torch_geometric/datasets/motif_generator/cycle.py`\r\n```\r\nstructure = Data(\r\n num_nodes=num_nodes,\r\n edge_index=torch.stack([row, col], dim=0),\r\n# TODO: lack of y label\r\n )\r\n```\r\nlack of y label as in `./torch_geometric/datasets/motif_generator/house.py`\r\n```\r\nstructure = Data(\r\n num_nodes=5,\r\n edge_index=torch.tensor([\r\n [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4],\r\n [1, 3, 4, 4, 2, 0, 1, 3, 2, 0, 0, 1],\r\n ]),\r\n y=torch.tensor([0, 0, 1, 1, 2]),\r\n )\r\n```\r\nAccording to GNNExplainer original repository, for the cycle motif, the node labels are the same. 
Therefore, we only need to add `y=torch.tensor([0]*num_nodes)`\n\n### Versions\n\nPyTorch version: 2.1.0+cu121\r\nIs debug build: False\r\nCUDA used to build PyTorch: 12.1\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.22.1\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.9.18 | packaged by conda-forge | (main, Aug 30 2023, 03:49:32) [GCC 12.3.0] (64-bit runtime)\r\nPython platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: 11.8.89\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: \r\nGPU 0: NVIDIA GeForce RTX 3090\r\nGPU 1: NVIDIA GeForce RTX 3090\r\n\r\nNvidia driver version: 535.113.01\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\n\u67b6\u6784\uff1a x86_64\r\nCPU \u8fd0\u884c\u6a21\u5f0f\uff1a 32-bit, 64-bit\r\nAddress sizes: 39 bits physical, 48 bits virtual\r\n\u5b57\u8282\u5e8f\uff1a Little Endian\r\nCPU: 24\r\n\u5728\u7ebf CPU \u5217\u8868\uff1a 0-23\r\n\u5382\u5546 ID\uff1a GenuineIntel\r\n\u578b\u53f7\u540d\u79f0\uff1a 13th Gen Intel(R) Core(TM) i7-13700KF\r\nCPU \u7cfb\u5217\uff1a 6\r\n\u578b\u53f7\uff1a 183\r\n\u6bcf\u4e2a\u6838\u7684\u7ebf\u7a0b\u6570\uff1a 2\r\n\u6bcf\u4e2a\u5ea7\u7684\u6838\u6570\uff1a 16\r\n\u5ea7\uff1a 1\r\n\u6b65\u8fdb\uff1a 1\r\nCPU \u6700\u5927 MHz\uff1a 5400.0000\r\nCPU \u6700\u5c0f MHz\uff1a 800.0000\r\nBogoMIPS\uff1a 6835.20\r\n\u6807\u8bb0\uff1a fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities\r\n\u865a\u62df\u5316\uff1a VT-x\r\nL1d \u7f13\u5b58\uff1a 640 KiB (16 instances)\r\nL1i \u7f13\u5b58\uff1a 768 KiB (16 instances)\r\nL2 \u7f13\u5b58\uff1a 24 MiB (10 instances)\r\nL3 \u7f13\u5b58\uff1a 30 MiB (1 instance)\r\nNUMA \u8282\u70b9\uff1a 1\r\nNUMA \u8282\u70b90 CPU\uff1a 0-23\r\nVulnerability Gather data sampling: Not affected\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Not affected\r\nVulnerability Spec rstack overflow: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not 
affected\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.26.0\r\n[pip3] torch==2.1.0\r\n[pip3] torchvision==0.16.0\r\n[pip3] triton==2.1.0\r\n[conda] numpy 1.26.0 pypi_0 pypi\r\n[conda] torch 2.1.0 pypi_0 pypi\r\n[conda] torchvision 0.16.0 pypi_0 pypi\r\n[conda] triton 2.1.0 pypi_0 pypi\n", "before_files": [{"content": "from typing import Any, Callable, Dict, Optional, Union\n\nimport torch\n\nfrom torch_geometric.data import InMemoryDataset\nfrom torch_geometric.datasets.graph_generator import GraphGenerator\nfrom torch_geometric.datasets.motif_generator import MotifGenerator\nfrom torch_geometric.explain import Explanation\n\n\nclass ExplainerDataset(InMemoryDataset):\n r\"\"\"Generates a synthetic dataset for evaluating explainabilty algorithms,\n as described in the `\"GNNExplainer: Generating Explanations for Graph\n Neural Networks\" <https://arxiv.org/abs/1903.03894>`__ paper.\n The :class:`~torch_geometric.datasets.ExplainerDataset` creates synthetic\n graphs coming from a\n :class:`~torch_geometric.datasets.graph_generator.GraphGenerator`, and\n randomly attaches :obj:`num_motifs` many motifs to it coming from a\n :class:`~torch_geometric.datasets.graph_generator.MotifGenerator`.\n Ground-truth node-level and edge-level explainabilty masks are given based\n on whether nodes and edges are part of a certain motif or not.\n\n For example, to generate a random Barabasi-Albert (BA) graph with 300\n nodes, in which we want to randomly attach 80 :obj:`\"house\"` motifs, write:\n\n .. code-block:: python\n\n from torch_geometric.datasets import ExplainerDataset\n from torch_geometric.datasets.graph_generator import BAGraph\n\n dataset = ExplainerDataset(\n graph_generator=BAGraph(num_nodes=300, num_edges=5),\n motif_generator='house',\n num_motifs=80,\n )\n\n .. note::\n\n For an example of using :class:`ExplainerDataset`, see\n `examples/explain/gnn_explainer_ba_shapes.py\n <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/\n /explain/gnn_explainer_ba_shapes.py>`_.\n\n Args:\n graph_generator (GraphGenerator or str): The graph generator to be\n used, *e.g.*,\n :class:`torch.geometric.datasets.graph_generator.BAGraph`\n (or any string that automatically resolves to it).\n motif_generator (MotifGenerator): The motif generator to be used,\n *e.g.*,\n :class:`torch_geometric.datasets.motif_generator.HouseMotif`\n (or any string that automatically resolves to it).\n num_motifs (int): The number of motifs to attach to the graph.\n num_graphs (int, optional): The number of graphs to generate.\n (default: :obj:`1`)\n graph_generator_kwargs (Dict[str, Any], optional): Arguments passed to\n the respective graph generator module in case it gets automatically\n resolved. (default: :obj:`None`)\n motif_generator_kwargs (Dict[str, Any], optional): Arguments passed to\n the respective motif generator module in case it gets automatically\n resolved. (default: :obj:`None`)\n transform (callable, optional): A function/transform that takes in an\n :obj:`torch_geometric.data.Data` object and returns a transformed\n version. 
The data object will be transformed before every access.\n (default: :obj:`None`)\n \"\"\"\n def __init__(\n self,\n graph_generator: Union[GraphGenerator, str],\n motif_generator: Union[MotifGenerator, str],\n num_motifs: int,\n num_graphs: int = 1,\n graph_generator_kwargs: Optional[Dict[str, Any]] = None,\n motif_generator_kwargs: Optional[Dict[str, Any]] = None,\n transform: Optional[Callable] = None,\n ):\n super().__init__(root=None, transform=transform)\n\n if num_motifs <= 0:\n raise ValueError(f\"At least one motif needs to be attached to the \"\n f\"graph (got {num_motifs})\")\n\n self.graph_generator = GraphGenerator.resolve(\n graph_generator,\n **(graph_generator_kwargs or {}),\n )\n self.motif_generator = MotifGenerator.resolve(\n motif_generator,\n **(motif_generator_kwargs or {}),\n )\n self.num_motifs = num_motifs\n\n # TODO (matthias) support on-the-fly graph generation.\n data_list = [self.get_graph() for _ in range(num_graphs)]\n self.data, self.slices = self.collate(data_list)\n\n def get_graph(self) -> Explanation:\n data = self.graph_generator()\n\n edge_indices = [data.edge_index]\n num_nodes = data.num_nodes\n node_masks = [torch.zeros(data.num_nodes)]\n edge_masks = [torch.zeros(data.num_edges)]\n ys = [torch.zeros(num_nodes, dtype=torch.long)]\n\n connecting_nodes = torch.randperm(num_nodes)[:self.num_motifs]\n for i in connecting_nodes.tolist():\n motif = self.motif_generator()\n\n # Add motif to the graph.\n edge_indices.append(motif.edge_index + num_nodes)\n node_masks.append(torch.ones(motif.num_nodes))\n edge_masks.append(torch.ones(motif.num_edges))\n\n # Add random motif connection to the graph.\n j = int(torch.randint(0, motif.num_nodes, (1, ))) + num_nodes\n edge_indices.append(torch.tensor([[i, j], [j, i]]))\n edge_masks.append(torch.zeros(2))\n\n if 'y' in motif:\n ys.append(motif.y + 1 if motif.y.min() == 0 else motif.y)\n\n num_nodes += motif.num_nodes\n\n return Explanation(\n edge_index=torch.cat(edge_indices, dim=1),\n y=torch.cat(ys, dim=0) if len(ys) > 1 else None,\n edge_mask=torch.cat(edge_masks, dim=0),\n node_mask=torch.cat(node_masks, dim=0),\n )\n\n def __repr__(self) -> str:\n return (f'{self.__class__.__name__}({len(self)}, '\n f'graph_generator={self.graph_generator}, '\n f'motif_generator={self.motif_generator}, '\n f'num_motifs={self.num_motifs})')\n", "path": "torch_geometric/datasets/explainer_dataset.py"}], "after_files": [{"content": "from typing import Any, Callable, Dict, Optional, Union\n\nimport torch\n\nfrom torch_geometric.data import InMemoryDataset\nfrom torch_geometric.datasets.graph_generator import GraphGenerator\nfrom torch_geometric.datasets.motif_generator import MotifGenerator\nfrom torch_geometric.explain import Explanation\n\n\nclass ExplainerDataset(InMemoryDataset):\n r\"\"\"Generates a synthetic dataset for evaluating explainabilty algorithms,\n as described in the `\"GNNExplainer: Generating Explanations for Graph\n Neural Networks\" <https://arxiv.org/abs/1903.03894>`__ paper.\n The :class:`~torch_geometric.datasets.ExplainerDataset` creates synthetic\n graphs coming from a\n :class:`~torch_geometric.datasets.graph_generator.GraphGenerator`, and\n randomly attaches :obj:`num_motifs` many motifs to it coming from a\n :class:`~torch_geometric.datasets.graph_generator.MotifGenerator`.\n Ground-truth node-level and edge-level explainabilty masks are given based\n on whether nodes and edges are part of a certain motif or not.\n\n For example, to generate a random Barabasi-Albert (BA) graph with 300\n nodes, in 
which we want to randomly attach 80 :obj:`\"house\"` motifs, write:\n\n .. code-block:: python\n\n from torch_geometric.datasets import ExplainerDataset\n from torch_geometric.datasets.graph_generator import BAGraph\n\n dataset = ExplainerDataset(\n graph_generator=BAGraph(num_nodes=300, num_edges=5),\n motif_generator='house',\n num_motifs=80,\n )\n\n .. note::\n\n For an example of using :class:`ExplainerDataset`, see\n `examples/explain/gnn_explainer_ba_shapes.py\n <https://github.com/pyg-team/pytorch_geometric/blob/master/examples/\n /explain/gnn_explainer_ba_shapes.py>`_.\n\n Args:\n graph_generator (GraphGenerator or str): The graph generator to be\n used, *e.g.*,\n :class:`torch.geometric.datasets.graph_generator.BAGraph`\n (or any string that automatically resolves to it).\n motif_generator (MotifGenerator): The motif generator to be used,\n *e.g.*,\n :class:`torch_geometric.datasets.motif_generator.HouseMotif`\n (or any string that automatically resolves to it).\n num_motifs (int): The number of motifs to attach to the graph.\n num_graphs (int, optional): The number of graphs to generate.\n (default: :obj:`1`)\n graph_generator_kwargs (Dict[str, Any], optional): Arguments passed to\n the respective graph generator module in case it gets automatically\n resolved. (default: :obj:`None`)\n motif_generator_kwargs (Dict[str, Any], optional): Arguments passed to\n the respective motif generator module in case it gets automatically\n resolved. (default: :obj:`None`)\n transform (callable, optional): A function/transform that takes in an\n :obj:`torch_geometric.data.Data` object and returns a transformed\n version. The data object will be transformed before every access.\n (default: :obj:`None`)\n \"\"\"\n def __init__(\n self,\n graph_generator: Union[GraphGenerator, str],\n motif_generator: Union[MotifGenerator, str],\n num_motifs: int,\n num_graphs: int = 1,\n graph_generator_kwargs: Optional[Dict[str, Any]] = None,\n motif_generator_kwargs: Optional[Dict[str, Any]] = None,\n transform: Optional[Callable] = None,\n ):\n super().__init__(root=None, transform=transform)\n\n if num_motifs <= 0:\n raise ValueError(f\"At least one motif needs to be attached to the \"\n f\"graph (got {num_motifs})\")\n\n self.graph_generator = GraphGenerator.resolve(\n graph_generator,\n **(graph_generator_kwargs or {}),\n )\n self.motif_generator = MotifGenerator.resolve(\n motif_generator,\n **(motif_generator_kwargs or {}),\n )\n self.num_motifs = num_motifs\n\n # TODO (matthias) support on-the-fly graph generation.\n data_list = [self.get_graph() for _ in range(num_graphs)]\n self.data, self.slices = self.collate(data_list)\n\n def get_graph(self) -> Explanation:\n data = self.graph_generator()\n\n edge_indices = [data.edge_index]\n num_nodes = data.num_nodes\n node_masks = [torch.zeros(data.num_nodes)]\n edge_masks = [torch.zeros(data.num_edges)]\n ys = [torch.zeros(num_nodes, dtype=torch.long)]\n\n connecting_nodes = torch.randperm(num_nodes)[:self.num_motifs]\n for i in connecting_nodes.tolist():\n motif = self.motif_generator()\n\n # Add motif to the graph.\n edge_indices.append(motif.edge_index + num_nodes)\n node_masks.append(torch.ones(motif.num_nodes))\n edge_masks.append(torch.ones(motif.num_edges))\n\n # Add random motif connection to the graph.\n j = int(torch.randint(0, motif.num_nodes, (1, ))) + num_nodes\n edge_indices.append(torch.tensor([[i, j], [j, i]]))\n edge_masks.append(torch.zeros(2))\n\n if 'y' in motif:\n ys.append(motif.y + 1 if motif.y.min() == 0 else motif.y)\n else:\n 
ys.append(torch.ones(motif.num_nodes, dtype=torch.long))\n\n num_nodes += motif.num_nodes\n\n return Explanation(\n edge_index=torch.cat(edge_indices, dim=1),\n y=torch.cat(ys, dim=0),\n edge_mask=torch.cat(edge_masks, dim=0),\n node_mask=torch.cat(node_masks, dim=0),\n )\n\n def __repr__(self) -> str:\n return (f'{self.__class__.__name__}({len(self)}, '\n f'graph_generator={self.graph_generator}, '\n f'motif_generator={self.motif_generator}, '\n f'num_motifs={self.num_motifs})')\n", "path": "torch_geometric/datasets/explainer_dataset.py"}]} | 3,784 | 209 |
gh_patches_debug_23742 | rasdani/github-patches | git_diff | python-discord__bot-1556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use embed timestamp in mod pings off
When a mod turns off mod pings, a confirmation is sent to inform the user that their pings have successfully been turned off.
In this confirmation, we currently include the time at which the role is due to be re-applied; this time is in UTC.
I propose we refactor this part of the code to instead use an Embed with the timestamp field.
https://github.com/python-discord/bot/blob/ce819ade482e82ecbc474bce5fb8ac9dd8b37b40/bot/exts/moderation/modpings.py#L107
This would mean that the time would automatically get converted to the user's current time zone by Discord.
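
A rough sketch of what that confirmation could look like, assuming the expiry `duration` datetime computed in `off_command` and the existing `Colours`/`Icons` constants from `bot.constants`:

```python
# Discord renders an embed timestamp in each viewer's local time zone.
embed = Embed(timestamp=duration, colour=Colours.bright_green)
embed.set_footer(
    text="Moderators role has been removed until",
    icon_url=Icons.green_checkmark,
)
await ctx.send(embed=embed)
```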
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/exts/moderation/modpings.py`
Content:
```
1 import datetime
2 import logging
3
4 from async_rediscache import RedisCache
5 from dateutil.parser import isoparse
6 from discord import Member
7 from discord.ext.commands import Cog, Context, group, has_any_role
8
9 from bot.bot import Bot
10 from bot.constants import Emojis, Guild, MODERATION_ROLES, Roles
11 from bot.converters import Expiry
12 from bot.utils.scheduling import Scheduler
13
14 log = logging.getLogger(__name__)
15
16
17 class ModPings(Cog):
18 """Commands for a moderator to turn moderator pings on and off."""
19
20 # RedisCache[discord.Member.id, 'Naïve ISO 8601 string']
21 # The cache's keys are mods who have pings off.
22 # The cache's values are the times when the role should be re-applied to them, stored in ISO format.
23 pings_off_mods = RedisCache()
24
25 def __init__(self, bot: Bot):
26 self.bot = bot
27 self._role_scheduler = Scheduler(self.__class__.__name__)
28
29 self.guild = None
30 self.moderators_role = None
31
32 self.reschedule_task = self.bot.loop.create_task(self.reschedule_roles(), name="mod-pings-reschedule")
33
34 async def reschedule_roles(self) -> None:
35 """Reschedule moderators role re-apply times."""
36 await self.bot.wait_until_guild_available()
37 self.guild = self.bot.get_guild(Guild.id)
38 self.moderators_role = self.guild.get_role(Roles.moderators)
39
40 mod_team = self.guild.get_role(Roles.mod_team)
41 pings_on = self.moderators_role.members
42 pings_off = await self.pings_off_mods.to_dict()
43
44 log.trace("Applying the moderators role to the mod team where necessary.")
45 for mod in mod_team.members:
46 if mod in pings_on: # Make sure that on-duty mods aren't in the cache.
47 if mod in pings_off:
48 await self.pings_off_mods.delete(mod.id)
49 continue
50
51 # Keep the role off only for those in the cache.
52 if mod.id not in pings_off:
53 await self.reapply_role(mod)
54 else:
55 expiry = isoparse(pings_off[mod.id]).replace(tzinfo=None)
56 self._role_scheduler.schedule_at(expiry, mod.id, self.reapply_role(mod))
57
58 async def reapply_role(self, mod: Member) -> None:
59 """Reapply the moderator's role to the given moderator."""
60 log.trace(f"Re-applying role to mod with ID {mod.id}.")
61 await mod.add_roles(self.moderators_role, reason="Pings off period expired.")
62
63 @group(name='modpings', aliases=('modping',), invoke_without_command=True)
64 @has_any_role(*MODERATION_ROLES)
65 async def modpings_group(self, ctx: Context) -> None:
66 """Allow the removal and re-addition of the pingable moderators role."""
67 await ctx.send_help(ctx.command)
68
69 @modpings_group.command(name='off')
70 @has_any_role(*MODERATION_ROLES)
71 async def off_command(self, ctx: Context, duration: Expiry) -> None:
72 """
73 Temporarily removes the pingable moderators role for a set amount of time.
74
75 A unit of time should be appended to the duration.
76 Units (∗case-sensitive):
77 \u2003`y` - years
78 \u2003`m` - months∗
79 \u2003`w` - weeks
80 \u2003`d` - days
81 \u2003`h` - hours
82 \u2003`M` - minutes∗
83 \u2003`s` - seconds
84
85 Alternatively, an ISO 8601 timestamp can be provided for the duration.
86
87 The duration cannot be longer than 30 days.
88 """
89 duration: datetime.datetime
90 delta = duration - datetime.datetime.utcnow()
91 if delta > datetime.timedelta(days=30):
92 await ctx.send(":x: Cannot remove the role for longer than 30 days.")
93 return
94
95 mod = ctx.author
96
97 until_date = duration.replace(microsecond=0).isoformat() # Looks noisy with microseconds.
98 await mod.remove_roles(self.moderators_role, reason=f"Turned pings off until {until_date}.")
99
100 await self.pings_off_mods.set(mod.id, duration.isoformat())
101
102 # Allow rescheduling the task without cancelling it separately via the `on` command.
103 if mod.id in self._role_scheduler:
104 self._role_scheduler.cancel(mod.id)
105 self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))
106
107 await ctx.send(f"{Emojis.check_mark} Moderators role has been removed until {until_date}.")
108
109 @modpings_group.command(name='on')
110 @has_any_role(*MODERATION_ROLES)
111 async def on_command(self, ctx: Context) -> None:
112 """Re-apply the pingable moderators role."""
113 mod = ctx.author
114 if mod in self.moderators_role.members:
115 await ctx.send(":question: You already have the role.")
116 return
117
118 await mod.add_roles(self.moderators_role, reason="Pings off period canceled.")
119
120 await self.pings_off_mods.delete(mod.id)
121
122 # We assume the task exists. Lack of it may indicate a bug.
123 self._role_scheduler.cancel(mod.id)
124
125 await ctx.send(f"{Emojis.check_mark} Moderators role has been re-applied.")
126
127 def cog_unload(self) -> None:
128 """Cancel role tasks when the cog unloads."""
129 log.trace("Cog unload: canceling role tasks.")
130 self.reschedule_task.cancel()
131 self._role_scheduler.cancel_all()
132
133
134 def setup(bot: Bot) -> None:
135 """Load the ModPings cog."""
136 bot.add_cog(ModPings(bot))
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/exts/moderation/modpings.py b/bot/exts/moderation/modpings.py
--- a/bot/exts/moderation/modpings.py
+++ b/bot/exts/moderation/modpings.py
@@ -3,11 +3,11 @@
from async_rediscache import RedisCache
from dateutil.parser import isoparse
-from discord import Member
+from discord import Embed, Member
from discord.ext.commands import Cog, Context, group, has_any_role
from bot.bot import Bot
-from bot.constants import Emojis, Guild, MODERATION_ROLES, Roles
+from bot.constants import Colours, Emojis, Guild, Icons, MODERATION_ROLES, Roles
from bot.converters import Expiry
from bot.utils.scheduling import Scheduler
@@ -104,7 +104,9 @@
self._role_scheduler.cancel(mod.id)
self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))
- await ctx.send(f"{Emojis.check_mark} Moderators role has been removed until {until_date}.")
+ embed = Embed(timestamp=duration, colour=Colours.bright_green)
+ embed.set_footer(text="Moderators role has been removed until", icon_url=Icons.green_checkmark)
+ await ctx.send(embed=embed)
@modpings_group.command(name='on')
@has_any_role(*MODERATION_ROLES)
| {"golden_diff": "diff --git a/bot/exts/moderation/modpings.py b/bot/exts/moderation/modpings.py\n--- a/bot/exts/moderation/modpings.py\n+++ b/bot/exts/moderation/modpings.py\n@@ -3,11 +3,11 @@\n \n from async_rediscache import RedisCache\n from dateutil.parser import isoparse\n-from discord import Member\n+from discord import Embed, Member\n from discord.ext.commands import Cog, Context, group, has_any_role\n \n from bot.bot import Bot\n-from bot.constants import Emojis, Guild, MODERATION_ROLES, Roles\n+from bot.constants import Colours, Emojis, Guild, Icons, MODERATION_ROLES, Roles\n from bot.converters import Expiry\n from bot.utils.scheduling import Scheduler\n \n@@ -104,7 +104,9 @@\n self._role_scheduler.cancel(mod.id)\n self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))\n \n- await ctx.send(f\"{Emojis.check_mark} Moderators role has been removed until {until_date}.\")\n+ embed = Embed(timestamp=duration, colour=Colours.bright_green)\n+ embed.set_footer(text=\"Moderators role has been removed until\", icon_url=Icons.green_checkmark)\n+ await ctx.send(embed=embed)\n \n @modpings_group.command(name='on')\n @has_any_role(*MODERATION_ROLES)\n", "issue": "Use embed timestamp in mod pings off\nWhen a mods turns off mod pings, a confirmation is sent to inform the user that their pings have successfully been turned off.\r\n\r\nIn this confirmation, we currently include the time at which it is due to be sent, this time is in UTC.\r\n\r\nI propose we refactor this part of the code to instead use an Embed, with a the timestamp field.\r\nhttps://github.com/python-discord/bot/blob/ce819ade482e82ecbc474bce5fb8ac9dd8b37b40/bot/exts/moderation/modpings.py#L107\r\nThis would mean that the time would automatically get converted to the user's current time zone by Discord.\n", "before_files": [{"content": "import datetime\nimport logging\n\nfrom async_rediscache import RedisCache\nfrom dateutil.parser import isoparse\nfrom discord import Member\nfrom discord.ext.commands import Cog, Context, group, has_any_role\n\nfrom bot.bot import Bot\nfrom bot.constants import Emojis, Guild, MODERATION_ROLES, Roles\nfrom bot.converters import Expiry\nfrom bot.utils.scheduling import Scheduler\n\nlog = logging.getLogger(__name__)\n\n\nclass ModPings(Cog):\n \"\"\"Commands for a moderator to turn moderator pings on and off.\"\"\"\n\n # RedisCache[discord.Member.id, 'Na\u00efve ISO 8601 string']\n # The cache's keys are mods who have pings off.\n # The cache's values are the times when the role should be re-applied to them, stored in ISO format.\n pings_off_mods = RedisCache()\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self._role_scheduler = Scheduler(self.__class__.__name__)\n\n self.guild = None\n self.moderators_role = None\n\n self.reschedule_task = self.bot.loop.create_task(self.reschedule_roles(), name=\"mod-pings-reschedule\")\n\n async def reschedule_roles(self) -> None:\n \"\"\"Reschedule moderators role re-apply times.\"\"\"\n await self.bot.wait_until_guild_available()\n self.guild = self.bot.get_guild(Guild.id)\n self.moderators_role = self.guild.get_role(Roles.moderators)\n\n mod_team = self.guild.get_role(Roles.mod_team)\n pings_on = self.moderators_role.members\n pings_off = await self.pings_off_mods.to_dict()\n\n log.trace(\"Applying the moderators role to the mod team where necessary.\")\n for mod in mod_team.members:\n if mod in pings_on: # Make sure that on-duty mods aren't in the cache.\n if mod in pings_off:\n await self.pings_off_mods.delete(mod.id)\n continue\n\n 
# Keep the role off only for those in the cache.\n if mod.id not in pings_off:\n await self.reapply_role(mod)\n else:\n expiry = isoparse(pings_off[mod.id]).replace(tzinfo=None)\n self._role_scheduler.schedule_at(expiry, mod.id, self.reapply_role(mod))\n\n async def reapply_role(self, mod: Member) -> None:\n \"\"\"Reapply the moderator's role to the given moderator.\"\"\"\n log.trace(f\"Re-applying role to mod with ID {mod.id}.\")\n await mod.add_roles(self.moderators_role, reason=\"Pings off period expired.\")\n\n @group(name='modpings', aliases=('modping',), invoke_without_command=True)\n @has_any_role(*MODERATION_ROLES)\n async def modpings_group(self, ctx: Context) -> None:\n \"\"\"Allow the removal and re-addition of the pingable moderators role.\"\"\"\n await ctx.send_help(ctx.command)\n\n @modpings_group.command(name='off')\n @has_any_role(*MODERATION_ROLES)\n async def off_command(self, ctx: Context, duration: Expiry) -> None:\n \"\"\"\n Temporarily removes the pingable moderators role for a set amount of time.\n\n A unit of time should be appended to the duration.\n Units (\u2217case-sensitive):\n \\u2003`y` - years\n \\u2003`m` - months\u2217\n \\u2003`w` - weeks\n \\u2003`d` - days\n \\u2003`h` - hours\n \\u2003`M` - minutes\u2217\n \\u2003`s` - seconds\n\n Alternatively, an ISO 8601 timestamp can be provided for the duration.\n\n The duration cannot be longer than 30 days.\n \"\"\"\n duration: datetime.datetime\n delta = duration - datetime.datetime.utcnow()\n if delta > datetime.timedelta(days=30):\n await ctx.send(\":x: Cannot remove the role for longer than 30 days.\")\n return\n\n mod = ctx.author\n\n until_date = duration.replace(microsecond=0).isoformat() # Looks noisy with microseconds.\n await mod.remove_roles(self.moderators_role, reason=f\"Turned pings off until {until_date}.\")\n\n await self.pings_off_mods.set(mod.id, duration.isoformat())\n\n # Allow rescheduling the task without cancelling it separately via the `on` command.\n if mod.id in self._role_scheduler:\n self._role_scheduler.cancel(mod.id)\n self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))\n\n await ctx.send(f\"{Emojis.check_mark} Moderators role has been removed until {until_date}.\")\n\n @modpings_group.command(name='on')\n @has_any_role(*MODERATION_ROLES)\n async def on_command(self, ctx: Context) -> None:\n \"\"\"Re-apply the pingable moderators role.\"\"\"\n mod = ctx.author\n if mod in self.moderators_role.members:\n await ctx.send(\":question: You already have the role.\")\n return\n\n await mod.add_roles(self.moderators_role, reason=\"Pings off period canceled.\")\n\n await self.pings_off_mods.delete(mod.id)\n\n # We assume the task exists. 
Lack of it may indicate a bug.\n self._role_scheduler.cancel(mod.id)\n\n await ctx.send(f\"{Emojis.check_mark} Moderators role has been re-applied.\")\n\n def cog_unload(self) -> None:\n \"\"\"Cancel role tasks when the cog unloads.\"\"\"\n log.trace(\"Cog unload: canceling role tasks.\")\n self.reschedule_task.cancel()\n self._role_scheduler.cancel_all()\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the ModPings cog.\"\"\"\n bot.add_cog(ModPings(bot))\n", "path": "bot/exts/moderation/modpings.py"}], "after_files": [{"content": "import datetime\nimport logging\n\nfrom async_rediscache import RedisCache\nfrom dateutil.parser import isoparse\nfrom discord import Embed, Member\nfrom discord.ext.commands import Cog, Context, group, has_any_role\n\nfrom bot.bot import Bot\nfrom bot.constants import Colours, Emojis, Guild, Icons, MODERATION_ROLES, Roles\nfrom bot.converters import Expiry\nfrom bot.utils.scheduling import Scheduler\n\nlog = logging.getLogger(__name__)\n\n\nclass ModPings(Cog):\n \"\"\"Commands for a moderator to turn moderator pings on and off.\"\"\"\n\n # RedisCache[discord.Member.id, 'Na\u00efve ISO 8601 string']\n # The cache's keys are mods who have pings off.\n # The cache's values are the times when the role should be re-applied to them, stored in ISO format.\n pings_off_mods = RedisCache()\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self._role_scheduler = Scheduler(self.__class__.__name__)\n\n self.guild = None\n self.moderators_role = None\n\n self.reschedule_task = self.bot.loop.create_task(self.reschedule_roles(), name=\"mod-pings-reschedule\")\n\n async def reschedule_roles(self) -> None:\n \"\"\"Reschedule moderators role re-apply times.\"\"\"\n await self.bot.wait_until_guild_available()\n self.guild = self.bot.get_guild(Guild.id)\n self.moderators_role = self.guild.get_role(Roles.moderators)\n\n mod_team = self.guild.get_role(Roles.mod_team)\n pings_on = self.moderators_role.members\n pings_off = await self.pings_off_mods.to_dict()\n\n log.trace(\"Applying the moderators role to the mod team where necessary.\")\n for mod in mod_team.members:\n if mod in pings_on: # Make sure that on-duty mods aren't in the cache.\n if mod in pings_off:\n await self.pings_off_mods.delete(mod.id)\n continue\n\n # Keep the role off only for those in the cache.\n if mod.id not in pings_off:\n await self.reapply_role(mod)\n else:\n expiry = isoparse(pings_off[mod.id]).replace(tzinfo=None)\n self._role_scheduler.schedule_at(expiry, mod.id, self.reapply_role(mod))\n\n async def reapply_role(self, mod: Member) -> None:\n \"\"\"Reapply the moderator's role to the given moderator.\"\"\"\n log.trace(f\"Re-applying role to mod with ID {mod.id}.\")\n await mod.add_roles(self.moderators_role, reason=\"Pings off period expired.\")\n\n @group(name='modpings', aliases=('modping',), invoke_without_command=True)\n @has_any_role(*MODERATION_ROLES)\n async def modpings_group(self, ctx: Context) -> None:\n \"\"\"Allow the removal and re-addition of the pingable moderators role.\"\"\"\n await ctx.send_help(ctx.command)\n\n @modpings_group.command(name='off')\n @has_any_role(*MODERATION_ROLES)\n async def off_command(self, ctx: Context, duration: Expiry) -> None:\n \"\"\"\n Temporarily removes the pingable moderators role for a set amount of time.\n\n A unit of time should be appended to the duration.\n Units (\u2217case-sensitive):\n \\u2003`y` - years\n \\u2003`m` - months\u2217\n \\u2003`w` - weeks\n \\u2003`d` - days\n \\u2003`h` - hours\n \\u2003`M` - minutes\u2217\n \\u2003`s` - seconds\n\n 
Alternatively, an ISO 8601 timestamp can be provided for the duration.\n\n The duration cannot be longer than 30 days.\n \"\"\"\n duration: datetime.datetime\n delta = duration - datetime.datetime.utcnow()\n if delta > datetime.timedelta(days=30):\n await ctx.send(\":x: Cannot remove the role for longer than 30 days.\")\n return\n\n mod = ctx.author\n\n until_date = duration.replace(microsecond=0).isoformat() # Looks noisy with microseconds.\n await mod.remove_roles(self.moderators_role, reason=f\"Turned pings off until {until_date}.\")\n\n await self.pings_off_mods.set(mod.id, duration.isoformat())\n\n # Allow rescheduling the task without cancelling it separately via the `on` command.\n if mod.id in self._role_scheduler:\n self._role_scheduler.cancel(mod.id)\n self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))\n\n embed = Embed(timestamp=duration, colour=Colours.bright_green)\n embed.set_footer(text=\"Moderators role has been removed until\", icon_url=Icons.green_checkmark)\n await ctx.send(embed=embed)\n\n @modpings_group.command(name='on')\n @has_any_role(*MODERATION_ROLES)\n async def on_command(self, ctx: Context) -> None:\n \"\"\"Re-apply the pingable moderators role.\"\"\"\n mod = ctx.author\n if mod in self.moderators_role.members:\n await ctx.send(\":question: You already have the role.\")\n return\n\n await mod.add_roles(self.moderators_role, reason=\"Pings off period canceled.\")\n\n await self.pings_off_mods.delete(mod.id)\n\n # We assume the task exists. Lack of it may indicate a bug.\n self._role_scheduler.cancel(mod.id)\n\n await ctx.send(f\"{Emojis.check_mark} Moderators role has been re-applied.\")\n\n def cog_unload(self) -> None:\n \"\"\"Cancel role tasks when the cog unloads.\"\"\"\n log.trace(\"Cog unload: canceling role tasks.\")\n self.reschedule_task.cancel()\n self._role_scheduler.cancel_all()\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the ModPings cog.\"\"\"\n bot.add_cog(ModPings(bot))\n", "path": "bot/exts/moderation/modpings.py"}]} | 2,018 | 313 |
gh_patches_debug_17596 | rasdani/github-patches | git_diff | streamlink__streamlink-4517 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SegmentedStreamWriter.close() does not reliably finish before CLI exits (race condition)
### Checklist
- [X] This is a bug report and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest build from the master branch
### Description
### Background
I was doing some work on the latest `master` branch, adding a feature to Segmented/HLS streams that requires some cleanup, so I added the cleanup at the end of `SegmentedStreamWriter.close()`, like:
```python
self.closed = True
self.reader.close()
self.executor.shutdown(wait=True, cancel_futures=True)
__my_extra_cleanup()
```
And I found that my cleanup code is not always executed when HLS streams end normally.
### Problem
When an HLS stream ends normally, the whole shutdown process is triggered by the last line of `SegmentedStreamWriter.run()`, `self.close()`. `SegmentedStreamWriter.close()` then calls `SegmentedStreamReader.close()`, the iteration loop of `stream_cli.main:read_stream()` reaches its end, and `main()` exits.
The problem is that `SegmentedStreamWriter.run() -> SegmentedStreamWriter.close()` runs in a separate thread, which means `SegmentedStreamWriter.close()` cannot reliably finish its work before the main thread exits.
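In miniature, the race and a possible fix look like this (a toy model, not streamlink code: `Writer`, `Reader`, the sleep and the timeout are stand-ins; the join-with-timeout mirrors the approach taken in the patch shown further below):
```python
import time
from threading import Thread, current_thread


class Writer(Thread):
    """Stand-in for SegmentedStreamWriter: close() performs slow cleanup."""

    def close(self):
        time.sleep(1)  # simulate executor shutdown / extra cleanup
        print("writer cleanup finished")

    def run(self):
        print("stream ended")
        self.close()  # shutdown is triggered from inside the writer thread


class Reader:
    """Stand-in for SegmentedStreamReader."""

    def __init__(self):
        self.writer = Writer(daemon=True)

    def open(self):
        self.writer.start()

    def close(self):
        # Joining the writer (unless close() runs on the writer thread itself)
        # guarantees its cleanup has finished before the main thread exits.
        if current_thread() is not self.writer:
            self.writer.join(timeout=5)


reader = Reader()
reader.open()
reader.close()  # without the join, the daemon thread's cleanup may never finish
```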
### To reproduce
To reliably trigger the problem, add a sleep to the original `SegmentedStreamWriter.close()`, like:
```python
self.closed = True
self.reader.close()
self.executor.shutdown(wait=True, cancel_futures=True)
time.sleep(3)
log.debug("SegmentedStreamWriter.close() ends")
```
Then run the CLI with a short HLS stream; the `SegmentedStreamWriter.close() ends` message never appears.
### Debug log
```text
......
[cli][info] Stream ended
[cli][info] Closing currently open stream...
```
### Expected result
```text
......
[cli][info] Stream ended
[cli][info] Closing currently open stream...
[stream.segmented][debug] SegmentedStreamWriter.close() ends
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/stream/segmented.py`
Content:
```
1 import logging
2 import queue
3 from concurrent import futures
4 from concurrent.futures import Future, ThreadPoolExecutor
5 from sys import version_info
6 from threading import Event, Thread
7 from typing import Any, Optional
8
9 from streamlink.buffers import RingBuffer
10 from streamlink.stream.stream import StreamIO
11
12 log = logging.getLogger(__name__)
13
14
15 class CompatThreadPoolExecutor(ThreadPoolExecutor):
16 if version_info < (3, 9):
17 def shutdown(self, wait=True, cancel_futures=False): # pragma: no cover
18 with self._shutdown_lock:
19 self._shutdown = True
20 if cancel_futures:
21 # Drain all work items from the queue, and then cancel their
22 # associated futures.
23 while True:
24 try:
25 work_item = self._work_queue.get_nowait()
26 except queue.Empty:
27 break
28 if work_item is not None:
29 work_item.future.cancel()
30
31 # Send a wake-up to prevent threads calling
32 # _work_queue.get(block=True) from permanently blocking.
33 self._work_queue.put(None)
34 if wait:
35 for t in self._threads:
36 t.join()
37
38
39 class SegmentedStreamWorker(Thread):
40 """The general worker thread.
41
42 This thread is responsible for queueing up segments in the
43 writer thread.
44 """
45
46 def __init__(self, reader, **kwargs):
47 self.closed = False
48 self.reader = reader
49 self.writer = reader.writer
50 self.stream = reader.stream
51 self.session = reader.session
52
53 self._wait = Event()
54
55 super().__init__(daemon=True, name=f"Thread-{self.__class__.__name__}")
56
57 def close(self):
58 """Shuts down the thread."""
59 if self.closed: # pragma: no cover
60 return
61
62 log.debug("Closing worker thread")
63
64 self.closed = True
65 self._wait.set()
66
67 def wait(self, time):
68 """Pauses the thread for a specified time.
69
70 Returns False if interrupted by another thread and True if the
71 time runs out normally.
72 """
73 return not self._wait.wait(time)
74
75 def iter_segments(self):
76 """The iterator that generates segments for the worker thread.
77
78 Should be overridden by the inheriting class.
79 """
80 return
81 yield
82
83 def run(self):
84 for segment in self.iter_segments():
85 if self.closed: # pragma: no cover
86 break
87 self.writer.put(segment)
88
89 # End of stream, tells the writer to exit
90 self.writer.put(None)
91 self.close()
92
93
94 class SegmentedStreamWriter(Thread):
95 """The writer thread.
96
97 This thread is responsible for fetching segments, processing them
98 and finally writing the data to the buffer.
99 """
100
101 def __init__(self, reader, size=20, retries=None, threads=None, timeout=None):
102 self.closed = False
103 self.reader = reader
104 self.stream = reader.stream
105 self.session = reader.session
106
107 if not retries:
108 retries = self.session.options.get("stream-segment-attempts")
109
110 if not threads:
111 threads = self.session.options.get("stream-segment-threads")
112
113 if not timeout:
114 timeout = self.session.options.get("stream-segment-timeout")
115
116 self.retries = retries
117 self.timeout = timeout
118 self.threads = threads
119 self.executor = CompatThreadPoolExecutor(max_workers=self.threads)
120 self.futures = queue.Queue(size)
121
122 super().__init__(daemon=True, name=f"Thread-{self.__class__.__name__}")
123
124 def close(self):
125 """Shuts down the thread, its executor and closes the reader (worker thread and buffer)."""
126 if self.closed: # pragma: no cover
127 return
128
129 log.debug("Closing writer thread")
130
131 self.closed = True
132 self.reader.close()
133 self.executor.shutdown(wait=True, cancel_futures=True)
134
135 def put(self, segment):
136 """Adds a segment to the download pool and write queue."""
137 if self.closed: # pragma: no cover
138 return
139
140 if segment is None:
141 future = None
142 else:
143 future = self.executor.submit(self.fetch, segment, retries=self.retries)
144
145 self.queue(segment, future)
146
147 def queue(self, segment: Any, future: Optional[Future], *data):
148 """Puts values into a queue but aborts if this thread is closed."""
149 while not self.closed: # pragma: no branch
150 try:
151 self._futures_put((segment, future, *data))
152 return
153 except queue.Full: # pragma: no cover
154 continue
155
156 def _futures_put(self, item):
157 self.futures.put(item, block=True, timeout=1)
158
159 def _futures_get(self):
160 return self.futures.get(block=True, timeout=0.5)
161
162 @staticmethod
163 def _future_result(future: Future):
164 return future.result(timeout=0.5)
165
166 def fetch(self, segment):
167 """Fetches a segment.
168
169 Should be overridden by the inheriting class.
170 """
171 pass
172
173 def write(self, segment, result, *data):
174 """Writes a segment to the buffer.
175
176 Should be overridden by the inheriting class.
177 """
178 pass
179
180 def run(self):
181 while not self.closed:
182 try:
183 segment, future, *data = self._futures_get()
184 except queue.Empty: # pragma: no cover
185 continue
186
187 # End of stream
188 if future is None:
189 break
190
191 while not self.closed: # pragma: no branch
192 try:
193 result = self._future_result(future)
194 except futures.TimeoutError: # pragma: no cover
195 continue
196 except futures.CancelledError: # pragma: no cover
197 break
198
199 if result is not None: # pragma: no branch
200 self.write(segment, result, *data)
201
202 break
203
204 self.close()
205
206
207 class SegmentedStreamReader(StreamIO):
208 __worker__ = SegmentedStreamWorker
209 __writer__ = SegmentedStreamWriter
210
211 def __init__(self, stream, timeout=None):
212 super().__init__()
213 self.session = stream.session
214 self.stream = stream
215 self.timeout = timeout or self.session.options.get("stream-timeout")
216
217 buffer_size = self.session.get_option("ringbuffer-size")
218 self.buffer = RingBuffer(buffer_size)
219 self.writer = self.__writer__(self)
220 self.worker = self.__worker__(self)
221
222 def open(self):
223 self.writer.start()
224 self.worker.start()
225
226 def close(self):
227 self.worker.close()
228 self.writer.close()
229 self.buffer.close()
230
231 def read(self, size):
232 return self.buffer.read(
233 size,
234 block=self.writer.is_alive(),
235 timeout=self.timeout
236 )
237
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/stream/segmented.py b/src/streamlink/stream/segmented.py
--- a/src/streamlink/stream/segmented.py
+++ b/src/streamlink/stream/segmented.py
@@ -3,7 +3,7 @@
 from concurrent import futures
 from concurrent.futures import Future, ThreadPoolExecutor
 from sys import version_info
-from threading import Event, Thread
+from threading import Event, Thread, current_thread
 from typing import Any, Optional
 
 from streamlink.buffers import RingBuffer
@@ -228,6 +228,12 @@
         self.writer.close()
         self.buffer.close()
 
+        current = current_thread()
+        if current is not self.worker:  # pragma: no branch
+            self.worker.join(timeout=self.timeout)
+        if current is not self.writer:  # pragma: no branch
+            self.writer.join(timeout=self.timeout)
+
     def read(self, size):
         return self.buffer.read(
             size,
| {"golden_diff": "diff --git a/src/streamlink/stream/segmented.py b/src/streamlink/stream/segmented.py\n--- a/src/streamlink/stream/segmented.py\n+++ b/src/streamlink/stream/segmented.py\n@@ -3,7 +3,7 @@\n from concurrent import futures\n from concurrent.futures import Future, ThreadPoolExecutor\n from sys import version_info\n-from threading import Event, Thread\n+from threading import Event, Thread, current_thread\n from typing import Any, Optional\n \n from streamlink.buffers import RingBuffer\n@@ -228,6 +228,12 @@\n self.writer.close()\n self.buffer.close()\n \n+ current = current_thread()\n+ if current is not self.worker: # pragma: no branch\n+ self.worker.join(timeout=self.timeout)\n+ if current is not self.writer: # pragma: no branch\n+ self.writer.join(timeout=self.timeout)\n+\n def read(self, size):\n return self.buffer.read(\n size,\n", "issue": "SegmentedStreamWriter.close() does not reliably finish before CLI exits (race condition)\n### Checklist\r\n\r\n- [X] This is a bug report and not a different kind of issue\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed bug reports](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22bug%22)\r\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\n\r\nLatest build from the master branch\r\n\r\n### Description\r\n\r\n### Background\r\nI was doing some works on latest `master` branch, adding some feature on Segmented/HLS streams, and it requires some cleanup, so I added them in the end of `SegmentedStreamWriter.close()`, like\r\n\r\n```python\r\n self.closed = True\r\n self.reader.close()\r\n self.executor.shutdown(wait=True, cancel_futures=True)\r\n\r\n __my_extra_cleanup()\r\n```\r\n\r\nAnd I found my cleanup code does not always being executed, when HLS streams end normally.\r\n\r\n### Problem\r\nWhen an HLS stream ends normally, the whole shutdown process is triggered by the last line of `SegmentedStreamWriter.run()`, `self.close()`. 
Then `SegmentedStreamWriter.close()` calls `SegmentedStreamReader.close()`, then the iteration loop of `stream_cli.main:read_stream()` will reach its end, then `main()` exits.\r\n\r\nThe problem is, `SegmentedStreamWriter.run() -> SegmentedStreamWriter.close()` was run in a separated thread, which means `SegmentedStreamWriter.close()` cannot reliably finish its work before main thread exits.\r\n\r\n### To reproduce\r\nTo reliably trigger the problem, adding a sleep to original `SegmentedStreamWriter.close()`, like\r\n\r\n```python\r\n self.closed = True\r\n self.reader.close()\r\n self.executor.shutdown(wait=True, cancel_futures=True)\r\n\r\n time.sleep(3)\r\n log.debug(\"SegmentedStreamWriter.close() ends\")\r\n```\r\n\r\nThen run the CLI with a short HLS stream, the `SegmentedStreamWriter.close() ends` message never appears.\r\n\r\n\r\n### Debug log\r\n\r\n```text\r\n......\r\n\r\n[cli][info] Stream ended\r\n[cli][info] Closing currently open stream...\r\n```\r\n\r\n### Expected result\r\n```text\r\n......\r\n\r\n[cli][info] Stream ended\r\n[cli][info] Closing currently open stream...\r\n[stream.segmented][debug] SegmentedStreamWriter.close() ends\r\n```\r\n\n", "before_files": [{"content": "import logging\nimport queue\nfrom concurrent import futures\nfrom concurrent.futures import Future, ThreadPoolExecutor\nfrom sys import version_info\nfrom threading import Event, Thread\nfrom typing import Any, Optional\n\nfrom streamlink.buffers import RingBuffer\nfrom streamlink.stream.stream import StreamIO\n\nlog = logging.getLogger(__name__)\n\n\nclass CompatThreadPoolExecutor(ThreadPoolExecutor):\n if version_info < (3, 9):\n def shutdown(self, wait=True, cancel_futures=False): # pragma: no cover\n with self._shutdown_lock:\n self._shutdown = True\n if cancel_futures:\n # Drain all work items from the queue, and then cancel their\n # associated futures.\n while True:\n try:\n work_item = self._work_queue.get_nowait()\n except queue.Empty:\n break\n if work_item is not None:\n work_item.future.cancel()\n\n # Send a wake-up to prevent threads calling\n # _work_queue.get(block=True) from permanently blocking.\n self._work_queue.put(None)\n if wait:\n for t in self._threads:\n t.join()\n\n\nclass SegmentedStreamWorker(Thread):\n \"\"\"The general worker thread.\n\n This thread is responsible for queueing up segments in the\n writer thread.\n \"\"\"\n\n def __init__(self, reader, **kwargs):\n self.closed = False\n self.reader = reader\n self.writer = reader.writer\n self.stream = reader.stream\n self.session = reader.session\n\n self._wait = Event()\n\n super().__init__(daemon=True, name=f\"Thread-{self.__class__.__name__}\")\n\n def close(self):\n \"\"\"Shuts down the thread.\"\"\"\n if self.closed: # pragma: no cover\n return\n\n log.debug(\"Closing worker thread\")\n\n self.closed = True\n self._wait.set()\n\n def wait(self, time):\n \"\"\"Pauses the thread for a specified time.\n\n Returns False if interrupted by another thread and True if the\n time runs out normally.\n \"\"\"\n return not self._wait.wait(time)\n\n def iter_segments(self):\n \"\"\"The iterator that generates segments for the worker thread.\n\n Should be overridden by the inheriting class.\n \"\"\"\n return\n yield\n\n def run(self):\n for segment in self.iter_segments():\n if self.closed: # pragma: no cover\n break\n self.writer.put(segment)\n\n # End of stream, tells the writer to exit\n self.writer.put(None)\n self.close()\n\n\nclass SegmentedStreamWriter(Thread):\n \"\"\"The writer thread.\n\n This thread is responsible for 
fetching segments, processing them\n and finally writing the data to the buffer.\n \"\"\"\n\n def __init__(self, reader, size=20, retries=None, threads=None, timeout=None):\n self.closed = False\n self.reader = reader\n self.stream = reader.stream\n self.session = reader.session\n\n if not retries:\n retries = self.session.options.get(\"stream-segment-attempts\")\n\n if not threads:\n threads = self.session.options.get(\"stream-segment-threads\")\n\n if not timeout:\n timeout = self.session.options.get(\"stream-segment-timeout\")\n\n self.retries = retries\n self.timeout = timeout\n self.threads = threads\n self.executor = CompatThreadPoolExecutor(max_workers=self.threads)\n self.futures = queue.Queue(size)\n\n super().__init__(daemon=True, name=f\"Thread-{self.__class__.__name__}\")\n\n def close(self):\n \"\"\"Shuts down the thread, its executor and closes the reader (worker thread and buffer).\"\"\"\n if self.closed: # pragma: no cover\n return\n\n log.debug(\"Closing writer thread\")\n\n self.closed = True\n self.reader.close()\n self.executor.shutdown(wait=True, cancel_futures=True)\n\n def put(self, segment):\n \"\"\"Adds a segment to the download pool and write queue.\"\"\"\n if self.closed: # pragma: no cover\n return\n\n if segment is None:\n future = None\n else:\n future = self.executor.submit(self.fetch, segment, retries=self.retries)\n\n self.queue(segment, future)\n\n def queue(self, segment: Any, future: Optional[Future], *data):\n \"\"\"Puts values into a queue but aborts if this thread is closed.\"\"\"\n while not self.closed: # pragma: no branch\n try:\n self._futures_put((segment, future, *data))\n return\n except queue.Full: # pragma: no cover\n continue\n\n def _futures_put(self, item):\n self.futures.put(item, block=True, timeout=1)\n\n def _futures_get(self):\n return self.futures.get(block=True, timeout=0.5)\n\n @staticmethod\n def _future_result(future: Future):\n return future.result(timeout=0.5)\n\n def fetch(self, segment):\n \"\"\"Fetches a segment.\n\n Should be overridden by the inheriting class.\n \"\"\"\n pass\n\n def write(self, segment, result, *data):\n \"\"\"Writes a segment to the buffer.\n\n Should be overridden by the inheriting class.\n \"\"\"\n pass\n\n def run(self):\n while not self.closed:\n try:\n segment, future, *data = self._futures_get()\n except queue.Empty: # pragma: no cover\n continue\n\n # End of stream\n if future is None:\n break\n\n while not self.closed: # pragma: no branch\n try:\n result = self._future_result(future)\n except futures.TimeoutError: # pragma: no cover\n continue\n except futures.CancelledError: # pragma: no cover\n break\n\n if result is not None: # pragma: no branch\n self.write(segment, result, *data)\n\n break\n\n self.close()\n\n\nclass SegmentedStreamReader(StreamIO):\n __worker__ = SegmentedStreamWorker\n __writer__ = SegmentedStreamWriter\n\n def __init__(self, stream, timeout=None):\n super().__init__()\n self.session = stream.session\n self.stream = stream\n self.timeout = timeout or self.session.options.get(\"stream-timeout\")\n\n buffer_size = self.session.get_option(\"ringbuffer-size\")\n self.buffer = RingBuffer(buffer_size)\n self.writer = self.__writer__(self)\n self.worker = self.__worker__(self)\n\n def open(self):\n self.writer.start()\n self.worker.start()\n\n def close(self):\n self.worker.close()\n self.writer.close()\n self.buffer.close()\n\n def read(self, size):\n return self.buffer.read(\n size,\n block=self.writer.is_alive(),\n timeout=self.timeout\n )\n", "path": 
"src/streamlink/stream/segmented.py"}], "after_files": [{"content": "import logging\nimport queue\nfrom concurrent import futures\nfrom concurrent.futures import Future, ThreadPoolExecutor\nfrom sys import version_info\nfrom threading import Event, Thread, current_thread\nfrom typing import Any, Optional\n\nfrom streamlink.buffers import RingBuffer\nfrom streamlink.stream.stream import StreamIO\n\nlog = logging.getLogger(__name__)\n\n\nclass CompatThreadPoolExecutor(ThreadPoolExecutor):\n if version_info < (3, 9):\n def shutdown(self, wait=True, cancel_futures=False): # pragma: no cover\n with self._shutdown_lock:\n self._shutdown = True\n if cancel_futures:\n # Drain all work items from the queue, and then cancel their\n # associated futures.\n while True:\n try:\n work_item = self._work_queue.get_nowait()\n except queue.Empty:\n break\n if work_item is not None:\n work_item.future.cancel()\n\n # Send a wake-up to prevent threads calling\n # _work_queue.get(block=True) from permanently blocking.\n self._work_queue.put(None)\n if wait:\n for t in self._threads:\n t.join()\n\n\nclass SegmentedStreamWorker(Thread):\n \"\"\"The general worker thread.\n\n This thread is responsible for queueing up segments in the\n writer thread.\n \"\"\"\n\n def __init__(self, reader, **kwargs):\n self.closed = False\n self.reader = reader\n self.writer = reader.writer\n self.stream = reader.stream\n self.session = reader.session\n\n self._wait = Event()\n\n super().__init__(daemon=True, name=f\"Thread-{self.__class__.__name__}\")\n\n def close(self):\n \"\"\"Shuts down the thread.\"\"\"\n if self.closed: # pragma: no cover\n return\n\n log.debug(\"Closing worker thread\")\n\n self.closed = True\n self._wait.set()\n\n def wait(self, time):\n \"\"\"Pauses the thread for a specified time.\n\n Returns False if interrupted by another thread and True if the\n time runs out normally.\n \"\"\"\n return not self._wait.wait(time)\n\n def iter_segments(self):\n \"\"\"The iterator that generates segments for the worker thread.\n\n Should be overridden by the inheriting class.\n \"\"\"\n return\n yield\n\n def run(self):\n for segment in self.iter_segments():\n if self.closed: # pragma: no cover\n break\n self.writer.put(segment)\n\n # End of stream, tells the writer to exit\n self.writer.put(None)\n self.close()\n\n\nclass SegmentedStreamWriter(Thread):\n \"\"\"The writer thread.\n\n This thread is responsible for fetching segments, processing them\n and finally writing the data to the buffer.\n \"\"\"\n\n def __init__(self, reader, size=20, retries=None, threads=None, timeout=None):\n self.closed = False\n self.reader = reader\n self.stream = reader.stream\n self.session = reader.session\n\n if not retries:\n retries = self.session.options.get(\"stream-segment-attempts\")\n\n if not threads:\n threads = self.session.options.get(\"stream-segment-threads\")\n\n if not timeout:\n timeout = self.session.options.get(\"stream-segment-timeout\")\n\n self.retries = retries\n self.timeout = timeout\n self.threads = threads\n self.executor = CompatThreadPoolExecutor(max_workers=self.threads)\n self.futures = queue.Queue(size)\n\n super().__init__(daemon=True, name=f\"Thread-{self.__class__.__name__}\")\n\n def close(self):\n \"\"\"Shuts down the thread, its executor and closes the reader (worker thread and buffer).\"\"\"\n if self.closed: # pragma: no cover\n return\n\n log.debug(\"Closing writer thread\")\n\n self.closed = True\n self.reader.close()\n self.executor.shutdown(wait=True, cancel_futures=True)\n\n def put(self, 
segment):\n \"\"\"Adds a segment to the download pool and write queue.\"\"\"\n if self.closed: # pragma: no cover\n return\n\n if segment is None:\n future = None\n else:\n future = self.executor.submit(self.fetch, segment, retries=self.retries)\n\n self.queue(segment, future)\n\n def queue(self, segment: Any, future: Optional[Future], *data):\n \"\"\"Puts values into a queue but aborts if this thread is closed.\"\"\"\n while not self.closed: # pragma: no branch\n try:\n self._futures_put((segment, future, *data))\n return\n except queue.Full: # pragma: no cover\n continue\n\n def _futures_put(self, item):\n self.futures.put(item, block=True, timeout=1)\n\n def _futures_get(self):\n return self.futures.get(block=True, timeout=0.5)\n\n @staticmethod\n def _future_result(future: Future):\n return future.result(timeout=0.5)\n\n def fetch(self, segment):\n \"\"\"Fetches a segment.\n\n Should be overridden by the inheriting class.\n \"\"\"\n pass\n\n def write(self, segment, result, *data):\n \"\"\"Writes a segment to the buffer.\n\n Should be overridden by the inheriting class.\n \"\"\"\n pass\n\n def run(self):\n while not self.closed:\n try:\n segment, future, *data = self._futures_get()\n except queue.Empty: # pragma: no cover\n continue\n\n # End of stream\n if future is None:\n break\n\n while not self.closed: # pragma: no branch\n try:\n result = self._future_result(future)\n except futures.TimeoutError: # pragma: no cover\n continue\n except futures.CancelledError: # pragma: no cover\n break\n\n if result is not None: # pragma: no branch\n self.write(segment, result, *data)\n\n break\n\n self.close()\n\n\nclass SegmentedStreamReader(StreamIO):\n __worker__ = SegmentedStreamWorker\n __writer__ = SegmentedStreamWriter\n\n def __init__(self, stream, timeout=None):\n super().__init__()\n self.session = stream.session\n self.stream = stream\n self.timeout = timeout or self.session.options.get(\"stream-timeout\")\n\n buffer_size = self.session.get_option(\"ringbuffer-size\")\n self.buffer = RingBuffer(buffer_size)\n self.writer = self.__writer__(self)\n self.worker = self.__worker__(self)\n\n def open(self):\n self.writer.start()\n self.worker.start()\n\n def close(self):\n self.worker.close()\n self.writer.close()\n self.buffer.close()\n\n current = current_thread()\n if current is not self.worker: # pragma: no branch\n self.worker.join(timeout=self.timeout)\n if current is not self.writer: # pragma: no branch\n self.writer.join(timeout=self.timeout)\n\n def read(self, size):\n return self.buffer.read(\n size,\n block=self.writer.is_alive(),\n timeout=self.timeout\n )\n", "path": "src/streamlink/stream/segmented.py"}]} | 2,883 | 209 |
gh_patches_debug_19890 | rasdani/github-patches | git_diff | ansible__molecule-645 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No openstack floating ip support
# Issue Type
- Bug report
# Molecule and Ansible details
```
molecule, version 1.16.1
```
- Molecule installation method: pip
# Desired Behaviour
The openstack provider creates an openstack instance, associates a floating ip with it, and connects to the instance using the floating ip.
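For illustration, this is roughly what that flow looks like with shade directly (a hypothetical standalone sketch, not molecule code; the pool, image and flavor names are placeholders — `ip_pool` is the shade `create_server` argument that selects the floating-IP pool):
```python
import shade

cloud = shade.openstack_cloud()

# Let shade allocate a floating IP from the given pool while creating the
# server, then use that address (interface_ip) to reach the instance over SSH.
server = cloud.create_server(
    name="molecule-test",
    image=cloud.get_image("ubuntu-20.04"),
    flavor=cloud.get_flavor("m1.small"),
    auto_ip=True,
    ip_pool="public",  # placeholder pool name, configure per cloud
    wait=True,
)
print(server["interface_ip"])  # should now be the floating IP, not the private one
```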
# Actual Behaviour (Bug report only)
The openstack provider creates an openstack instance and tries to connect to the private ip, which obviously doesn't work.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `molecule/driver/openstackdriver.py`
Content:
```
1 # Copyright (c) 2015-2016 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import collections
22 import hashlib
23 import os
24 import socket
25 import sys
26 import time
27
28 import paramiko
29 try:
30 import shade
31 except ImportError: # pragma: no cover
32 sys.exit('ERROR: Driver missing, install shade.')
33
34 from molecule import util
35 from molecule.driver import basedriver
36
37
38 class OpenstackDriver(basedriver.BaseDriver):
39 def __init__(self, molecule):
40 super(OpenstackDriver, self).__init__(molecule)
41 self._provider = self._get_provider()
42 self._platform = self._get_platform()
43 self._openstack = shade.openstack_cloud()
44
45 @property
46 def name(self):
47 return 'openstack'
48
49 @property
50 def instances(self):
51 return self.molecule.config.config['openstack']['instances']
52
53 @property
54 def default_provider(self):
55 return self._provider
56
57 @property
58 def default_platform(self):
59 return self._platform
60
61 @property
62 def provider(self):
63 return self._provider
64
65 @property
66 def platform(self):
67 return self._platform
68
69 @platform.setter
70 def platform(self, val):
71 self._platform = val
72
73 @property
74 def valid_providers(self):
75 return [{'name': self.provider}]
76
77 @property
78 def valid_platforms(self):
79 return [{'name': self.platform}]
80
81 @property
82 def ssh_config_file(self):
83 return
84
85 @property
86 def ansible_connection_params(self):
87 return {'connection': 'ssh'}
88
89 @property
90 def testinfra_args(self):
91 return {
92 'ansible_inventory':
93 self.molecule.config.config['ansible']['inventory_file'],
94 'connection': 'ansible'
95 }
96
97 @property
98 def serverspec_args(self):
99 return {}
100
101 def up(self, no_provision=True):
102 self.molecule.state.change_state('driver', self.name)
103 kpn = self._get_keypair()
104
105 active_instances = self._openstack.list_servers()
106 active_instance_names = {
107 instance['name']: instance['status']
108 for instance in active_instances
109 }
110
111 util.print_warn('Creating openstack instances...')
112 for instance in self.instances:
113 if instance['name'] not in active_instance_names:
114 msg = '\tBringing up {}...'.format(instance['name'])
115 util.print_info(msg)
116 server = self._openstack.create_server(
117 name=instance['name'],
118 image=self._openstack.get_image(instance['image']),
119 flavor=self._openstack.get_flavor(instance['flavor']),
120 auto_ip=True,
121 wait=True,
122 key_name=kpn,
123 security_groups=instance['security_groups']
124 if 'security_groups' in instance else None)
125 self._reset_known_host_key(server['interface_ip'])
126 instance['created'] = True
127 num_retries = 0
128 while not self._check_ssh_availability(
129 server['interface_ip'],
130 instance['sshuser'],
131 timeout=6,
132 sshkey_filename=self._get_keyfile(
133 )) or num_retries == 5:
134 util.print_info('\t Waiting for ssh availability...')
135 num_retries += 1
136
137 def destroy(self):
138 util.print_info('Deleting openstack instances...')
139
140 active_instances = self._openstack.list_servers()
141 active_instance_names = {
142 instance['name']: instance['id']
143 for instance in active_instances
144 }
145
146 for instance in self.instances:
147 util.print_warn('\tRemoving {}...'.format(instance['name']))
148 if instance['name'] in active_instance_names:
149 if not self._openstack.delete_server(
150 active_instance_names[instance['name']], wait=True):
151 msg = 'Unable to remove {}.'.format(instance['name'])
152 util.print_error(msg)
153 else:
154 util.print_success('\tRemoved {}.'.format(instance[
155 'name']))
156 instance['created'] = False
157
158 # cleanup any molecule generated ssh keysfiles
159 self._cleanup_temp_keypair()
160 self._cleanup_temp_keyfile()
161
162 def status(self):
163 Status = collections.namedtuple('Status',
164 ['name', 'state', 'provider'])
165 status_list = []
166 for instance in self.instances:
167 if self._instance_is_accessible(instance):
168 status_list.append(
169 Status(
170 name=instance['name'],
171 state='UP',
172 provider=self.provider))
173 else:
174 status_list.append(
175 Status(
176 name=instance['name'],
177 state='not_created',
178 provider=self.provider))
179
180 return status_list
181
182 def conf(self, name=None, ssh_config=False):
183 inventory_file = self.molecule.config.config['ansible'][
184 'inventory_file']
185 if os.path.exists(inventory_file):
186 with open(inventory_file) as stream:
187 for line in stream:
188 if len(line.split()) > 0 and line.split()[0] == name:
189 ansible_host = line.split()[1]
190 host_address = ansible_host.split('=')[1]
191 return host_address
192 return
193
194 def inventory_entry(self, instance):
195 template = self._host_template()
196
197 for server in self._openstack.list_servers(detailed=False):
198 if server['name'] == instance['name']:
199 server_config = {
200 'hostname': instance['name'],
201 'interface_ip_address': server['interface_ip'],
202 'ssh_username': instance['sshuser'],
203 'ssh_key_filename': self._get_keyfile()
204 }
205 return template.format(**server_config)
206 return ''
207
208 def login_cmd(self, instance_name):
209 return 'ssh {} -l {} -i {}'
210
211 def login_args(self, instance_name):
212 # Try to retrieve the SSH configuration of the host.
213 conf = self.conf(name=instance_name)
214 user = ''
215 keyfile = self._get_keyfile()
216
217 for instance in self.instances:
218 if instance_name == instance['name']:
219 user = instance['sshuser']
220
221 return [conf, user, keyfile]
222
223 def _get_provider(self):
224 return 'openstack'
225
226 def _get_platform(self):
227 return 'openstack'
228
229 def _get_keypair(self):
230 if ('keypair' in self.molecule.config.config['openstack']):
231 return self.molecule.config.config['openstack']['keypair']
232 else:
233 return self._get_temp_keypair()
234
235 def _get_keyfile(self):
236 if ('keyfile' in self.molecule.config.config['openstack']):
237 return os.path.expanduser(self.molecule.config.config['openstack'][
238 'keyfile'])
239 else:
240 return self._get_temp_keyfile()
241
242 def _get_temp_keypair(self):
243 kpn = self._get_temp_keyname()
244
245 if not self._openstack.search_keypairs(kpn):
246 msg = '\tCreating openstack keypair {}...'.format(kpn)
247 util.print_info(msg)
248 pub_key_file = self._get_keyfile() + '.pub'
249 self._openstack.create_keypair(kpn,
250 open(pub_key_file,
251 'r').read().strip())
252
253 return kpn
254
255 def _get_temp_keyfile(self):
256 kn = self._get_temp_keyname()
257 kl = self._get_temp_keylocation()
258 pvtloc = kl + '/' + kn
259 publoc = kl + '/' + kn + '.pub'
260
261 if not os.path.exists(pvtloc):
262 util.print_info('\tCreating local ssh key {}...'.format(pvtloc))
263 k = paramiko.RSAKey.generate(2048)
264 k.write_private_key_file(pvtloc)
265 # write the public key too
266 pub = paramiko.RSAKey(filename=pvtloc)
267 with open(publoc, 'w') as f:
268 f.write("%s %s" % (pub.get_name(), pub.get_base64()))
269
270 return pvtloc
271
272 def _get_temp_keyname(self):
273 mpath = os.path.abspath(self.molecule.config.config['molecule'][
274 'molecule_dir'])
275 return 'molecule_' + hashlib.sha256(mpath).hexdigest()[:10]
276
277 def _get_temp_keylocation(self):
278 loc = self.molecule.config.config['molecule']['molecule_dir'] + '/.ssh'
279 if not os.path.exists(loc):
280 os.makedirs(loc)
281 return os.path.abspath(loc)
282
283 def _reset_known_host_key(self, hostname):
284 return os.system('ssh-keygen -R {}'.format(hostname))
285
286 def _check_ssh_availability(self, hostip, user, timeout, sshkey_filename):
287 ssh = paramiko.SSHClient()
288 ssh.load_system_host_keys()
289 ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
290 try:
291 ssh.connect(hostip, username=user, key_filename=sshkey_filename)
292 return True
293 except (paramiko.BadHostKeyException, paramiko.AuthenticationException,
294 paramiko.SSHException, socket.error):
295 time.sleep(timeout)
296 return False
297
298 def _cleanup_temp_keypair(self):
299 # if we don't have a keypair config, delete the temp one
300 if ('keypair' not in self.molecule.config.config['openstack']):
301 kpn = self._get_temp_keyname()
302 if self._openstack.search_keypairs(kpn):
303 msg = '\tRemoving openstack keypair {}...'.format(kpn)
304 util.print_warn(msg)
305 if not self._openstack.delete_keypair(kpn):
306 msg = 'Unable to remove openstack keypair {}.'.format(kpn)
307 util.print_error(msg)
308 else:
309 msg = '\tRemoved openstack keypair {}.'.format(kpn)
310 util.print_success(msg)
311
312 def _cleanup_temp_keyfile(self):
313 # if we don't have a keyfile config, delete the temp one
314 if ('keyfile' not in self.molecule.config.config['openstack']):
315 kn = self._get_temp_keyname()
316 kl = self._get_temp_keylocation()
317 pvtloc = kl + '/' + kn
318 publoc = kl + '/' + kn + '.pub'
319 if os.path.exists(pvtloc):
320 util.print_warn('\tRemoving {}...'.format(pvtloc))
321 os.remove(pvtloc)
322 if os.path.exists(publoc):
323 util.print_warn('\tRemoving {}...'.format(publoc))
324 os.remove(publoc)
325
326 def _host_template(self):
327 return ('{hostname} ansible_ssh_host={interface_ip_address} '
328 'ansible_ssh_user={ssh_username} '
329 'ansible_ssh_private_key_file={ssh_key_filename} '
330 'ansible_ssh_extra_args="-o ConnectionAttempts=5"\n')
331
332 def _instance_is_accessible(self, instance):
333 instance_ip = self.conf(instance['name'])
334 if instance_ip is not None:
335 return self._check_ssh_availability(
336 instance_ip,
337 instance['sshuser'],
338 timeout=0,
339 sshkey_filename=self._get_keyfile())
340 return False
341
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/molecule/driver/openstackdriver.py b/molecule/driver/openstackdriver.py
--- a/molecule/driver/openstackdriver.py
+++ b/molecule/driver/openstackdriver.py
@@ -50,6 +50,10 @@
     def instances(self):
         return self.molecule.config.config['openstack']['instances']
 
+    @property
+    def ip_pool(self):
+        return self.molecule.config.config['openstack'].get('ip_pool')
+
     @property
     def default_provider(self):
         return self._provider
@@ -120,6 +124,8 @@
                     auto_ip=True,
                     wait=True,
                     key_name=kpn,
+                    ip_pool=instance.get('ip_pool')
+                    if instance.get('ip_pool') else self.ip_pool,
                     security_groups=instance['security_groups']
                     if 'security_groups' in instance else None)
                 self._reset_known_host_key(server['interface_ip'])
| {"golden_diff": "diff --git a/molecule/driver/openstackdriver.py b/molecule/driver/openstackdriver.py\n--- a/molecule/driver/openstackdriver.py\n+++ b/molecule/driver/openstackdriver.py\n@@ -50,6 +50,10 @@\n def instances(self):\n return self.molecule.config.config['openstack']['instances']\n \n+ @property\n+ def ip_pool(self):\n+ return self.molecule.config.config['openstack'].get('ip_pool')\n+\n @property\n def default_provider(self):\n return self._provider\n@@ -120,6 +124,8 @@\n auto_ip=True,\n wait=True,\n key_name=kpn,\n+ ip_pool=instance.get('ip_pool')\n+ if instance.get('ip_pool') else self.ip_pool,\n security_groups=instance['security_groups']\n if 'security_groups' in instance else None)\n self._reset_known_host_key(server['interface_ip'])\n", "issue": "No openstack floating ip support\n# Issue Type\r\n\r\n- Bug report\r\n\r\n# Molecule and Ansible details\r\n\r\n```\r\nmolecule, version 1.16.1\r\n```\r\n\r\n- Molecule installation method: pip\r\n\r\n# Desired Behaviour\r\n\r\nThe openstack provider creates an openstack instance, associates a floating ip to it and connects to the instance using the floating ip.\r\n\r\n# Actual Behaviour (Bug report only)\r\n\r\nThe openstack provider creates an openstack instance and tries to connects to the private ip which obviously doesn't work.\n", "before_files": [{"content": "# Copyright (c) 2015-2016 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport collections\nimport hashlib\nimport os\nimport socket\nimport sys\nimport time\n\nimport paramiko\ntry:\n import shade\nexcept ImportError: # pragma: no cover\n sys.exit('ERROR: Driver missing, install shade.')\n\nfrom molecule import util\nfrom molecule.driver import basedriver\n\n\nclass OpenstackDriver(basedriver.BaseDriver):\n def __init__(self, molecule):\n super(OpenstackDriver, self).__init__(molecule)\n self._provider = self._get_provider()\n self._platform = self._get_platform()\n self._openstack = shade.openstack_cloud()\n\n @property\n def name(self):\n return 'openstack'\n\n @property\n def instances(self):\n return self.molecule.config.config['openstack']['instances']\n\n @property\n def default_provider(self):\n return self._provider\n\n @property\n def default_platform(self):\n return self._platform\n\n @property\n def provider(self):\n return self._provider\n\n @property\n def platform(self):\n return self._platform\n\n @platform.setter\n def platform(self, val):\n self._platform = val\n\n @property\n def valid_providers(self):\n return [{'name': self.provider}]\n\n @property\n def valid_platforms(self):\n return [{'name': self.platform}]\n\n @property\n def ssh_config_file(self):\n return\n\n @property\n def ansible_connection_params(self):\n return {'connection': 'ssh'}\n\n @property\n def testinfra_args(self):\n return {\n 'ansible_inventory':\n self.molecule.config.config['ansible']['inventory_file'],\n 'connection': 'ansible'\n }\n\n @property\n def serverspec_args(self):\n return {}\n\n def up(self, no_provision=True):\n self.molecule.state.change_state('driver', self.name)\n kpn = self._get_keypair()\n\n active_instances = self._openstack.list_servers()\n active_instance_names = {\n instance['name']: instance['status']\n for instance in active_instances\n }\n\n util.print_warn('Creating openstack instances...')\n for instance in self.instances:\n if instance['name'] not in active_instance_names:\n msg = '\\tBringing up {}...'.format(instance['name'])\n util.print_info(msg)\n server = self._openstack.create_server(\n name=instance['name'],\n image=self._openstack.get_image(instance['image']),\n flavor=self._openstack.get_flavor(instance['flavor']),\n auto_ip=True,\n wait=True,\n key_name=kpn,\n security_groups=instance['security_groups']\n if 'security_groups' in instance else None)\n self._reset_known_host_key(server['interface_ip'])\n instance['created'] = True\n num_retries = 0\n while not self._check_ssh_availability(\n server['interface_ip'],\n instance['sshuser'],\n timeout=6,\n sshkey_filename=self._get_keyfile(\n )) or num_retries == 5:\n util.print_info('\\t Waiting for ssh availability...')\n num_retries += 1\n\n def destroy(self):\n util.print_info('Deleting openstack instances...')\n\n active_instances = self._openstack.list_servers()\n active_instance_names = {\n instance['name']: instance['id']\n for instance in active_instances\n }\n\n for instance in self.instances:\n util.print_warn('\\tRemoving {}...'.format(instance['name']))\n if instance['name'] in active_instance_names:\n if not self._openstack.delete_server(\n active_instance_names[instance['name']], wait=True):\n msg = 'Unable to remove {}.'.format(instance['name'])\n util.print_error(msg)\n else:\n 
util.print_success('\\tRemoved {}.'.format(instance[\n 'name']))\n instance['created'] = False\n\n # cleanup any molecule generated ssh keysfiles\n self._cleanup_temp_keypair()\n self._cleanup_temp_keyfile()\n\n def status(self):\n Status = collections.namedtuple('Status',\n ['name', 'state', 'provider'])\n status_list = []\n for instance in self.instances:\n if self._instance_is_accessible(instance):\n status_list.append(\n Status(\n name=instance['name'],\n state='UP',\n provider=self.provider))\n else:\n status_list.append(\n Status(\n name=instance['name'],\n state='not_created',\n provider=self.provider))\n\n return status_list\n\n def conf(self, name=None, ssh_config=False):\n inventory_file = self.molecule.config.config['ansible'][\n 'inventory_file']\n if os.path.exists(inventory_file):\n with open(inventory_file) as stream:\n for line in stream:\n if len(line.split()) > 0 and line.split()[0] == name:\n ansible_host = line.split()[1]\n host_address = ansible_host.split('=')[1]\n return host_address\n return\n\n def inventory_entry(self, instance):\n template = self._host_template()\n\n for server in self._openstack.list_servers(detailed=False):\n if server['name'] == instance['name']:\n server_config = {\n 'hostname': instance['name'],\n 'interface_ip_address': server['interface_ip'],\n 'ssh_username': instance['sshuser'],\n 'ssh_key_filename': self._get_keyfile()\n }\n return template.format(**server_config)\n return ''\n\n def login_cmd(self, instance_name):\n return 'ssh {} -l {} -i {}'\n\n def login_args(self, instance_name):\n # Try to retrieve the SSH configuration of the host.\n conf = self.conf(name=instance_name)\n user = ''\n keyfile = self._get_keyfile()\n\n for instance in self.instances:\n if instance_name == instance['name']:\n user = instance['sshuser']\n\n return [conf, user, keyfile]\n\n def _get_provider(self):\n return 'openstack'\n\n def _get_platform(self):\n return 'openstack'\n\n def _get_keypair(self):\n if ('keypair' in self.molecule.config.config['openstack']):\n return self.molecule.config.config['openstack']['keypair']\n else:\n return self._get_temp_keypair()\n\n def _get_keyfile(self):\n if ('keyfile' in self.molecule.config.config['openstack']):\n return os.path.expanduser(self.molecule.config.config['openstack'][\n 'keyfile'])\n else:\n return self._get_temp_keyfile()\n\n def _get_temp_keypair(self):\n kpn = self._get_temp_keyname()\n\n if not self._openstack.search_keypairs(kpn):\n msg = '\\tCreating openstack keypair {}...'.format(kpn)\n util.print_info(msg)\n pub_key_file = self._get_keyfile() + '.pub'\n self._openstack.create_keypair(kpn,\n open(pub_key_file,\n 'r').read().strip())\n\n return kpn\n\n def _get_temp_keyfile(self):\n kn = self._get_temp_keyname()\n kl = self._get_temp_keylocation()\n pvtloc = kl + '/' + kn\n publoc = kl + '/' + kn + '.pub'\n\n if not os.path.exists(pvtloc):\n util.print_info('\\tCreating local ssh key {}...'.format(pvtloc))\n k = paramiko.RSAKey.generate(2048)\n k.write_private_key_file(pvtloc)\n # write the public key too\n pub = paramiko.RSAKey(filename=pvtloc)\n with open(publoc, 'w') as f:\n f.write(\"%s %s\" % (pub.get_name(), pub.get_base64()))\n\n return pvtloc\n\n def _get_temp_keyname(self):\n mpath = os.path.abspath(self.molecule.config.config['molecule'][\n 'molecule_dir'])\n return 'molecule_' + hashlib.sha256(mpath).hexdigest()[:10]\n\n def _get_temp_keylocation(self):\n loc = self.molecule.config.config['molecule']['molecule_dir'] + '/.ssh'\n if not os.path.exists(loc):\n os.makedirs(loc)\n return 
os.path.abspath(loc)\n\n def _reset_known_host_key(self, hostname):\n return os.system('ssh-keygen -R {}'.format(hostname))\n\n def _check_ssh_availability(self, hostip, user, timeout, sshkey_filename):\n ssh = paramiko.SSHClient()\n ssh.load_system_host_keys()\n ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n try:\n ssh.connect(hostip, username=user, key_filename=sshkey_filename)\n return True\n except (paramiko.BadHostKeyException, paramiko.AuthenticationException,\n paramiko.SSHException, socket.error):\n time.sleep(timeout)\n return False\n\n def _cleanup_temp_keypair(self):\n # if we don't have a keypair config, delete the temp one\n if ('keypair' not in self.molecule.config.config['openstack']):\n kpn = self._get_temp_keyname()\n if self._openstack.search_keypairs(kpn):\n msg = '\\tRemoving openstack keypair {}...'.format(kpn)\n util.print_warn(msg)\n if not self._openstack.delete_keypair(kpn):\n msg = 'Unable to remove openstack keypair {}.'.format(kpn)\n util.print_error(msg)\n else:\n msg = '\\tRemoved openstack keypair {}.'.format(kpn)\n util.print_success(msg)\n\n def _cleanup_temp_keyfile(self):\n # if we don't have a keyfile config, delete the temp one\n if ('keyfile' not in self.molecule.config.config['openstack']):\n kn = self._get_temp_keyname()\n kl = self._get_temp_keylocation()\n pvtloc = kl + '/' + kn\n publoc = kl + '/' + kn + '.pub'\n if os.path.exists(pvtloc):\n util.print_warn('\\tRemoving {}...'.format(pvtloc))\n os.remove(pvtloc)\n if os.path.exists(publoc):\n util.print_warn('\\tRemoving {}...'.format(publoc))\n os.remove(publoc)\n\n def _host_template(self):\n return ('{hostname} ansible_ssh_host={interface_ip_address} '\n 'ansible_ssh_user={ssh_username} '\n 'ansible_ssh_private_key_file={ssh_key_filename} '\n 'ansible_ssh_extra_args=\"-o ConnectionAttempts=5\"\\n')\n\n def _instance_is_accessible(self, instance):\n instance_ip = self.conf(instance['name'])\n if instance_ip is not None:\n return self._check_ssh_availability(\n instance_ip,\n instance['sshuser'],\n timeout=0,\n sshkey_filename=self._get_keyfile())\n return False\n", "path": "molecule/driver/openstackdriver.py"}], "after_files": [{"content": "# Copyright (c) 2015-2016 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport collections\nimport hashlib\nimport os\nimport socket\nimport sys\nimport time\n\nimport paramiko\ntry:\n import shade\nexcept ImportError: # pragma: no cover\n sys.exit('ERROR: Driver missing, install shade.')\n\nfrom molecule import util\nfrom molecule.driver import basedriver\n\n\nclass OpenstackDriver(basedriver.BaseDriver):\n def __init__(self, molecule):\n super(OpenstackDriver, self).__init__(molecule)\n self._provider = self._get_provider()\n self._platform = self._get_platform()\n self._openstack = shade.openstack_cloud()\n\n @property\n def name(self):\n return 'openstack'\n\n @property\n def instances(self):\n return self.molecule.config.config['openstack']['instances']\n\n @property\n def ip_pool(self):\n return self.molecule.config.config['openstack'].get('ip_pool')\n\n @property\n def default_provider(self):\n return self._provider\n\n @property\n def default_platform(self):\n return self._platform\n\n @property\n def provider(self):\n return self._provider\n\n @property\n def platform(self):\n return self._platform\n\n @platform.setter\n def platform(self, val):\n self._platform = val\n\n @property\n def valid_providers(self):\n return [{'name': self.provider}]\n\n @property\n def valid_platforms(self):\n return [{'name': self.platform}]\n\n @property\n def ssh_config_file(self):\n return\n\n @property\n def ansible_connection_params(self):\n return {'connection': 'ssh'}\n\n @property\n def testinfra_args(self):\n return {\n 'ansible_inventory':\n self.molecule.config.config['ansible']['inventory_file'],\n 'connection': 'ansible'\n }\n\n @property\n def serverspec_args(self):\n return {}\n\n def up(self, no_provision=True):\n self.molecule.state.change_state('driver', self.name)\n kpn = self._get_keypair()\n\n active_instances = self._openstack.list_servers()\n active_instance_names = {\n instance['name']: instance['status']\n for instance in active_instances\n }\n\n util.print_warn('Creating openstack instances...')\n for instance in self.instances:\n if instance['name'] not in active_instance_names:\n msg = '\\tBringing up {}...'.format(instance['name'])\n util.print_info(msg)\n server = self._openstack.create_server(\n name=instance['name'],\n image=self._openstack.get_image(instance['image']),\n flavor=self._openstack.get_flavor(instance['flavor']),\n auto_ip=True,\n wait=True,\n key_name=kpn,\n ip_pool=instance.get('ip_pool')\n if instance.get('ip_pool') else self.ip_pool,\n security_groups=instance['security_groups']\n if 'security_groups' in instance else None)\n self._reset_known_host_key(server['interface_ip'])\n instance['created'] = True\n num_retries = 0\n while not self._check_ssh_availability(\n server['interface_ip'],\n instance['sshuser'],\n timeout=6,\n sshkey_filename=self._get_keyfile(\n )) or num_retries == 5:\n util.print_info('\\t Waiting for ssh availability...')\n num_retries += 1\n\n def destroy(self):\n util.print_info('Deleting openstack instances...')\n\n active_instances = self._openstack.list_servers()\n active_instance_names = {\n instance['name']: instance['id']\n for instance in active_instances\n }\n\n for instance in self.instances:\n util.print_warn('\\tRemoving {}...'.format(instance['name']))\n if instance['name'] in active_instance_names:\n if not 
self._openstack.delete_server(\n active_instance_names[instance['name']], wait=True):\n msg = 'Unable to remove {}.'.format(instance['name'])\n util.print_error(msg)\n else:\n util.print_success('\\tRemoved {}.'.format(instance[\n 'name']))\n instance['created'] = False\n\n # cleanup any molecule generated ssh keysfiles\n self._cleanup_temp_keypair()\n self._cleanup_temp_keyfile()\n\n def status(self):\n Status = collections.namedtuple('Status',\n ['name', 'state', 'provider'])\n status_list = []\n for instance in self.instances:\n if self._instance_is_accessible(instance):\n status_list.append(\n Status(\n name=instance['name'],\n state='UP',\n provider=self.provider))\n else:\n status_list.append(\n Status(\n name=instance['name'],\n state='not_created',\n provider=self.provider))\n\n return status_list\n\n def conf(self, name=None, ssh_config=False):\n inventory_file = self.molecule.config.config['ansible'][\n 'inventory_file']\n if os.path.exists(inventory_file):\n with open(inventory_file) as stream:\n for line in stream:\n if len(line.split()) > 0 and line.split()[0] == name:\n ansible_host = line.split()[1]\n host_address = ansible_host.split('=')[1]\n return host_address\n return\n\n def inventory_entry(self, instance):\n template = self._host_template()\n\n for server in self._openstack.list_servers(detailed=False):\n if server['name'] == instance['name']:\n server_config = {\n 'hostname': instance['name'],\n 'interface_ip_address': server['interface_ip'],\n 'ssh_username': instance['sshuser'],\n 'ssh_key_filename': self._get_keyfile()\n }\n return template.format(**server_config)\n return ''\n\n def login_cmd(self, instance_name):\n return 'ssh {} -l {} -i {}'\n\n def login_args(self, instance_name):\n # Try to retrieve the SSH configuration of the host.\n conf = self.conf(name=instance_name)\n user = ''\n keyfile = self._get_keyfile()\n\n for instance in self.instances:\n if instance_name == instance['name']:\n user = instance['sshuser']\n\n return [conf, user, keyfile]\n\n def _get_provider(self):\n return 'openstack'\n\n def _get_platform(self):\n return 'openstack'\n\n def _get_keypair(self):\n if ('keypair' in self.molecule.config.config['openstack']):\n return self.molecule.config.config['openstack']['keypair']\n else:\n return self._get_temp_keypair()\n\n def _get_keyfile(self):\n if ('keyfile' in self.molecule.config.config['openstack']):\n return os.path.expanduser(self.molecule.config.config['openstack'][\n 'keyfile'])\n else:\n return self._get_temp_keyfile()\n\n def _get_temp_keypair(self):\n kpn = self._get_temp_keyname()\n\n if not self._openstack.search_keypairs(kpn):\n msg = '\\tCreating openstack keypair {}...'.format(kpn)\n util.print_info(msg)\n pub_key_file = self._get_keyfile() + '.pub'\n self._openstack.create_keypair(kpn,\n open(pub_key_file,\n 'r').read().strip())\n\n return kpn\n\n def _get_temp_keyfile(self):\n kn = self._get_temp_keyname()\n kl = self._get_temp_keylocation()\n pvtloc = kl + '/' + kn\n publoc = kl + '/' + kn + '.pub'\n\n if not os.path.exists(pvtloc):\n util.print_info('\\tCreating local ssh key {}...'.format(pvtloc))\n k = paramiko.RSAKey.generate(2048)\n k.write_private_key_file(pvtloc)\n # write the public key too\n pub = paramiko.RSAKey(filename=pvtloc)\n with open(publoc, 'w') as f:\n f.write(\"%s %s\" % (pub.get_name(), pub.get_base64()))\n\n return pvtloc\n\n def _get_temp_keyname(self):\n mpath = os.path.abspath(self.molecule.config.config['molecule'][\n 'molecule_dir'])\n return 'molecule_' + 
hashlib.sha256(mpath).hexdigest()[:10]\n\n def _get_temp_keylocation(self):\n loc = self.molecule.config.config['molecule']['molecule_dir'] + '/.ssh'\n if not os.path.exists(loc):\n os.makedirs(loc)\n return os.path.abspath(loc)\n\n def _reset_known_host_key(self, hostname):\n return os.system('ssh-keygen -R {}'.format(hostname))\n\n def _check_ssh_availability(self, hostip, user, timeout, sshkey_filename):\n ssh = paramiko.SSHClient()\n ssh.load_system_host_keys()\n ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n try:\n ssh.connect(hostip, username=user, key_filename=sshkey_filename)\n return True\n except (paramiko.BadHostKeyException, paramiko.AuthenticationException,\n paramiko.SSHException, socket.error):\n time.sleep(timeout)\n return False\n\n def _cleanup_temp_keypair(self):\n # if we don't have a keypair config, delete the temp one\n if ('keypair' not in self.molecule.config.config['openstack']):\n kpn = self._get_temp_keyname()\n if self._openstack.search_keypairs(kpn):\n msg = '\\tRemoving openstack keypair {}...'.format(kpn)\n util.print_warn(msg)\n if not self._openstack.delete_keypair(kpn):\n msg = 'Unable to remove openstack keypair {}.'.format(kpn)\n util.print_error(msg)\n else:\n msg = '\\tRemoved openstack keypair {}.'.format(kpn)\n util.print_success(msg)\n\n def _cleanup_temp_keyfile(self):\n # if we don't have a keyfile config, delete the temp one\n if ('keyfile' not in self.molecule.config.config['openstack']):\n kn = self._get_temp_keyname()\n kl = self._get_temp_keylocation()\n pvtloc = kl + '/' + kn\n publoc = kl + '/' + kn + '.pub'\n if os.path.exists(pvtloc):\n util.print_warn('\\tRemoving {}...'.format(pvtloc))\n os.remove(pvtloc)\n if os.path.exists(publoc):\n util.print_warn('\\tRemoving {}...'.format(publoc))\n os.remove(publoc)\n\n def _host_template(self):\n return ('{hostname} ansible_ssh_host={interface_ip_address} '\n 'ansible_ssh_user={ssh_username} '\n 'ansible_ssh_private_key_file={ssh_key_filename} '\n 'ansible_ssh_extra_args=\"-o ConnectionAttempts=5\"\\n')\n\n def _instance_is_accessible(self, instance):\n instance_ip = self.conf(instance['name'])\n if instance_ip is not None:\n return self._check_ssh_availability(\n instance_ip,\n instance['sshuser'],\n timeout=0,\n sshkey_filename=self._get_keyfile())\n return False\n", "path": "molecule/driver/openstackdriver.py"}]} | 3,892 | 208 |
gh_patches_debug_18668 | rasdani/github-patches | git_diff | celery__celery-7246 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate param 'uid' in CeleryDaemonCommand
# Checklist
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22Category%3A+Documentation%22+)
  for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Category%3A+Documentation%22)
  for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
  to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues in this issue
  (If there are none, check this box anyway).
## Related Issues and Possible Duplicates
#### Related Issues
- None
#### Possible Duplicates
- None
# Description
CeleryDaemonCommand has a duplicated CeleryOption for --uid, which causes all docs that use it (beat, multi, worker, etc.) to show a duplicated line.
# Suggestions
Remove the duplicate in CeleryDaemonCommand in celery/celery/bin/base.py
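
For illustration, a minimal sketch of the deduplicated initializer (this simply mirrors the current code with the repeated `--uid` registration dropped; it is a sketch, not the official patch):

```python
class CeleryDaemonCommand(CeleryCommand):
    """Daemon commands."""

    def __init__(self, *args, **kwargs):
        """Initialize a Celery command with common daemon options."""
        super().__init__(*args, **kwargs)
        # each daemonization option is registered exactly once
        self.params.append(CeleryOption(('-f', '--logfile'), help_group="Daemonization Options"))
        self.params.append(CeleryOption(('--pidfile',), help_group="Daemonization Options"))
        self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))
        self.params.append(CeleryOption(('--gid',), help_group="Daemonization Options"))
        self.params.append(CeleryOption(('--umask',), help_group="Daemonization Options"))
        self.params.append(CeleryOption(('--executable',), help_group="Daemonization Options"))
```

The only change from the current code is the removal of the second `--uid` registration, which is what shows up as the repeated line in the generated docs.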
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/bin/base.py`
Content:
```
1 """Click customizations for Celery."""
2 import json
3 import numbers
4 from collections import OrderedDict
5 from functools import update_wrapper
6 from pprint import pformat
7
8 import click
9 from click import ParamType
10 from kombu.utils.objects import cached_property
11
12 from celery._state import get_current_app
13 from celery.signals import user_preload_options
14 from celery.utils import text
15 from celery.utils.log import mlevel
16 from celery.utils.time import maybe_iso8601
17
18 try:
19 from pygments import highlight
20 from pygments.formatters import Terminal256Formatter
21 from pygments.lexers import PythonLexer
22 except ImportError:
23 def highlight(s, *args, **kwargs):
24 """Place holder function in case pygments is missing."""
25 return s
26 LEXER = None
27 FORMATTER = None
28 else:
29 LEXER = PythonLexer()
30 FORMATTER = Terminal256Formatter()
31
32
33 class CLIContext:
34 """Context Object for the CLI."""
35
36 def __init__(self, app, no_color, workdir, quiet=False):
37 """Initialize the CLI context."""
38 self.app = app or get_current_app()
39 self.no_color = no_color
40 self.quiet = quiet
41 self.workdir = workdir
42
43 @cached_property
44 def OK(self):
45 return self.style("OK", fg="green", bold=True)
46
47 @cached_property
48 def ERROR(self):
49 return self.style("ERROR", fg="red", bold=True)
50
51 def style(self, message=None, **kwargs):
52 if self.no_color:
53 return message
54 else:
55 return click.style(message, **kwargs)
56
57 def secho(self, message=None, **kwargs):
58 if self.no_color:
59 kwargs['color'] = False
60 click.echo(message, **kwargs)
61 else:
62 click.secho(message, **kwargs)
63
64 def echo(self, message=None, **kwargs):
65 if self.no_color:
66 kwargs['color'] = False
67 click.echo(message, **kwargs)
68 else:
69 click.echo(message, **kwargs)
70
71 def error(self, message=None, **kwargs):
72 kwargs['err'] = True
73 if self.no_color:
74 kwargs['color'] = False
75 click.echo(message, **kwargs)
76 else:
77 click.secho(message, **kwargs)
78
79 def pretty(self, n):
80 if isinstance(n, list):
81 return self.OK, self.pretty_list(n)
82 if isinstance(n, dict):
83 if 'ok' in n or 'error' in n:
84 return self.pretty_dict_ok_error(n)
85 else:
86 s = json.dumps(n, sort_keys=True, indent=4)
87 if not self.no_color:
88 s = highlight(s, LEXER, FORMATTER)
89 return self.OK, s
90 if isinstance(n, str):
91 return self.OK, n
92 return self.OK, pformat(n)
93
94 def pretty_list(self, n):
95 if not n:
96 return '- empty -'
97 return '\n'.join(
98 f'{self.style("*", fg="white")} {item}' for item in n
99 )
100
101 def pretty_dict_ok_error(self, n):
102 try:
103 return (self.OK,
104 text.indent(self.pretty(n['ok'])[1], 4))
105 except KeyError:
106 pass
107 return (self.ERROR,
108 text.indent(self.pretty(n['error'])[1], 4))
109
110 def say_chat(self, direction, title, body='', show_body=False):
111 if direction == '<-' and self.quiet:
112 return
113 dirstr = not self.quiet and f'{self.style(direction, fg="white", bold=True)} ' or ''
114 self.echo(f'{dirstr} {title}')
115 if body and show_body:
116 self.echo(body)
117
118
119 def handle_preload_options(f):
120 """Extract preload options and return a wrapped callable."""
121 def caller(ctx, *args, **kwargs):
122 app = ctx.obj.app
123
124 preload_options = [o.name for o in app.user_options.get('preload', [])]
125
126 if preload_options:
127 user_options = {
128 preload_option: kwargs[preload_option]
129 for preload_option in preload_options
130 }
131
132 user_preload_options.send(sender=f, app=app, options=user_options)
133
134 return f(ctx, *args, **kwargs)
135
136 return update_wrapper(caller, f)
137
138
139 class CeleryOption(click.Option):
140 """Customized option for Celery."""
141
142 def get_default(self, ctx, *args, **kwargs):
143 if self.default_value_from_context:
144 self.default = ctx.obj[self.default_value_from_context]
145 return super().get_default(ctx, *args, **kwargs)
146
147 def __init__(self, *args, **kwargs):
148 """Initialize a Celery option."""
149 self.help_group = kwargs.pop('help_group', None)
150 self.default_value_from_context = kwargs.pop('default_value_from_context', None)
151 super().__init__(*args, **kwargs)
152
153
154 class CeleryCommand(click.Command):
155 """Customized command for Celery."""
156
157 def format_options(self, ctx, formatter):
158 """Write all the options into the formatter if they exist."""
159 opts = OrderedDict()
160 for param in self.get_params(ctx):
161 rv = param.get_help_record(ctx)
162 if rv is not None:
163 if hasattr(param, 'help_group') and param.help_group:
164 opts.setdefault(str(param.help_group), []).append(rv)
165 else:
166 opts.setdefault('Options', []).append(rv)
167
168 for name, opts_group in opts.items():
169 with formatter.section(name):
170 formatter.write_dl(opts_group)
171
172
173 class CeleryDaemonCommand(CeleryCommand):
174 """Daemon commands."""
175
176 def __init__(self, *args, **kwargs):
177 """Initialize a Celery command with common daemon options."""
178 super().__init__(*args, **kwargs)
179 self.params.append(CeleryOption(('-f', '--logfile'), help_group="Daemonization Options"))
180 self.params.append(CeleryOption(('--pidfile',), help_group="Daemonization Options"))
181 self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))
182 self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))
183 self.params.append(CeleryOption(('--gid',), help_group="Daemonization Options"))
184 self.params.append(CeleryOption(('--umask',), help_group="Daemonization Options"))
185 self.params.append(CeleryOption(('--executable',), help_group="Daemonization Options"))
186
187
188 class CommaSeparatedList(ParamType):
189 """Comma separated list argument."""
190
191 name = "comma separated list"
192
193 def convert(self, value, param, ctx):
194 return text.str_to_list(value)
195
196
197 class JsonArray(ParamType):
198 """JSON formatted array argument."""
199
200 name = "json array"
201
202 def convert(self, value, param, ctx):
203 if isinstance(value, list):
204 return value
205
206 try:
207 v = json.loads(value)
208 except ValueError as e:
209 self.fail(str(e))
210
211 if not isinstance(v, list):
212 self.fail(f"{value} was not an array")
213
214 return v
215
216
217 class JsonObject(ParamType):
218 """JSON formatted object argument."""
219
220 name = "json object"
221
222 def convert(self, value, param, ctx):
223 if isinstance(value, dict):
224 return value
225
226 try:
227 v = json.loads(value)
228 except ValueError as e:
229 self.fail(str(e))
230
231 if not isinstance(v, dict):
232 self.fail(f"{value} was not an object")
233
234 return v
235
236
237 class ISO8601DateTime(ParamType):
238 """ISO 8601 Date Time argument."""
239
240 name = "iso-86091"
241
242 def convert(self, value, param, ctx):
243 try:
244 return maybe_iso8601(value)
245 except (TypeError, ValueError) as e:
246 self.fail(e)
247
248
249 class ISO8601DateTimeOrFloat(ParamType):
250 """ISO 8601 Date Time or float argument."""
251
252 name = "iso-86091 or float"
253
254 def convert(self, value, param, ctx):
255 try:
256 return float(value)
257 except (TypeError, ValueError):
258 pass
259
260 try:
261 return maybe_iso8601(value)
262 except (TypeError, ValueError) as e:
263 self.fail(e)
264
265
266 class LogLevel(click.Choice):
267 """Log level option."""
268
269 def __init__(self):
270 """Initialize the log level option with the relevant choices."""
271 super().__init__(('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', 'FATAL'))
272
273 def convert(self, value, param, ctx):
274 if isinstance(value, numbers.Integral):
275 return value
276
277 value = value.upper()
278 value = super().convert(value, param, ctx)
279 return mlevel(value)
280
281
282 JSON_ARRAY = JsonArray()
283 JSON_OBJECT = JsonObject()
284 ISO8601 = ISO8601DateTime()
285 ISO8601_OR_FLOAT = ISO8601DateTimeOrFloat()
286 LOG_LEVEL = LogLevel()
287 COMMA_SEPARATED_LIST = CommaSeparatedList()
288
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/celery/bin/base.py b/celery/bin/base.py
--- a/celery/bin/base.py
+++ b/celery/bin/base.py
@@ -179,7 +179,6 @@
self.params.append(CeleryOption(('-f', '--logfile'), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--pidfile',), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))
- self.params.append(CeleryOption(('--uid',), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--gid',), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--umask',), help_group="Daemonization Options"))
self.params.append(CeleryOption(('--executable',), help_group="Daemonization Options"))
| {"golden_diff": "diff --git a/celery/bin/base.py b/celery/bin/base.py\n--- a/celery/bin/base.py\n+++ b/celery/bin/base.py\n@@ -179,7 +179,6 @@\n self.params.append(CeleryOption(('-f', '--logfile'), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--pidfile',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--uid',), help_group=\"Daemonization Options\"))\n- self.params.append(CeleryOption(('--uid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--gid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--umask',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--executable',), help_group=\"Daemonization Options\"))\n", "issue": "Duplicate param 'uid' in CeleryDaemonCommand\n\r\n# Checklist\r\n\r\n- [ x] I have checked the [issues list](https://github.com/celery/celery/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22Category%3A+Documentation%22+)\r\n for similar or identical bug reports.\r\n- [x ] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Category%3A+Documentation%22)\r\n for existing proposed fixes.\r\n- [ x] I have checked the [commit log](https://github.com/celery/celery/commits/master)\r\n to find out if the bug was already fixed in the master branch.\r\n- [ x] I have included all related issues and possible duplicate issues in this issue\r\n (If there are none, check this box anyway).\r\n\r\n## Related Issues and Possible Duplicates\r\n\r\n#### Related Issues\r\n\r\n- None\r\n\r\n#### Possible Duplicates\r\n\r\n- None\r\n\r\n# Description\r\n\r\nCeleryDaemonCommand have duplicated CeleryOption for --uid which leads to all docs using it to have duplicated line (beat, multi, worker, etc).\r\n\r\n# Suggestions\r\n\r\nRemove the duplicate in CeleryDaemonCommand in celery/celery/bin/base.py \r\n\n", "before_files": [{"content": "\"\"\"Click customizations for Celery.\"\"\"\nimport json\nimport numbers\nfrom collections import OrderedDict\nfrom functools import update_wrapper\nfrom pprint import pformat\n\nimport click\nfrom click import ParamType\nfrom kombu.utils.objects import cached_property\n\nfrom celery._state import get_current_app\nfrom celery.signals import user_preload_options\nfrom celery.utils import text\nfrom celery.utils.log import mlevel\nfrom celery.utils.time import maybe_iso8601\n\ntry:\n from pygments import highlight\n from pygments.formatters import Terminal256Formatter\n from pygments.lexers import PythonLexer\nexcept ImportError:\n def highlight(s, *args, **kwargs):\n \"\"\"Place holder function in case pygments is missing.\"\"\"\n return s\n LEXER = None\n FORMATTER = None\nelse:\n LEXER = PythonLexer()\n FORMATTER = Terminal256Formatter()\n\n\nclass CLIContext:\n \"\"\"Context Object for the CLI.\"\"\"\n\n def __init__(self, app, no_color, workdir, quiet=False):\n \"\"\"Initialize the CLI context.\"\"\"\n self.app = app or get_current_app()\n self.no_color = no_color\n self.quiet = quiet\n self.workdir = workdir\n\n @cached_property\n def OK(self):\n return self.style(\"OK\", fg=\"green\", bold=True)\n\n @cached_property\n def ERROR(self):\n return self.style(\"ERROR\", fg=\"red\", bold=True)\n\n def style(self, message=None, **kwargs):\n if self.no_color:\n return message\n else:\n return click.style(message, **kwargs)\n\n def secho(self, message=None, **kwargs):\n if self.no_color:\n kwargs['color'] = False\n click.echo(message, **kwargs)\n else:\n click.secho(message, 
**kwargs)\n\n def echo(self, message=None, **kwargs):\n if self.no_color:\n kwargs['color'] = False\n click.echo(message, **kwargs)\n else:\n click.echo(message, **kwargs)\n\n def error(self, message=None, **kwargs):\n kwargs['err'] = True\n if self.no_color:\n kwargs['color'] = False\n click.echo(message, **kwargs)\n else:\n click.secho(message, **kwargs)\n\n def pretty(self, n):\n if isinstance(n, list):\n return self.OK, self.pretty_list(n)\n if isinstance(n, dict):\n if 'ok' in n or 'error' in n:\n return self.pretty_dict_ok_error(n)\n else:\n s = json.dumps(n, sort_keys=True, indent=4)\n if not self.no_color:\n s = highlight(s, LEXER, FORMATTER)\n return self.OK, s\n if isinstance(n, str):\n return self.OK, n\n return self.OK, pformat(n)\n\n def pretty_list(self, n):\n if not n:\n return '- empty -'\n return '\\n'.join(\n f'{self.style(\"*\", fg=\"white\")} {item}' for item in n\n )\n\n def pretty_dict_ok_error(self, n):\n try:\n return (self.OK,\n text.indent(self.pretty(n['ok'])[1], 4))\n except KeyError:\n pass\n return (self.ERROR,\n text.indent(self.pretty(n['error'])[1], 4))\n\n def say_chat(self, direction, title, body='', show_body=False):\n if direction == '<-' and self.quiet:\n return\n dirstr = not self.quiet and f'{self.style(direction, fg=\"white\", bold=True)} ' or ''\n self.echo(f'{dirstr} {title}')\n if body and show_body:\n self.echo(body)\n\n\ndef handle_preload_options(f):\n \"\"\"Extract preload options and return a wrapped callable.\"\"\"\n def caller(ctx, *args, **kwargs):\n app = ctx.obj.app\n\n preload_options = [o.name for o in app.user_options.get('preload', [])]\n\n if preload_options:\n user_options = {\n preload_option: kwargs[preload_option]\n for preload_option in preload_options\n }\n\n user_preload_options.send(sender=f, app=app, options=user_options)\n\n return f(ctx, *args, **kwargs)\n\n return update_wrapper(caller, f)\n\n\nclass CeleryOption(click.Option):\n \"\"\"Customized option for Celery.\"\"\"\n\n def get_default(self, ctx, *args, **kwargs):\n if self.default_value_from_context:\n self.default = ctx.obj[self.default_value_from_context]\n return super().get_default(ctx, *args, **kwargs)\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize a Celery option.\"\"\"\n self.help_group = kwargs.pop('help_group', None)\n self.default_value_from_context = kwargs.pop('default_value_from_context', None)\n super().__init__(*args, **kwargs)\n\n\nclass CeleryCommand(click.Command):\n \"\"\"Customized command for Celery.\"\"\"\n\n def format_options(self, ctx, formatter):\n \"\"\"Write all the options into the formatter if they exist.\"\"\"\n opts = OrderedDict()\n for param in self.get_params(ctx):\n rv = param.get_help_record(ctx)\n if rv is not None:\n if hasattr(param, 'help_group') and param.help_group:\n opts.setdefault(str(param.help_group), []).append(rv)\n else:\n opts.setdefault('Options', []).append(rv)\n\n for name, opts_group in opts.items():\n with formatter.section(name):\n formatter.write_dl(opts_group)\n\n\nclass CeleryDaemonCommand(CeleryCommand):\n \"\"\"Daemon commands.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize a Celery command with common daemon options.\"\"\"\n super().__init__(*args, **kwargs)\n self.params.append(CeleryOption(('-f', '--logfile'), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--pidfile',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--uid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--uid',), 
help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--gid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--umask',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--executable',), help_group=\"Daemonization Options\"))\n\n\nclass CommaSeparatedList(ParamType):\n \"\"\"Comma separated list argument.\"\"\"\n\n name = \"comma separated list\"\n\n def convert(self, value, param, ctx):\n return text.str_to_list(value)\n\n\nclass JsonArray(ParamType):\n \"\"\"JSON formatted array argument.\"\"\"\n\n name = \"json array\"\n\n def convert(self, value, param, ctx):\n if isinstance(value, list):\n return value\n\n try:\n v = json.loads(value)\n except ValueError as e:\n self.fail(str(e))\n\n if not isinstance(v, list):\n self.fail(f\"{value} was not an array\")\n\n return v\n\n\nclass JsonObject(ParamType):\n \"\"\"JSON formatted object argument.\"\"\"\n\n name = \"json object\"\n\n def convert(self, value, param, ctx):\n if isinstance(value, dict):\n return value\n\n try:\n v = json.loads(value)\n except ValueError as e:\n self.fail(str(e))\n\n if not isinstance(v, dict):\n self.fail(f\"{value} was not an object\")\n\n return v\n\n\nclass ISO8601DateTime(ParamType):\n \"\"\"ISO 8601 Date Time argument.\"\"\"\n\n name = \"iso-86091\"\n\n def convert(self, value, param, ctx):\n try:\n return maybe_iso8601(value)\n except (TypeError, ValueError) as e:\n self.fail(e)\n\n\nclass ISO8601DateTimeOrFloat(ParamType):\n \"\"\"ISO 8601 Date Time or float argument.\"\"\"\n\n name = \"iso-86091 or float\"\n\n def convert(self, value, param, ctx):\n try:\n return float(value)\n except (TypeError, ValueError):\n pass\n\n try:\n return maybe_iso8601(value)\n except (TypeError, ValueError) as e:\n self.fail(e)\n\n\nclass LogLevel(click.Choice):\n \"\"\"Log level option.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the log level option with the relevant choices.\"\"\"\n super().__init__(('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', 'FATAL'))\n\n def convert(self, value, param, ctx):\n if isinstance(value, numbers.Integral):\n return value\n\n value = value.upper()\n value = super().convert(value, param, ctx)\n return mlevel(value)\n\n\nJSON_ARRAY = JsonArray()\nJSON_OBJECT = JsonObject()\nISO8601 = ISO8601DateTime()\nISO8601_OR_FLOAT = ISO8601DateTimeOrFloat()\nLOG_LEVEL = LogLevel()\nCOMMA_SEPARATED_LIST = CommaSeparatedList()\n", "path": "celery/bin/base.py"}], "after_files": [{"content": "\"\"\"Click customizations for Celery.\"\"\"\nimport json\nimport numbers\nfrom collections import OrderedDict\nfrom functools import update_wrapper\nfrom pprint import pformat\n\nimport click\nfrom click import ParamType\nfrom kombu.utils.objects import cached_property\n\nfrom celery._state import get_current_app\nfrom celery.signals import user_preload_options\nfrom celery.utils import text\nfrom celery.utils.log import mlevel\nfrom celery.utils.time import maybe_iso8601\n\ntry:\n from pygments import highlight\n from pygments.formatters import Terminal256Formatter\n from pygments.lexers import PythonLexer\nexcept ImportError:\n def highlight(s, *args, **kwargs):\n \"\"\"Place holder function in case pygments is missing.\"\"\"\n return s\n LEXER = None\n FORMATTER = None\nelse:\n LEXER = PythonLexer()\n FORMATTER = Terminal256Formatter()\n\n\nclass CLIContext:\n \"\"\"Context Object for the CLI.\"\"\"\n\n def __init__(self, app, no_color, workdir, quiet=False):\n \"\"\"Initialize the CLI context.\"\"\"\n self.app = app or 
get_current_app()\n self.no_color = no_color\n self.quiet = quiet\n self.workdir = workdir\n\n @cached_property\n def OK(self):\n return self.style(\"OK\", fg=\"green\", bold=True)\n\n @cached_property\n def ERROR(self):\n return self.style(\"ERROR\", fg=\"red\", bold=True)\n\n def style(self, message=None, **kwargs):\n if self.no_color:\n return message\n else:\n return click.style(message, **kwargs)\n\n def secho(self, message=None, **kwargs):\n if self.no_color:\n kwargs['color'] = False\n click.echo(message, **kwargs)\n else:\n click.secho(message, **kwargs)\n\n def echo(self, message=None, **kwargs):\n if self.no_color:\n kwargs['color'] = False\n click.echo(message, **kwargs)\n else:\n click.echo(message, **kwargs)\n\n def error(self, message=None, **kwargs):\n kwargs['err'] = True\n if self.no_color:\n kwargs['color'] = False\n click.echo(message, **kwargs)\n else:\n click.secho(message, **kwargs)\n\n def pretty(self, n):\n if isinstance(n, list):\n return self.OK, self.pretty_list(n)\n if isinstance(n, dict):\n if 'ok' in n or 'error' in n:\n return self.pretty_dict_ok_error(n)\n else:\n s = json.dumps(n, sort_keys=True, indent=4)\n if not self.no_color:\n s = highlight(s, LEXER, FORMATTER)\n return self.OK, s\n if isinstance(n, str):\n return self.OK, n\n return self.OK, pformat(n)\n\n def pretty_list(self, n):\n if not n:\n return '- empty -'\n return '\\n'.join(\n f'{self.style(\"*\", fg=\"white\")} {item}' for item in n\n )\n\n def pretty_dict_ok_error(self, n):\n try:\n return (self.OK,\n text.indent(self.pretty(n['ok'])[1], 4))\n except KeyError:\n pass\n return (self.ERROR,\n text.indent(self.pretty(n['error'])[1], 4))\n\n def say_chat(self, direction, title, body='', show_body=False):\n if direction == '<-' and self.quiet:\n return\n dirstr = not self.quiet and f'{self.style(direction, fg=\"white\", bold=True)} ' or ''\n self.echo(f'{dirstr} {title}')\n if body and show_body:\n self.echo(body)\n\n\ndef handle_preload_options(f):\n \"\"\"Extract preload options and return a wrapped callable.\"\"\"\n def caller(ctx, *args, **kwargs):\n app = ctx.obj.app\n\n preload_options = [o.name for o in app.user_options.get('preload', [])]\n\n if preload_options:\n user_options = {\n preload_option: kwargs[preload_option]\n for preload_option in preload_options\n }\n\n user_preload_options.send(sender=f, app=app, options=user_options)\n\n return f(ctx, *args, **kwargs)\n\n return update_wrapper(caller, f)\n\n\nclass CeleryOption(click.Option):\n \"\"\"Customized option for Celery.\"\"\"\n\n def get_default(self, ctx, *args, **kwargs):\n if self.default_value_from_context:\n self.default = ctx.obj[self.default_value_from_context]\n return super().get_default(ctx, *args, **kwargs)\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize a Celery option.\"\"\"\n self.help_group = kwargs.pop('help_group', None)\n self.default_value_from_context = kwargs.pop('default_value_from_context', None)\n super().__init__(*args, **kwargs)\n\n\nclass CeleryCommand(click.Command):\n \"\"\"Customized command for Celery.\"\"\"\n\n def format_options(self, ctx, formatter):\n \"\"\"Write all the options into the formatter if they exist.\"\"\"\n opts = OrderedDict()\n for param in self.get_params(ctx):\n rv = param.get_help_record(ctx)\n if rv is not None:\n if hasattr(param, 'help_group') and param.help_group:\n opts.setdefault(str(param.help_group), []).append(rv)\n else:\n opts.setdefault('Options', []).append(rv)\n\n for name, opts_group in opts.items():\n with formatter.section(name):\n 
formatter.write_dl(opts_group)\n\n\nclass CeleryDaemonCommand(CeleryCommand):\n \"\"\"Daemon commands.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize a Celery command with common daemon options.\"\"\"\n super().__init__(*args, **kwargs)\n self.params.append(CeleryOption(('-f', '--logfile'), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--pidfile',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--uid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--gid',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--umask',), help_group=\"Daemonization Options\"))\n self.params.append(CeleryOption(('--executable',), help_group=\"Daemonization Options\"))\n\n\nclass CommaSeparatedList(ParamType):\n \"\"\"Comma separated list argument.\"\"\"\n\n name = \"comma separated list\"\n\n def convert(self, value, param, ctx):\n return text.str_to_list(value)\n\n\nclass JsonArray(ParamType):\n \"\"\"JSON formatted array argument.\"\"\"\n\n name = \"json array\"\n\n def convert(self, value, param, ctx):\n if isinstance(value, list):\n return value\n\n try:\n v = json.loads(value)\n except ValueError as e:\n self.fail(str(e))\n\n if not isinstance(v, list):\n self.fail(f\"{value} was not an array\")\n\n return v\n\n\nclass JsonObject(ParamType):\n \"\"\"JSON formatted object argument.\"\"\"\n\n name = \"json object\"\n\n def convert(self, value, param, ctx):\n if isinstance(value, dict):\n return value\n\n try:\n v = json.loads(value)\n except ValueError as e:\n self.fail(str(e))\n\n if not isinstance(v, dict):\n self.fail(f\"{value} was not an object\")\n\n return v\n\n\nclass ISO8601DateTime(ParamType):\n \"\"\"ISO 8601 Date Time argument.\"\"\"\n\n name = \"iso-86091\"\n\n def convert(self, value, param, ctx):\n try:\n return maybe_iso8601(value)\n except (TypeError, ValueError) as e:\n self.fail(e)\n\n\nclass ISO8601DateTimeOrFloat(ParamType):\n \"\"\"ISO 8601 Date Time or float argument.\"\"\"\n\n name = \"iso-86091 or float\"\n\n def convert(self, value, param, ctx):\n try:\n return float(value)\n except (TypeError, ValueError):\n pass\n\n try:\n return maybe_iso8601(value)\n except (TypeError, ValueError) as e:\n self.fail(e)\n\n\nclass LogLevel(click.Choice):\n \"\"\"Log level option.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the log level option with the relevant choices.\"\"\"\n super().__init__(('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', 'FATAL'))\n\n def convert(self, value, param, ctx):\n if isinstance(value, numbers.Integral):\n return value\n\n value = value.upper()\n value = super().convert(value, param, ctx)\n return mlevel(value)\n\n\nJSON_ARRAY = JsonArray()\nJSON_OBJECT = JsonObject()\nISO8601 = ISO8601DateTime()\nISO8601_OR_FLOAT = ISO8601DateTimeOrFloat()\nLOG_LEVEL = LogLevel()\nCOMMA_SEPARATED_LIST = CommaSeparatedList()\n", "path": "celery/bin/base.py"}]} | 3,289 | 195 |
gh_patches_debug_6113 | rasdani/github-patches | git_diff | hi-primus__optimus-782 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix simple typo: ouput -> output
# Issue Type
[x] Bug (Typo)
# Steps to Replicate
1. Examine optimus/ml/encoding.py.
2. Search for ouput.
# Expected Behaviour
1. Should read output (see the sketch below).
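
For reference, a sketch of how the corrected docstring in `string_to_index` would read once the typo is fixed (only the flagged word changes; the function body is unchanged and omitted here):

```python
def string_to_index(df, input_cols, output_cols=None, columns=None, **kargs):
    """
    Maps a string column of labels to an ML column of label indices. If the input column is
    numeric, we cast it to string and index the string values.
    :param df: Dataframe to be transformed
    :param input_cols: Columns to be indexed.
    :param output_cols:Column where the output is going to be saved
    :return: Dataframe with indexed columns.
    """
```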
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optimus/ml/encoding.py`
Content:
```
1 from pyspark.ml import feature, Pipeline
2 from pyspark.ml.feature import StringIndexer, IndexToString, OneHotEncoder, VectorAssembler, Normalizer
3
4 from optimus.helpers.check import is_dataframe, is_, is_str
5 from optimus.helpers.columns import parse_columns, name_col, get_output_cols
6 from optimus.helpers.constants import Actions
7 from optimus.helpers.raiseit import RaiseIt
8
9
10 def n_gram(df, input_col, n=2):
11 """
12 Converts the input array of strings inside of a Spark DF into an array of n-grams.
13 :param df: Pyspark dataframe to analyze
14 :param input_col: Column to analyzer.
15 :param n: number of elements per n-gram >=1.
16 :return: Spark DataFrame with n-grams calculated.
17 """
18
19 is_dataframe(df)
20
21 tokenizer = feature.Tokenizer().setInputCol(input_col) | feature.StopWordsRemover()
22 count = feature.CountVectorizer()
23 gram = feature.NGram(n=n) | feature.CountVectorizer()
24 tf = tokenizer | (count, gram) | feature.VectorAssembler()
25 tfidf = tf | feature.IDF().setOutputCol('features')
26
27 tfidf_model = tfidf.fit(df)
28 df_model = tfidf_model.transform(df)
29 return df_model, tfidf_model
30
31
32 def string_to_index(df, input_cols, output_cols=None, columns=None, **kargs):
33 """
34 Maps a string column of labels to an ML column of label indices. If the input column is
35 numeric, we cast it to string and index the string values.
36 :param df: Dataframe to be transformed
37 :param input_cols: Columns to be indexed.
38 :param output_cols:Column where the ouput is going to be saved
39 :return: Dataframe with indexed columns.
40 """
41 df_actual = df
42
43 if columns is None:
44 input_cols = parse_columns(df, input_cols)
45 if output_cols is None:
46 output_cols = [name_col(input_col, "index_to_string") for input_col in input_cols]
47 output_cols = get_output_cols(input_cols, output_cols)
48 else:
49 input_cols, output_cols = zip(*columns)
50
51 indexers = [StringIndexer(inputCol=input_col, outputCol=output_col, **kargs).fit(df) for input_col, output_col
52 in zip(list(set(input_cols)), list(set(output_cols)))]
53
54 pipeline = Pipeline(stages=indexers)
55 df = pipeline.fit(df).transform(df)
56
57 df = df.preserve_meta(df_actual, Actions.STRING_TO_INDEX.value, output_cols)
58
59 return df
60
61
62 def index_to_string(df, input_cols, output_col=None, **kargs):
63 """
64 Maps a column of indices back to a new column of corresponding string values. The index-string mapping is
65 either from the ML attributes of the input column, or from user-supplied labels (which take precedence over
66 ML attributes).
67 :param df: Dataframe to be transformed.
68 :param input_cols: Columns to be indexed.
69 :param output_col: Column where the output is going to be saved.
70 :return: Dataframe with indexed columns.
71 """
72
73 input_cols = parse_columns(df, input_cols)
74 if output_col is None:
75 output_col = name_col(input_cols, "index_to_string")
76
77 indexers = [IndexToString(inputCol=column, outputCol=output_col, **kargs) for column in
78 list(set(input_cols))]
79
80 pipeline = Pipeline(stages=indexers)
81 df = pipeline.fit(df).transform(df)
82
83 return df
84
85
86 def one_hot_encoder(df, input_cols, output_col=None, **kargs):
87 """
88 Maps a column of label indices to a column of binary vectors, with at most a single one-value.
89 :param df: Dataframe to be transformed.
90 :param input_cols: Columns to be encoded.
91 :param output_col: Column where the output is going to be saved.
92 :return: Dataframe with encoded columns.
93 """
94
95 input_cols = parse_columns(df, input_cols)
96
97 if output_col is None:
98 output_col = name_col(input_cols, "one_hot_encoder")
99
100 encode = [OneHotEncoder(inputCol=column, outputCol=output_col, **kargs) for column in
101 list(set(input_cols))]
102
103 pipeline = Pipeline(stages=encode)
104 df = pipeline.fit(df).transform(df)
105
106 return df
107
108
109 # TODO: Must we use the pipeline version?
110 def vector_assembler(df, input_cols, output_col=None):
111 """
112 Combines a given list of columns into a single vector column.
113 :param df: Dataframe to be transformed.
114 :param input_cols: Columns to be assembled.
115 :param output_col: Column where the output is going to be saved.
116 :return: Dataframe with assembled column.
117 """
118
119 input_cols = parse_columns(df, input_cols)
120
121 if output_col is None:
122 output_col = name_col(input_cols, "vector_assembler")
123
124 assembler = [VectorAssembler(inputCols=input_cols, outputCol=output_col)]
125
126 pipeline = Pipeline(stages=assembler)
127 df = pipeline.fit(df).transform(df)
128
129 return df
130
131
132 def normalizer(df, input_cols, output_col=None, p=2.0):
133 """
134 Transforms a dataset of Vector rows, normalizing each Vector to have unit norm. It takes parameter p, which
135 specifies the p-norm used for normalization. (p=2) by default.
136 :param df: Dataframe to be transformed
137 :param input_cols: Columns to be normalized.
138 :param output_col: Column where the output is going to be saved.
139 :param p: p-norm used for normalization.
140 :return: Dataframe with normalized columns.
141 """
142
143 # Check if columns argument must be a string or list datat ype:
144 if not is_(input_cols, (str, list)):
145 RaiseIt.type_error(input_cols, ["str", "list"])
146
147 if is_str(input_cols):
148 input_cols = [input_cols]
149
150 if is_(input_cols, (float, int)):
151 RaiseIt.type_error(input_cols, ["float", "int"])
152
153 # Try to create a vector
154 if len(input_cols) > 1:
155 df = df.cols.cast(input_cols, "vector")
156
157 if output_col is None:
158 output_col = name_col(input_cols, "normalizer")
159
160 # TODO https://developer.ibm.com/code/2018/04/10/improve-performance-ml-pipelines-wide-dataframes-apache-spark-2-3/
161 normal = [Normalizer(inputCol=col_name, outputCol=output_col, p=p) for col_name in
162 list(set(input_cols))]
163
164 pipeline = Pipeline(stages=normal)
165
166 df = pipeline.fit(df).transform(df)
167
168 return df
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optimus/ml/encoding.py b/optimus/ml/encoding.py
--- a/optimus/ml/encoding.py
+++ b/optimus/ml/encoding.py
@@ -35,7 +35,7 @@
numeric, we cast it to string and index the string values.
:param df: Dataframe to be transformed
:param input_cols: Columns to be indexed.
- :param output_cols:Column where the ouput is going to be saved
+ :param output_cols:Column where the output is going to be saved
:return: Dataframe with indexed columns.
"""
df_actual = df
| {"golden_diff": "diff --git a/optimus/ml/encoding.py b/optimus/ml/encoding.py\n--- a/optimus/ml/encoding.py\n+++ b/optimus/ml/encoding.py\n@@ -35,7 +35,7 @@\n numeric, we cast it to string and index the string values.\n :param df: Dataframe to be transformed\n :param input_cols: Columns to be indexed.\n- :param output_cols:Column where the ouput is going to be saved\n+ :param output_cols:Column where the output is going to be saved\n :return: Dataframe with indexed columns.\n \"\"\"\n df_actual = df\n", "issue": "Fix simple typo: ouput -> output\n# Issue Type\n\n[x] Bug (Typo)\n\n# Steps to Replicate\n\n1. Examine optimus/ml/encoding.py.\n2. Search for ouput.\n\n# Expected Behaviour\n\n1. Should read output.\n\n\n", "before_files": [{"content": "from pyspark.ml import feature, Pipeline\nfrom pyspark.ml.feature import StringIndexer, IndexToString, OneHotEncoder, VectorAssembler, Normalizer\n\nfrom optimus.helpers.check import is_dataframe, is_, is_str\nfrom optimus.helpers.columns import parse_columns, name_col, get_output_cols\nfrom optimus.helpers.constants import Actions\nfrom optimus.helpers.raiseit import RaiseIt\n\n\ndef n_gram(df, input_col, n=2):\n \"\"\"\n Converts the input array of strings inside of a Spark DF into an array of n-grams.\n :param df: Pyspark dataframe to analyze\n :param input_col: Column to analyzer.\n :param n: number of elements per n-gram >=1.\n :return: Spark DataFrame with n-grams calculated.\n \"\"\"\n\n is_dataframe(df)\n\n tokenizer = feature.Tokenizer().setInputCol(input_col) | feature.StopWordsRemover()\n count = feature.CountVectorizer()\n gram = feature.NGram(n=n) | feature.CountVectorizer()\n tf = tokenizer | (count, gram) | feature.VectorAssembler()\n tfidf = tf | feature.IDF().setOutputCol('features')\n\n tfidf_model = tfidf.fit(df)\n df_model = tfidf_model.transform(df)\n return df_model, tfidf_model\n\n\ndef string_to_index(df, input_cols, output_cols=None, columns=None, **kargs):\n \"\"\"\n Maps a string column of labels to an ML column of label indices. If the input column is\n numeric, we cast it to string and index the string values.\n :param df: Dataframe to be transformed\n :param input_cols: Columns to be indexed.\n :param output_cols:Column where the ouput is going to be saved\n :return: Dataframe with indexed columns.\n \"\"\"\n df_actual = df\n\n if columns is None:\n input_cols = parse_columns(df, input_cols)\n if output_cols is None:\n output_cols = [name_col(input_col, \"index_to_string\") for input_col in input_cols]\n output_cols = get_output_cols(input_cols, output_cols)\n else:\n input_cols, output_cols = zip(*columns)\n\n indexers = [StringIndexer(inputCol=input_col, outputCol=output_col, **kargs).fit(df) for input_col, output_col\n in zip(list(set(input_cols)), list(set(output_cols)))]\n\n pipeline = Pipeline(stages=indexers)\n df = pipeline.fit(df).transform(df)\n\n df = df.preserve_meta(df_actual, Actions.STRING_TO_INDEX.value, output_cols)\n\n return df\n\n\ndef index_to_string(df, input_cols, output_col=None, **kargs):\n \"\"\"\n Maps a column of indices back to a new column of corresponding string values. 
The index-string mapping is\n either from the ML attributes of the input column, or from user-supplied labels (which take precedence over\n ML attributes).\n :param df: Dataframe to be transformed.\n :param input_cols: Columns to be indexed.\n :param output_col: Column where the output is going to be saved.\n :return: Dataframe with indexed columns.\n \"\"\"\n\n input_cols = parse_columns(df, input_cols)\n if output_col is None:\n output_col = name_col(input_cols, \"index_to_string\")\n\n indexers = [IndexToString(inputCol=column, outputCol=output_col, **kargs) for column in\n list(set(input_cols))]\n\n pipeline = Pipeline(stages=indexers)\n df = pipeline.fit(df).transform(df)\n\n return df\n\n\ndef one_hot_encoder(df, input_cols, output_col=None, **kargs):\n \"\"\"\n Maps a column of label indices to a column of binary vectors, with at most a single one-value.\n :param df: Dataframe to be transformed.\n :param input_cols: Columns to be encoded.\n :param output_col: Column where the output is going to be saved.\n :return: Dataframe with encoded columns.\n \"\"\"\n\n input_cols = parse_columns(df, input_cols)\n\n if output_col is None:\n output_col = name_col(input_cols, \"one_hot_encoder\")\n\n encode = [OneHotEncoder(inputCol=column, outputCol=output_col, **kargs) for column in\n list(set(input_cols))]\n\n pipeline = Pipeline(stages=encode)\n df = pipeline.fit(df).transform(df)\n\n return df\n\n\n# TODO: Must we use the pipeline version?\ndef vector_assembler(df, input_cols, output_col=None):\n \"\"\"\n Combines a given list of columns into a single vector column.\n :param df: Dataframe to be transformed.\n :param input_cols: Columns to be assembled.\n :param output_col: Column where the output is going to be saved.\n :return: Dataframe with assembled column.\n \"\"\"\n\n input_cols = parse_columns(df, input_cols)\n\n if output_col is None:\n output_col = name_col(input_cols, \"vector_assembler\")\n\n assembler = [VectorAssembler(inputCols=input_cols, outputCol=output_col)]\n\n pipeline = Pipeline(stages=assembler)\n df = pipeline.fit(df).transform(df)\n\n return df\n\n\ndef normalizer(df, input_cols, output_col=None, p=2.0):\n \"\"\"\n Transforms a dataset of Vector rows, normalizing each Vector to have unit norm. It takes parameter p, which\n specifies the p-norm used for normalization. 
(p=2) by default.\n :param df: Dataframe to be transformed\n :param input_cols: Columns to be normalized.\n :param output_col: Column where the output is going to be saved.\n :param p: p-norm used for normalization.\n :return: Dataframe with normalized columns.\n \"\"\"\n\n # Check if columns argument must be a string or list datat ype:\n if not is_(input_cols, (str, list)):\n RaiseIt.type_error(input_cols, [\"str\", \"list\"])\n\n if is_str(input_cols):\n input_cols = [input_cols]\n\n if is_(input_cols, (float, int)):\n RaiseIt.type_error(input_cols, [\"float\", \"int\"])\n\n # Try to create a vector\n if len(input_cols) > 1:\n df = df.cols.cast(input_cols, \"vector\")\n\n if output_col is None:\n output_col = name_col(input_cols, \"normalizer\")\n\n # TODO https://developer.ibm.com/code/2018/04/10/improve-performance-ml-pipelines-wide-dataframes-apache-spark-2-3/\n normal = [Normalizer(inputCol=col_name, outputCol=output_col, p=p) for col_name in\n list(set(input_cols))]\n\n pipeline = Pipeline(stages=normal)\n\n df = pipeline.fit(df).transform(df)\n\n return df\n", "path": "optimus/ml/encoding.py"}], "after_files": [{"content": "from pyspark.ml import feature, Pipeline\nfrom pyspark.ml.feature import StringIndexer, IndexToString, OneHotEncoder, VectorAssembler, Normalizer\n\nfrom optimus.helpers.check import is_dataframe, is_, is_str\nfrom optimus.helpers.columns import parse_columns, name_col, get_output_cols\nfrom optimus.helpers.constants import Actions\nfrom optimus.helpers.raiseit import RaiseIt\n\n\ndef n_gram(df, input_col, n=2):\n \"\"\"\n Converts the input array of strings inside of a Spark DF into an array of n-grams.\n :param df: Pyspark dataframe to analyze\n :param input_col: Column to analyzer.\n :param n: number of elements per n-gram >=1.\n :return: Spark DataFrame with n-grams calculated.\n \"\"\"\n\n is_dataframe(df)\n\n tokenizer = feature.Tokenizer().setInputCol(input_col) | feature.StopWordsRemover()\n count = feature.CountVectorizer()\n gram = feature.NGram(n=n) | feature.CountVectorizer()\n tf = tokenizer | (count, gram) | feature.VectorAssembler()\n tfidf = tf | feature.IDF().setOutputCol('features')\n\n tfidf_model = tfidf.fit(df)\n df_model = tfidf_model.transform(df)\n return df_model, tfidf_model\n\n\ndef string_to_index(df, input_cols, output_cols=None, columns=None, **kargs):\n \"\"\"\n Maps a string column of labels to an ML column of label indices. If the input column is\n numeric, we cast it to string and index the string values.\n :param df: Dataframe to be transformed\n :param input_cols: Columns to be indexed.\n :param output_cols:Column where the output is going to be saved\n :return: Dataframe with indexed columns.\n \"\"\"\n df_actual = df\n\n if columns is None:\n input_cols = parse_columns(df, input_cols)\n if output_cols is None:\n output_cols = [name_col(input_col, \"index_to_string\") for input_col in input_cols]\n output_cols = get_output_cols(input_cols, output_cols)\n else:\n input_cols, output_cols = zip(*columns)\n\n indexers = [StringIndexer(inputCol=input_col, outputCol=output_col, **kargs).fit(df) for input_col, output_col\n in zip(list(set(input_cols)), list(set(output_cols)))]\n\n pipeline = Pipeline(stages=indexers)\n df = pipeline.fit(df).transform(df)\n\n df = df.preserve_meta(df_actual, Actions.STRING_TO_INDEX.value, output_cols)\n\n return df\n\n\ndef index_to_string(df, input_cols, output_col=None, **kargs):\n \"\"\"\n Maps a column of indices back to a new column of corresponding string values. 
The index-string mapping is\n either from the ML attributes of the input column, or from user-supplied labels (which take precedence over\n ML attributes).\n :param df: Dataframe to be transformed.\n :param input_cols: Columns to be indexed.\n :param output_col: Column where the output is going to be saved.\n :return: Dataframe with indexed columns.\n \"\"\"\n\n input_cols = parse_columns(df, input_cols)\n if output_col is None:\n output_col = name_col(input_cols, \"index_to_string\")\n\n indexers = [IndexToString(inputCol=column, outputCol=output_col, **kargs) for column in\n list(set(input_cols))]\n\n pipeline = Pipeline(stages=indexers)\n df = pipeline.fit(df).transform(df)\n\n return df\n\n\ndef one_hot_encoder(df, input_cols, output_col=None, **kargs):\n \"\"\"\n Maps a column of label indices to a column of binary vectors, with at most a single one-value.\n :param df: Dataframe to be transformed.\n :param input_cols: Columns to be encoded.\n :param output_col: Column where the output is going to be saved.\n :return: Dataframe with encoded columns.\n \"\"\"\n\n input_cols = parse_columns(df, input_cols)\n\n if output_col is None:\n output_col = name_col(input_cols, \"one_hot_encoder\")\n\n encode = [OneHotEncoder(inputCol=column, outputCol=output_col, **kargs) for column in\n list(set(input_cols))]\n\n pipeline = Pipeline(stages=encode)\n df = pipeline.fit(df).transform(df)\n\n return df\n\n\n# TODO: Must we use the pipeline version?\ndef vector_assembler(df, input_cols, output_col=None):\n \"\"\"\n Combines a given list of columns into a single vector column.\n :param df: Dataframe to be transformed.\n :param input_cols: Columns to be assembled.\n :param output_col: Column where the output is going to be saved.\n :return: Dataframe with assembled column.\n \"\"\"\n\n input_cols = parse_columns(df, input_cols)\n\n if output_col is None:\n output_col = name_col(input_cols, \"vector_assembler\")\n\n assembler = [VectorAssembler(inputCols=input_cols, outputCol=output_col)]\n\n pipeline = Pipeline(stages=assembler)\n df = pipeline.fit(df).transform(df)\n\n return df\n\n\ndef normalizer(df, input_cols, output_col=None, p=2.0):\n \"\"\"\n Transforms a dataset of Vector rows, normalizing each Vector to have unit norm. It takes parameter p, which\n specifies the p-norm used for normalization. (p=2) by default.\n :param df: Dataframe to be transformed\n :param input_cols: Columns to be normalized.\n :param output_col: Column where the output is going to be saved.\n :param p: p-norm used for normalization.\n :return: Dataframe with normalized columns.\n \"\"\"\n\n # Check if columns argument must be a string or list datat ype:\n if not is_(input_cols, (str, list)):\n RaiseIt.type_error(input_cols, [\"str\", \"list\"])\n\n if is_str(input_cols):\n input_cols = [input_cols]\n\n if is_(input_cols, (float, int)):\n RaiseIt.type_error(input_cols, [\"float\", \"int\"])\n\n # Try to create a vector\n if len(input_cols) > 1:\n df = df.cols.cast(input_cols, \"vector\")\n\n if output_col is None:\n output_col = name_col(input_cols, \"normalizer\")\n\n # TODO https://developer.ibm.com/code/2018/04/10/improve-performance-ml-pipelines-wide-dataframes-apache-spark-2-3/\n normal = [Normalizer(inputCol=col_name, outputCol=output_col, p=p) for col_name in\n list(set(input_cols))]\n\n pipeline = Pipeline(stages=normal)\n\n df = pipeline.fit(df).transform(df)\n\n return df\n", "path": "optimus/ml/encoding.py"}]} | 2,200 | 141 |
gh_patches_debug_8649 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-390 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GaNDLF is not running on macOS
**Describe the bug**
Currently, we are requiring `torch==1.8.2`:
https://github.com/CBICA/GaNDLF/blob/e8f922266ec7af1c3fac36439290d22a5e63866d/setup.py#L56
This pinned version is not supported by PyTorch on macOS [[ref](https://pytorch.org/get-started/locally/)].
**To Reproduce**
Steps to reproduce the behavior: https://cbica.github.io/GaNDLF/setup
**Expected behavior**
The only reason for us to drop support of an OS should be if something major is breaking.
**Screenshots**
N.A.
**GaNDLF Version**
<!-- Put the output of the following command:
python -c 'import GANDLF as g;print(g.__version__)'
-->
0.0.14-dev
**Desktop (please complete the following information):**
- OS: macOS
- Version: N.A.
**Additional context**
Reported by @Sofia-Mouchtaris
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import os
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 with open("README.md") as readme_file:
13 readme = readme_file.read()
14
15
16 def git_submodule_update():
17 ## submodule update
18 os.system("git submodule update --init --recursive")
19
20
21 class CustomInstallCommand(install):
22 def run(self):
23 install.run(self)
24 git_submodule_update()
25
26
27 class CustomDevelopCommand(develop):
28 def run(self):
29 develop.run(self)
30 git_submodule_update()
31
32
33 class CustomEggInfoCommand(egg_info):
34 def run(self):
35 egg_info.run(self)
36 git_submodule_update()
37
38
39 # read version.py
40 import sys, re
41
42 try:
43 filepath = "GANDLF/version.py"
44 version_file = open(filepath)
45 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
46
47 except Exception as error:
48 __version__ = "0.0.1"
49 sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
50
51 requirements = [
52 "black",
53 "numpy==1.21.0",
54 "scipy",
55 "SimpleITK!=2.0.*",
56 "torch==1.8.2",
57 "torchvision",
58 "tqdm",
59 "torchio==0.18.57",
60 "pandas",
61 "pylint",
62 "scikit-learn>=0.23.2",
63 "pickle5>=0.0.11",
64 "setuptools",
65 "seaborn",
66 "pyyaml",
67 "tiffslide",
68 "scikit-image",
69 "matplotlib",
70 "requests>=2.25.0",
71 "pyvips",
72 "pytest",
73 "coverage",
74 "pytest-cov",
75 "psutil",
76 "medcam",
77 "opencv-python",
78 "torchmetrics",
79 "OpenPatchMiner==0.1.6",
80 "zarr==2.10.3",
81 "pydicom",
82 "onnx",
83 ]
84
85 setup(
86 name="GANDLF",
87 version=__version__,
88 author="Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun Güley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos", # alphabetical order
89 author_email="[email protected]",
90 python_requires=">=3.7",
91 packages=find_packages(),
92 cmdclass={ # this ensures git_submodule_update is called during install
93 "install": CustomInstallCommand,
94 "develop": CustomDevelopCommand,
95 "egg_info": CustomEggInfoCommand,
96 },
97 scripts=[
98 "gandlf_run",
99 "gandlf_constructCSV",
100 "gandlf_collectStats",
101 "gandlf_patchMiner",
102 "gandlf_preprocess",
103 "gandlf_anonymizer",
104 "gandlf_verifyInstall",
105 ],
106 classifiers=[
107 "Development Status :: 3 - Alpha",
108 "Intended Audience :: Science/Research",
109 "License :: OSI Approved :: BSD License",
110 "Natural Language :: English",
111 "Operating System :: OS Independent",
112 "Programming Language :: Python :: 3.7",
113 "Programming Language :: Python :: 3.8",
114 "Programming Language :: Python :: 3.9",
115 "Topic :: Scientific/Engineering :: Medical Science Apps",
116 ],
117 description=(
118 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
119 ),
120 install_requires=requirements,
121 license="BSD-3-Clause License",
122 long_description=readme,
123 long_description_content_type="text/markdown",
124 include_package_data=True,
125 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging",
126 zip_safe=False,
127 )
128
129 ## windows vips installation
130 if os.name == "nt": # proceed for windows
131 from pathlib import Path
132
133 # download and extract if main dll is absent
134 if not Path("./vips/vips-dev-8.10/bin/libvips-42.dll").exists():
135 print("Downloading and extracting VIPS for Windows")
136 url = "https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip"
137 zip_to_extract = "./vips.zip"
138 import urllib.request, zipfile
139
140 urllib.request.urlretrieve(url, zip_to_extract)
141 z = zipfile.ZipFile(zip_to_extract)
142 z.extractall("./vips")
143 z.close()
144 os.remove(zip_to_extract)
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -53,7 +53,6 @@
"numpy==1.21.0",
"scipy",
"SimpleITK!=2.0.*",
- "torch==1.8.2",
"torchvision",
"tqdm",
"torchio==0.18.57",
@@ -82,6 +81,12 @@
"onnx",
]
+# pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389
+if sys.platform == "darwin":
+ requirements.append("torch==1.9.0")
+else:
+ requirements.append("torch==1.8.2")
+
setup(
name="GANDLF",
version=__version__,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -53,7 +53,6 @@\n \"numpy==1.21.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n- \"torch==1.8.2\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.57\",\n@@ -82,6 +81,12 @@\n \"onnx\",\n ]\n \n+# pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389\n+if sys.platform == \"darwin\":\n+ requirements.append(\"torch==1.9.0\")\n+else:\n+ requirements.append(\"torch==1.8.2\")\n+\n setup(\n name=\"GANDLF\",\n version=__version__,\n", "issue": "GaNDLF is not running on macOS\n**Describe the bug**\r\nCurrently, we are requiring `torch==1.8.2`:\r\nhttps://github.com/CBICA/GaNDLF/blob/e8f922266ec7af1c3fac36439290d22a5e63866d/setup.py#L56\r\nWhich is not supported by PyTorch on macOS[[ref](https://pytorch.org/get-started/locally/)].\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior: https://cbica.github.io/GaNDLF/setup\r\n\r\n**Expected behavior**\r\nThe only reason for us to drop support of an OS should be if something major is breaking.\r\n\r\n**Screenshots**\r\nN.A.\r\n\r\n**GaNDLF Version**\r\n<!-- Put the output of the following command:\r\npython -c 'import GANDLF as g;print(g.__version__)'\r\n-->\r\n0.0.14-dev\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: macOS\r\n - Version: N.A.\r\n\r\n**Additional context**\r\nReported by @Sofia-Mouchtaris \r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\nwith open(\"README.md\") as readme_file:\n readme = readme_file.read()\n\n\ndef git_submodule_update():\n ## submodule update\n os.system(\"git submodule update --init --recursive\")\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n git_submodule_update()\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n git_submodule_update()\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n git_submodule_update()\n\n\n# read version.py\nimport sys, re\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.21.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"torch==1.8.2\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.57\",\n \"pandas\",\n \"pylint\",\n \"scikit-learn>=0.23.2\",\n \"pickle5>=0.0.11\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"scikit-image\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pyvips\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics\",\n \"OpenPatchMiner==0.1.6\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n]\n\nsetup(\n name=\"GANDLF\",\n version=__version__,\n author=\"Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun G\u00fcley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos\", # alphabetical order\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n 
packages=find_packages(),\n cmdclass={ # this ensures git_submodule_update is called during install\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"BSD-3-Clause License\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging\",\n zip_safe=False,\n)\n\n## windows vips installation\nif os.name == \"nt\": # proceed for windows\n from pathlib import Path\n\n # download and extract if main dll is absent\n if not Path(\"./vips/vips-dev-8.10/bin/libvips-42.dll\").exists():\n print(\"Downloading and extracting VIPS for Windows\")\n url = \"https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip\"\n zip_to_extract = \"./vips.zip\"\n import urllib.request, zipfile\n\n urllib.request.urlretrieve(url, zip_to_extract)\n z = zipfile.ZipFile(zip_to_extract)\n z.extractall(\"./vips\")\n z.close()\n os.remove(zip_to_extract)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\nwith open(\"README.md\") as readme_file:\n readme = readme_file.read()\n\n\ndef git_submodule_update():\n ## submodule update\n os.system(\"git submodule update --init --recursive\")\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n git_submodule_update()\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n git_submodule_update()\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n git_submodule_update()\n\n\n# read version.py\nimport sys, re\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.21.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.57\",\n \"pandas\",\n \"pylint\",\n \"scikit-learn>=0.23.2\",\n \"pickle5>=0.0.11\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"scikit-image\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pyvips\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics\",\n \"OpenPatchMiner==0.1.6\",\n 
\"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n]\n\n# pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389\nif sys.platform == \"darwin\":\n requirements.append(\"torch==1.9.0\")\nelse:\n requirements.append(\"torch==1.8.2\")\n\nsetup(\n name=\"GANDLF\",\n version=__version__,\n author=\"Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun G\u00fcley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos\", # alphabetical order\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n packages=find_packages(),\n cmdclass={ # this ensures git_submodule_update is called during install\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"BSD-3-Clause License\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging\",\n zip_safe=False,\n)\n\n## windows vips installation\nif os.name == \"nt\": # proceed for windows\n from pathlib import Path\n\n # download and extract if main dll is absent\n if not Path(\"./vips/vips-dev-8.10/bin/libvips-42.dll\").exists():\n print(\"Downloading and extracting VIPS for Windows\")\n url = \"https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip\"\n zip_to_extract = \"./vips.zip\"\n import urllib.request, zipfile\n\n urllib.request.urlretrieve(url, zip_to_extract)\n z = zipfile.ZipFile(zip_to_extract)\n z.extractall(\"./vips\")\n z.close()\n os.remove(zip_to_extract)\n", "path": "setup.py"}]} | 1,927 | 194 |
gh_patches_debug_38715 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-75 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'NPlusOneCallSet' object has no attribute 'should_capture_bracktrace'
When trying to upgrade to 1.1.8, I ran into this exception:
```
File "celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "myapp/celery/worker.py", line 49, in __call__
return super().__call__(*args, **kwargs)
File "celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "myapp/celery/tasks.py", line 129, in sync_backend_wagers
retry = sync.race_wagers_sync(race_id, backend)
File "myapp/libs/sync.py", line 52, in race_wagers_sync
race = ents.Race.query.get(race_id)
File "sqlalchemy/orm/query.py", line 871, in get
ident, loading.load_on_ident)
File "sqlalchemy/orm/query.py", line 905, in _get_impl
return fallback_fn(self, key)
File "sqlalchemy/orm/loading.py", line 231, in load_on_ident
return q.one()
File "sqlalchemy/orm/query.py", line 2837, in one
ret = self.one_or_none()
File "sqlalchemy/orm/query.py", line 2807, in one_or_none
ret = list(self)
File "sqlalchemy/orm/query.py", line 2878, in __iter__
return self._execute_and_instances(context)
File "sqlalchemy/orm/query.py", line 2901, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "sqlalchemy/engine/base.py", line 1207, in _execute_context
context.executemany)
File "sqlalchemy/event/attr.py", line 256, in __call__
fn(*args, **kw)
File "scout_apm/sqlalchemy/__init__.py", line 16, in after_cursor_execute
if tr.callset.should_capture_bracktrace(statement) is True:
AttributeError: 'NPlusOneCallSet' object has no attribute 'should_capture_bracktrace'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/scout_apm/sqlalchemy/__init__.py`
Content:
```
1 from scout_apm.core.tracked_request import TrackedRequest
2
3 from sqlalchemy import event
4
5 def instrument_sqlalchemy(engine):
6 def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
7 tr = TrackedRequest.instance()
8 span = tr.start_span(operation='SQL/Query')
9 span.tag('db.statement', statement)
10
11 def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):
12 tr = TrackedRequest.instance()
13 span = tr.current_span()
14 if span is not None:
15 tr.callset.update(statement, 1, span.duration())
16 if tr.callset.should_capture_bracktrace(statement) is True:
17 span.capture_backtrace()
18 tr.stop_span()
19
20 if getattr(engine, "_scout_instrumented", False) != True:
21 event.listen(engine, 'before_cursor_execute', before_cursor_execute)
22 event.listen(engine, 'after_cursor_execute', after_cursor_execute)
23 setattr(engine, "_scout_instrumented", True)
24
```
Path: `setup.py`
Content:
```
1 from glob import glob
2 from os.path import basename, splitext
3
4 from setuptools import find_packages, setup
5
6 setup(name='scout_apm',
7 version='1.1.8',
8 description='Scout Application Performance Monitoring Agent',
9 long_description='Scout Application Performance Monitoring Agent',
10 url='https://github.com/scoutapp/scout_apm_python',
11 author='Scout',
12 author_email='[email protected]',
13 license='MIT',
14 zip_safe=False,
15 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4',
16 packages=find_packages('src'),
17 package_dir={'': 'src'},
18 py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],
19 entry_points={
20 'console_scripts': [
21 'core-agent-manager = scout_apm.core.cli.core_agent_manager:main'
22 ]
23 },
24 install_requires=['psutil', 'PyYAML', 'requests'],
25 keywords='apm performance monitoring development',
26 classifiers=[
27 'Development Status :: 3 - Alpha',
28 'Intended Audience :: Developers',
29 'Topic :: System :: Monitoring',
30 'License :: Other/Proprietary License',
31 'Operating System :: MacOS',
32 'Operating System :: POSIX',
33 'Operating System :: POSIX :: Linux',
34 'Programming Language :: Python :: 2',
35 'Programming Language :: Python :: 2.7',
36 'Programming Language :: Python :: 3',
37 'Programming Language :: Python :: 3.4',
38 'Programming Language :: Python :: 3.5',
39 'Programming Language :: Python :: 3.6',
40 'Programming Language :: Python :: 3.7',
41 ])
42
```
Path: `src/scout_apm/core/tracked_request.py`
Content:
```
1 from __future__ import absolute_import
2
3 import logging
4 from datetime import datetime
5 from uuid import uuid4
6
7 from scout_apm.core.samplers import Samplers
8 from scout_apm.core.request_manager import RequestManager
9 from scout_apm.core.thread_local import ThreadLocalSingleton
10 from scout_apm.core.n_plus_one_call_set import NPlusOneCallSet
11 import scout_apm.core.backtrace
12
13 # Logging
14 logger = logging.getLogger(__name__)
15
16
17 class TrackedRequest(ThreadLocalSingleton):
18 """
19 This is a container which keeps track of all module instances for a single
20 request. For convenience they are made available as attributes based on
21 their keyname
22 """
23 def __init__(self, *args, **kwargs):
24 self.req_id = 'req-' + str(uuid4())
25 self.start_time = kwargs.get('start_time', datetime.utcnow())
26 self.end_time = kwargs.get('end_time', None)
27 self.active_spans = kwargs.get('active_spans', [])
28 self.complete_spans = kwargs.get('complete_spans', [])
29 self.tags = kwargs.get('tags', {})
30 self.real_request = kwargs.get('real_request', False)
31 self.callset = NPlusOneCallSet()
32 logger.debug('Starting request: %s', self.req_id)
33
34 def mark_real_request(self):
35 self.real_request = True
36
37 def is_real_request(self):
38 return self.real_request
39
40 def tag(self, key, value):
41 if key in self.tags:
42 logger.debug('Overwriting previously set tag for request %s: %s' % self.req_id, key)
43 self.tags[key] = value
44
45 def start_span(self, operation=None):
46 maybe_parent = self.current_span()
47
48 if maybe_parent is not None:
49 parent_id = maybe_parent.span_id
50 else:
51 parent_id = None
52
53 new_span = Span(
54 request_id=self.req_id,
55 operation=operation,
56 parent=parent_id)
57 self.active_spans.append(new_span)
58 return new_span
59
60 def stop_span(self):
61 stopping_span = None
62 try:
63 stopping_span = self.active_spans.pop()
64 except IndexError as e:
65 logger.debug('Exception when stopping span: %s' % repr(e))
66
67 if stopping_span is not None:
68 stopping_span.stop()
69 stopping_span.annotate()
70 self.complete_spans.append(stopping_span)
71
72 if len(self.active_spans) == 0:
73 self.finish()
74
75 def current_span(self):
76 if len(self.active_spans) > 0:
77 return self.active_spans[-1]
78 else:
79 return None
80
81 # Request is done, release any info we have about it.
82 def finish(self):
83 logger.debug('Stopping request: %s', self.req_id)
84 if self.end_time is None:
85 self.end_time = datetime.utcnow()
86 RequestManager.instance().add_request(self)
87 if self.is_real_request():
88 Samplers.ensure_running()
89
90 # This can fail if the Tracked Request was created directly, not through instance()
91 try:
92 self.release()
93 except:
94 pass
95
96
97 class Span:
98 def __init__(self, *args, **kwargs):
99 self.span_id = kwargs.get('span_id', 'span-' + str(uuid4()))
100 self.start_time = kwargs.get('start_time', datetime.utcnow())
101 self.end_time = kwargs.get('end_time', None)
102 self.request_id = kwargs.get('request_id', None)
103 self.operation = kwargs.get('operation', None)
104 self.parent = kwargs.get('parent', None)
105 self.tags = kwargs.get('tags', {})
106
107 def stop(self):
108 self.end_time = datetime.utcnow()
109
110 def tag(self, key, value):
111 if key in self.tags:
112 logger.debug('Overwriting previously set tag for span %s: %s' % self.span_id, key)
113 self.tags[key] = value
114
115 # In seconds
116 def duration(self):
117 if self.end_time is not None:
118 return (self.end_time - self.start_time).total_seconds()
119 else:
120 # Current, running duration
121 return (datetime.utcnow() - self.start_time).total_seconds()
122
123 def duration_in_ms(self):
124 return self.duration() / 1000
125
126 # Add any interesting annotations to the span. Assumes that we are in the
127 # process of stopping this span.
128 def annotate(self):
129 slow_threshold = 0.500
130 if self.duration() > slow_threshold:
131 self.capture_backtrace()
132
133 def capture_backtrace(self):
134 stack = scout_apm.core.backtrace.capture()
135 self.tag('stack', stack)
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -4,7 +4,7 @@
from setuptools import find_packages, setup
setup(name='scout_apm',
- version='1.1.8',
+ version='1.1.9',
description='Scout Application Performance Monitoring Agent',
long_description='Scout Application Performance Monitoring Agent',
url='https://github.com/scoutapp/scout_apm_python',
@@ -12,7 +12,7 @@
author_email='[email protected]',
license='MIT',
zip_safe=False,
- python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4',
+ python_requires='>=3.4, <4',
packages=find_packages('src'),
package_dir={'': 'src'},
py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],
@@ -31,8 +31,6 @@
'Operating System :: MacOS',
'Operating System :: POSIX',
'Operating System :: POSIX :: Linux',
- 'Programming Language :: Python :: 2',
- 'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
diff --git a/src/scout_apm/core/tracked_request.py b/src/scout_apm/core/tracked_request.py
--- a/src/scout_apm/core/tracked_request.py
+++ b/src/scout_apm/core/tracked_request.py
@@ -39,7 +39,7 @@
def tag(self, key, value):
if key in self.tags:
- logger.debug('Overwriting previously set tag for request %s: %s' % self.req_id, key)
+ logger.debug('Overwriting previously set tag for request %s: %s' % (self.req_id, key))
self.tags[key] = value
def start_span(self, operation=None):
@@ -109,7 +109,7 @@
def tag(self, key, value):
if key in self.tags:
- logger.debug('Overwriting previously set tag for span %s: %s' % self.span_id, key)
+ logger.debug('Overwriting previously set tag for span %s: %s' % (self.span_id, key))
self.tags[key] = value
# In seconds
diff --git a/src/scout_apm/sqlalchemy/__init__.py b/src/scout_apm/sqlalchemy/__init__.py
--- a/src/scout_apm/sqlalchemy/__init__.py
+++ b/src/scout_apm/sqlalchemy/__init__.py
@@ -13,7 +13,7 @@
span = tr.current_span()
if span is not None:
tr.callset.update(statement, 1, span.duration())
- if tr.callset.should_capture_bracktrace(statement) is True:
+ if tr.callset.should_capture_backtrace(statement) is True:
span.capture_backtrace()
tr.stop_span()
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -4,7 +4,7 @@\n from setuptools import find_packages, setup\n \n setup(name='scout_apm',\n- version='1.1.8',\n+ version='1.1.9',\n description='Scout Application Performance Monitoring Agent',\n long_description='Scout Application Performance Monitoring Agent',\n url='https://github.com/scoutapp/scout_apm_python',\n@@ -12,7 +12,7 @@\n author_email='[email protected]',\n license='MIT',\n zip_safe=False,\n- python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4',\n+ python_requires='>=3.4, <4',\n packages=find_packages('src'),\n package_dir={'': 'src'},\n py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],\n@@ -31,8 +31,6 @@\n 'Operating System :: MacOS',\n 'Operating System :: POSIX',\n 'Operating System :: POSIX :: Linux',\n- 'Programming Language :: Python :: 2',\n- 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\ndiff --git a/src/scout_apm/core/tracked_request.py b/src/scout_apm/core/tracked_request.py\n--- a/src/scout_apm/core/tracked_request.py\n+++ b/src/scout_apm/core/tracked_request.py\n@@ -39,7 +39,7 @@\n \n def tag(self, key, value):\n if key in self.tags:\n- logger.debug('Overwriting previously set tag for request %s: %s' % self.req_id, key)\n+ logger.debug('Overwriting previously set tag for request %s: %s' % (self.req_id, key))\n self.tags[key] = value\n \n def start_span(self, operation=None):\n@@ -109,7 +109,7 @@\n \n def tag(self, key, value):\n if key in self.tags:\n- logger.debug('Overwriting previously set tag for span %s: %s' % self.span_id, key)\n+ logger.debug('Overwriting previously set tag for span %s: %s' % (self.span_id, key))\n self.tags[key] = value\n \n # In seconds\ndiff --git a/src/scout_apm/sqlalchemy/__init__.py b/src/scout_apm/sqlalchemy/__init__.py\n--- a/src/scout_apm/sqlalchemy/__init__.py\n+++ b/src/scout_apm/sqlalchemy/__init__.py\n@@ -13,7 +13,7 @@\n span = tr.current_span()\n if span is not None:\n tr.callset.update(statement, 1, span.duration())\n- if tr.callset.should_capture_bracktrace(statement) is True:\n+ if tr.callset.should_capture_backtrace(statement) is True:\n span.capture_backtrace()\n tr.stop_span()\n", "issue": "AttributeError: 'NPlusOneCallSet' object has no attribute 'should_capture_bracktrace'\nWhen trying to upgrade to 1.1.8, I ran into this exception:\r\n\r\n```\r\n File \"celery/app/trace.py\", line 374, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n File \"myapp/celery/worker.py\", line 49, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"celery/app/trace.py\", line 629, in __protected_call__\r\n return self.run(*args, **kwargs)\r\n File \"myapp/celery/tasks.py\", line 129, in sync_backend_wagers\r\n retry = sync.race_wagers_sync(race_id, backend)\r\n File \"myapp/libs/sync.py\", line 52, in race_wagers_sync\r\n race = ents.Race.query.get(race_id)\r\n File \"sqlalchemy/orm/query.py\", line 871, in get\r\n ident, loading.load_on_ident)\r\n File \"sqlalchemy/orm/query.py\", line 905, in _get_impl\r\n return fallback_fn(self, key)\r\n File \"sqlalchemy/orm/loading.py\", line 231, in load_on_ident\r\n return q.one()\r\n File \"sqlalchemy/orm/query.py\", line 2837, in one\r\n ret = self.one_or_none()\r\n File \"sqlalchemy/orm/query.py\", line 2807, in one_or_none\r\n ret = list(self)\r\n File \"sqlalchemy/orm/query.py\", line 2878, in __iter__\r\n return 
self._execute_and_instances(context)\r\n File \"sqlalchemy/orm/query.py\", line 2901, in _execute_and_instances\r\n result = conn.execute(querycontext.statement, self._params)\r\n File \"sqlalchemy/engine/base.py\", line 948, in execute\r\n return meth(self, multiparams, params)\r\n File \"sqlalchemy/sql/elements.py\", line 269, in _execute_on_connection\r\n return connection._execute_clauseelement(self, multiparams, params)\r\n File \"sqlalchemy/engine/base.py\", line 1060, in _execute_clauseelement\r\n compiled_sql, distilled_params\r\n File \"sqlalchemy/engine/base.py\", line 1207, in _execute_context\r\n context.executemany)\r\n File \"sqlalchemy/event/attr.py\", line 256, in __call__\r\n fn(*args, **kw)\r\n File \"scout_apm/sqlalchemy/__init__.py\", line 16, in after_cursor_execute\r\n if tr.callset.should_capture_bracktrace(statement) is True:\r\n AttributeError: 'NPlusOneCallSet' object has no attribute 'should_capture_bracktrace'\r\n```\n", "before_files": [{"content": "from scout_apm.core.tracked_request import TrackedRequest\n\nfrom sqlalchemy import event\n\ndef instrument_sqlalchemy(engine):\n def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tr = TrackedRequest.instance()\n span = tr.start_span(operation='SQL/Query')\n span.tag('db.statement', statement)\n\n def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tr = TrackedRequest.instance()\n span = tr.current_span()\n if span is not None:\n tr.callset.update(statement, 1, span.duration())\n if tr.callset.should_capture_bracktrace(statement) is True:\n span.capture_backtrace()\n tr.stop_span()\n\n if getattr(engine, \"_scout_instrumented\", False) != True:\n event.listen(engine, 'before_cursor_execute', before_cursor_execute)\n event.listen(engine, 'after_cursor_execute', after_cursor_execute)\n setattr(engine, \"_scout_instrumented\", True)\n", "path": "src/scout_apm/sqlalchemy/__init__.py"}, {"content": "from glob import glob\nfrom os.path import basename, splitext\n\nfrom setuptools import find_packages, setup\n\nsetup(name='scout_apm',\n version='1.1.8',\n description='Scout Application Performance Monitoring Agent',\n long_description='Scout Application Performance Monitoring Agent',\n url='https://github.com/scoutapp/scout_apm_python',\n author='Scout',\n author_email='[email protected]',\n license='MIT',\n zip_safe=False,\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4',\n packages=find_packages('src'),\n package_dir={'': 'src'},\n py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],\n entry_points={\n 'console_scripts': [\n 'core-agent-manager = scout_apm.core.cli.core_agent_manager:main'\n ]\n },\n install_requires=['psutil', 'PyYAML', 'requests'],\n keywords='apm performance monitoring development',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'Topic :: System :: Monitoring',\n 'License :: Other/Proprietary License',\n 'Operating System :: MacOS',\n 'Operating System :: POSIX',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ])\n", "path": "setup.py"}, {"content": "from __future__ import absolute_import\n\nimport logging\nfrom datetime import datetime\nfrom uuid import 
uuid4\n\nfrom scout_apm.core.samplers import Samplers\nfrom scout_apm.core.request_manager import RequestManager\nfrom scout_apm.core.thread_local import ThreadLocalSingleton\nfrom scout_apm.core.n_plus_one_call_set import NPlusOneCallSet\nimport scout_apm.core.backtrace\n\n# Logging\nlogger = logging.getLogger(__name__)\n\n\nclass TrackedRequest(ThreadLocalSingleton):\n \"\"\"\n This is a container which keeps track of all module instances for a single\n request. For convenience they are made available as attributes based on\n their keyname\n \"\"\"\n def __init__(self, *args, **kwargs):\n self.req_id = 'req-' + str(uuid4())\n self.start_time = kwargs.get('start_time', datetime.utcnow())\n self.end_time = kwargs.get('end_time', None)\n self.active_spans = kwargs.get('active_spans', [])\n self.complete_spans = kwargs.get('complete_spans', [])\n self.tags = kwargs.get('tags', {})\n self.real_request = kwargs.get('real_request', False)\n self.callset = NPlusOneCallSet()\n logger.debug('Starting request: %s', self.req_id)\n\n def mark_real_request(self):\n self.real_request = True\n\n def is_real_request(self):\n return self.real_request\n\n def tag(self, key, value):\n if key in self.tags:\n logger.debug('Overwriting previously set tag for request %s: %s' % self.req_id, key)\n self.tags[key] = value\n\n def start_span(self, operation=None):\n maybe_parent = self.current_span()\n\n if maybe_parent is not None:\n parent_id = maybe_parent.span_id\n else:\n parent_id = None\n\n new_span = Span(\n request_id=self.req_id,\n operation=operation,\n parent=parent_id)\n self.active_spans.append(new_span)\n return new_span\n\n def stop_span(self):\n stopping_span = None\n try:\n stopping_span = self.active_spans.pop()\n except IndexError as e:\n logger.debug('Exception when stopping span: %s' % repr(e))\n\n if stopping_span is not None:\n stopping_span.stop()\n stopping_span.annotate()\n self.complete_spans.append(stopping_span)\n\n if len(self.active_spans) == 0:\n self.finish()\n\n def current_span(self):\n if len(self.active_spans) > 0:\n return self.active_spans[-1]\n else:\n return None\n\n # Request is done, release any info we have about it.\n def finish(self):\n logger.debug('Stopping request: %s', self.req_id)\n if self.end_time is None:\n self.end_time = datetime.utcnow()\n RequestManager.instance().add_request(self)\n if self.is_real_request():\n Samplers.ensure_running()\n\n # This can fail if the Tracked Request was created directly, not through instance()\n try:\n self.release()\n except:\n pass\n\n\nclass Span:\n def __init__(self, *args, **kwargs):\n self.span_id = kwargs.get('span_id', 'span-' + str(uuid4()))\n self.start_time = kwargs.get('start_time', datetime.utcnow())\n self.end_time = kwargs.get('end_time', None)\n self.request_id = kwargs.get('request_id', None)\n self.operation = kwargs.get('operation', None)\n self.parent = kwargs.get('parent', None)\n self.tags = kwargs.get('tags', {})\n\n def stop(self):\n self.end_time = datetime.utcnow()\n\n def tag(self, key, value):\n if key in self.tags:\n logger.debug('Overwriting previously set tag for span %s: %s' % self.span_id, key)\n self.tags[key] = value\n\n # In seconds\n def duration(self):\n if self.end_time is not None:\n return (self.end_time - self.start_time).total_seconds()\n else:\n # Current, running duration\n return (datetime.utcnow() - self.start_time).total_seconds()\n\n def duration_in_ms(self):\n return self.duration() / 1000\n\n # Add any interesting annotations to the span. 
Assumes that we are in the\n # process of stopping this span.\n def annotate(self):\n slow_threshold = 0.500\n if self.duration() > slow_threshold:\n self.capture_backtrace()\n\n def capture_backtrace(self):\n stack = scout_apm.core.backtrace.capture()\n self.tag('stack', stack)\n", "path": "src/scout_apm/core/tracked_request.py"}], "after_files": [{"content": "from scout_apm.core.tracked_request import TrackedRequest\n\nfrom sqlalchemy import event\n\ndef instrument_sqlalchemy(engine):\n def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tr = TrackedRequest.instance()\n span = tr.start_span(operation='SQL/Query')\n span.tag('db.statement', statement)\n\n def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):\n tr = TrackedRequest.instance()\n span = tr.current_span()\n if span is not None:\n tr.callset.update(statement, 1, span.duration())\n if tr.callset.should_capture_backtrace(statement) is True:\n span.capture_backtrace()\n tr.stop_span()\n\n if getattr(engine, \"_scout_instrumented\", False) != True:\n event.listen(engine, 'before_cursor_execute', before_cursor_execute)\n event.listen(engine, 'after_cursor_execute', after_cursor_execute)\n setattr(engine, \"_scout_instrumented\", True)\n", "path": "src/scout_apm/sqlalchemy/__init__.py"}, {"content": "from glob import glob\nfrom os.path import basename, splitext\n\nfrom setuptools import find_packages, setup\n\nsetup(name='scout_apm',\n version='1.1.9',\n description='Scout Application Performance Monitoring Agent',\n long_description='Scout Application Performance Monitoring Agent',\n url='https://github.com/scoutapp/scout_apm_python',\n author='Scout',\n author_email='[email protected]',\n license='MIT',\n zip_safe=False,\n python_requires='>=3.4, <4',\n packages=find_packages('src'),\n package_dir={'': 'src'},\n py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],\n entry_points={\n 'console_scripts': [\n 'core-agent-manager = scout_apm.core.cli.core_agent_manager:main'\n ]\n },\n install_requires=['psutil', 'PyYAML', 'requests'],\n keywords='apm performance monitoring development',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'Topic :: System :: Monitoring',\n 'License :: Other/Proprietary License',\n 'Operating System :: MacOS',\n 'Operating System :: POSIX',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ])\n", "path": "setup.py"}, {"content": "from __future__ import absolute_import\n\nimport logging\nfrom datetime import datetime\nfrom uuid import uuid4\n\nfrom scout_apm.core.samplers import Samplers\nfrom scout_apm.core.request_manager import RequestManager\nfrom scout_apm.core.thread_local import ThreadLocalSingleton\nfrom scout_apm.core.n_plus_one_call_set import NPlusOneCallSet\nimport scout_apm.core.backtrace\n\n# Logging\nlogger = logging.getLogger(__name__)\n\n\nclass TrackedRequest(ThreadLocalSingleton):\n \"\"\"\n This is a container which keeps track of all module instances for a single\n request. 
For convenience they are made available as attributes based on\n their keyname\n \"\"\"\n def __init__(self, *args, **kwargs):\n self.req_id = 'req-' + str(uuid4())\n self.start_time = kwargs.get('start_time', datetime.utcnow())\n self.end_time = kwargs.get('end_time', None)\n self.active_spans = kwargs.get('active_spans', [])\n self.complete_spans = kwargs.get('complete_spans', [])\n self.tags = kwargs.get('tags', {})\n self.real_request = kwargs.get('real_request', False)\n self.callset = NPlusOneCallSet()\n logger.debug('Starting request: %s', self.req_id)\n\n def mark_real_request(self):\n self.real_request = True\n\n def is_real_request(self):\n return self.real_request\n\n def tag(self, key, value):\n if key in self.tags:\n logger.debug('Overwriting previously set tag for request %s: %s' % (self.req_id, key))\n self.tags[key] = value\n\n def start_span(self, operation=None):\n maybe_parent = self.current_span()\n\n if maybe_parent is not None:\n parent_id = maybe_parent.span_id\n else:\n parent_id = None\n\n new_span = Span(\n request_id=self.req_id,\n operation=operation,\n parent=parent_id)\n self.active_spans.append(new_span)\n return new_span\n\n def stop_span(self):\n stopping_span = None\n try:\n stopping_span = self.active_spans.pop()\n except IndexError as e:\n logger.debug('Exception when stopping span: %s' % repr(e))\n\n if stopping_span is not None:\n stopping_span.stop()\n stopping_span.annotate()\n self.complete_spans.append(stopping_span)\n\n if len(self.active_spans) == 0:\n self.finish()\n\n def current_span(self):\n if len(self.active_spans) > 0:\n return self.active_spans[-1]\n else:\n return None\n\n # Request is done, release any info we have about it.\n def finish(self):\n logger.debug('Stopping request: %s', self.req_id)\n if self.end_time is None:\n self.end_time = datetime.utcnow()\n RequestManager.instance().add_request(self)\n if self.is_real_request():\n Samplers.ensure_running()\n\n # This can fail if the Tracked Request was created directly, not through instance()\n try:\n self.release()\n except:\n pass\n\n\nclass Span:\n def __init__(self, *args, **kwargs):\n self.span_id = kwargs.get('span_id', 'span-' + str(uuid4()))\n self.start_time = kwargs.get('start_time', datetime.utcnow())\n self.end_time = kwargs.get('end_time', None)\n self.request_id = kwargs.get('request_id', None)\n self.operation = kwargs.get('operation', None)\n self.parent = kwargs.get('parent', None)\n self.tags = kwargs.get('tags', {})\n\n def stop(self):\n self.end_time = datetime.utcnow()\n\n def tag(self, key, value):\n if key in self.tags:\n logger.debug('Overwriting previously set tag for span %s: %s' % (self.span_id, key))\n self.tags[key] = value\n\n # In seconds\n def duration(self):\n if self.end_time is not None:\n return (self.end_time - self.start_time).total_seconds()\n else:\n # Current, running duration\n return (datetime.utcnow() - self.start_time).total_seconds()\n\n def duration_in_ms(self):\n return self.duration() / 1000\n\n # Add any interesting annotations to the span. Assumes that we are in the\n # process of stopping this span.\n def annotate(self):\n slow_threshold = 0.500\n if self.duration() > slow_threshold:\n self.capture_backtrace()\n\n def capture_backtrace(self):\n stack = scout_apm.core.backtrace.capture()\n self.tag('stack', stack)\n", "path": "src/scout_apm/core/tracked_request.py"}]} | 2,927 | 692 |
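Note on the scout_apm row above: the patch bundles three separate fixes — the call to the misspelled `should_capture_bracktrace` is corrected to the method that actually exists (`should_capture_backtrace`), the two `logger.debug` calls that applied `%` to only their first argument are given proper tuples, and the packaging metadata drops Python 2 support. The snippet below only illustrates the string-formatting pitfall behind the second fix; the variable values are placeholders.

```python
req_id, key = "req-1234", "path"

# Broken form (as in the original code): '%' binds tighter than the comma, so the
# format string receives a single argument and Python raises
# "TypeError: not enough arguments for format string".
# msg = 'Overwriting previously set tag for request %s: %s' % req_id, key

# Fixed form (as in the patch): pass both values as a tuple.
msg = 'Overwriting previously set tag for request %s: %s' % (req_id, key)
print(msg)
```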
gh_patches_debug_22726 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1582 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
README.md -> index.md should be optional
This is a really useful feature, but it should be optional.
I have a `README.md` file in my repository exclusively to render it on GitHub, indicating that the documentation uses a format not compatible with GitHub and that it should be viewed on the generated site.
See: https://github.com/Galarzaa90/NabBot/tree/master/docs
Now, I also have an `index.md` file in the `docs` folder, which is the actual homepage of my site, and it is now getting overridden by `README.md`, so there's no way to see the homepage anymore.
See:
- https://galarzaa90.github.io/NabBot/ - Generated Site
- https://github.com/Galarzaa90/NabBot/blob/master/docs/index.md - My index.md file
And this is my `pages`/`nav` section:
```yml
nav:
- Home: index.md
- Changelog: changelog.md
- Install Guide: install.md
- Permissions: permissions.md
- FAQ: faq.md
- Features:
- Overview: features/index.md
- Autoroles: features/autoroles.md
- Groups: features/groups.md
- Commands:
- Overview: commands/index.md
- Admin commands: commands/admin.md
- General commands: commands/general.md
- Info commands: commands/info.md
- Loot commands: commands/loot.md
- Mod commands: commands/mod.md
- Owner commands: commands/owner.md
- Roles commands: commands/roles.md
- Settings commands: commands/settings.md
- Tibia commands: commands/tibia.md
- TibiaWiki commands: commands/tibiawiki.md
- Tracking commands: commands/tracking.md
- Hosting:
- Configuration: hosting/config.md
- Messages: hosting/messages.md
- Cogs: hosting/cogs.md
```
Full file: https://github.com/Galarzaa90/NabBot/blob/master/mkdocs.yml
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/structure/files.py`
Content:
```
1 # coding: utf-8
2
3 from __future__ import unicode_literals
4 import fnmatch
5 import os
6 import logging
7 from functools import cmp_to_key
8
9 from mkdocs import utils
10
11
12 log = logging.getLogger(__name__)
13
14
15 class Files(object):
16 """ A collection of File objects. """
17 def __init__(self, files):
18 self._files = files
19 self.src_paths = {file.src_path: file for file in files}
20
21 def __iter__(self):
22 return iter(self._files)
23
24 def __len__(self):
25 return len(self._files)
26
27 def __contains__(self, path):
28 return path in self.src_paths
29
30 def get_file_from_path(self, path):
31 """ Return a File instance with File.src_path equal to path. """
32 return self.src_paths.get(os.path.normpath(path))
33
34 def append(self, file):
35 """ Append file to Files collection. """
36 self._files.append(file)
37 self.src_paths[file.src_path] = file
38
39 def copy_static_files(self, dirty=False):
40 """ Copy static files from source to destination. """
41 for file in self:
42 if not file.is_documentation_page():
43 file.copy_file(dirty)
44
45 def documentation_pages(self):
46 """ Return iterable of all Markdown page file objects. """
47 return [file for file in self if file.is_documentation_page()]
48
49 def static_pages(self):
50 """ Return iterable of all static page file objects. """
51 return [file for file in self if file.is_static_page()]
52
53 def media_files(self):
54 """ Return iterable of all file objects which are not documentation or static pages. """
55 return [file for file in self if file.is_media_file()]
56
57 def javascript_files(self):
58 """ Return iterable of all javascript file objects. """
59 return [file for file in self if file.is_javascript()]
60
61 def css_files(self):
62 """ Return iterable of all CSS file objects. """
63 return [file for file in self if file.is_css()]
64
65 def add_files_from_theme(self, env, config):
66 """ Retrieve static files from Jinja environment and add to collection. """
67 def filter(name):
68 patterns = ['.*', '*.py', '*.pyc', '*.html', 'mkdocs_theme.yml']
69 patterns.extend(config['theme'].static_templates)
70 for pattern in patterns:
71 if fnmatch.fnmatch(name, pattern):
72 return False
73 return True
74 for path in env.list_templates(filter_func=filter):
75 for dir in config['theme'].dirs:
76 # Find the first theme dir which contains path
77 if os.path.isfile(os.path.join(dir, path)):
78 self.append(File(path, dir, config['site_dir'], config['use_directory_urls']))
79 break
80
81
82 class File(object):
83 """
84 A MkDocs File object.
85
86 Points to the source and destination locations of a file.
87
88 The `path` argument must be a path that exists relative to `src_dir`.
89
90 The `src_dir` and `dest_dir` must be absolute paths on the local file system.
91
92 The `use_directory_urls` argument controls how destination paths are generated. If `False`, a Markdown file is
93 mapped to an HTML file of the same name (the file extension is changed to `.html`). If True, a Markdown file is
94 mapped to an HTML index file (`index.html`) nested in a directory using the "name" of the file in `path`. The
95 `use_directory_urls` argument has no effect on non-Markdown files.
96
97 File objects have the following properties, which are Unicode strings:
98
99 File.src_path
100 The pure path of the source file relative to the source directory.
101
102 File.abs_src_path
103 The absolute concrete path of the source file.
104
105 File.dest_path
106 The pure path of the destination file relative to the destination directory.
107
108 File.abs_dest_path
109 The absolute concrete path of the destination file.
110
111 File.url
112 The url of the destination file relative to the destination directory as a string.
113 """
114 def __init__(self, path, src_dir, dest_dir, use_directory_urls):
115 self.page = None
116 self.src_path = os.path.normpath(path)
117 self.abs_src_path = os.path.normpath(os.path.join(src_dir, self.src_path))
118 self.name = self._get_stem()
119 self.dest_path = self._get_dest_path(use_directory_urls)
120 self.abs_dest_path = os.path.normpath(os.path.join(dest_dir, self.dest_path))
121 self.url = self._get_url(use_directory_urls)
122
123 def __eq__(self, other):
124
125 def sub_dict(d):
126 return dict((key, value) for key, value in d.items() if key in ['src_path', 'abs_src_path', 'url'])
127
128 return (isinstance(other, self.__class__) and sub_dict(self.__dict__) == sub_dict(other.__dict__))
129
130 def __ne__(self, other):
131 return not self.__eq__(other)
132
133 def _get_stem(self):
134 """ Return the name of the file without it's extension. """
135 filename = os.path.basename(self.src_path)
136 stem, ext = os.path.splitext(filename)
137 return 'index' if stem in ('index', 'README') else stem
138
139 def _get_dest_path(self, use_directory_urls):
140 """ Return destination path based on source path. """
141 if self.is_documentation_page():
142 if use_directory_urls:
143 parent, filename = os.path.split(self.src_path)
144 if self.name == 'index':
145 # index.md or README.md => index.html
146 return os.path.join(parent, 'index.html')
147 else:
148 # foo.md => foo/index.html
149 return os.path.join(parent, self.name, 'index.html')
150 else:
151 # foo.md => foo.html
152 root, ext = os.path.splitext(self.src_path)
153 return root + '.html'
154 return self.src_path
155
156 def _get_url(self, use_directory_urls):
157 """ Return url based in destination path. """
158 url = self.dest_path.replace(os.path.sep, '/')
159 dirname, filename = os.path.split(url)
160 if use_directory_urls and filename == 'index.html':
161 if dirname == '':
162 url = '.'
163 else:
164 url = dirname + '/'
165 return url
166
167 def url_relative_to(self, other):
168 """ Return url for file relative to other file. """
169 return utils.get_relative_url(self.url, other.url if isinstance(other, File) else other)
170
171 def copy_file(self, dirty=False):
172 """ Copy source file to destination, ensuring parent directories exist. """
173 if dirty and not self.is_modified():
174 log.debug("Skip copying unmodified file: '{}'".format(self.src_path))
175 else:
176 log.debug("Copying media file: '{}'".format(self.src_path))
177 utils.copy_file(self.abs_src_path, self.abs_dest_path)
178
179 def is_modified(self):
180 if os.path.isfile(self.abs_dest_path):
181 return os.path.getmtime(self.abs_dest_path) < os.path.getmtime(self.abs_src_path)
182 return True
183
184 def is_documentation_page(self):
185 """ Return True if file is a Markdown page. """
186 return os.path.splitext(self.src_path)[1] in utils.markdown_extensions
187
188 def is_static_page(self):
189 """ Return True if file is a static page (html, xml, json). """
190 return os.path.splitext(self.src_path)[1] in (
191 '.html',
192 '.htm',
193 '.xml',
194 '.json',
195 )
196
197 def is_media_file(self):
198 """ Return True if file is not a documentation or static page. """
199 return not (self.is_documentation_page() or self.is_static_page())
200
201 def is_javascript(self):
202 """ Return True if file is a JavaScript file. """
203 return os.path.splitext(self.src_path)[1] in (
204 '.js',
205 '.javascript',
206 )
207
208 def is_css(self):
209 """ Return True if file is a CSS file. """
210 return os.path.splitext(self.src_path)[1] in (
211 '.css',
212 )
213
214
215 def get_files(config):
216 """ Walk the `docs_dir` and return a Files collection. """
217 files = []
218 exclude = ['.*', '/templates']
219
220 for source_dir, dirnames, filenames in os.walk(config['docs_dir'], followlinks=True):
221 relative_dir = os.path.relpath(source_dir, config['docs_dir'])
222
223 for dirname in list(dirnames):
224 path = os.path.normpath(os.path.join(relative_dir, dirname))
225 # Skip any excluded directories
226 if _filter_paths(basename=dirname, path=path, is_dir=True, exclude=exclude):
227 dirnames.remove(dirname)
228 dirnames.sort()
229
230 for filename in _sort_files(filenames):
231 path = os.path.normpath(os.path.join(relative_dir, filename))
232 # Skip any excluded files
233 if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):
234 continue
235 files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))
236
237 return Files(files)
238
239
240 def _sort_files(filenames):
241 """ Always sort `index` as first filename in list. """
242
243 def compare(x, y):
244 if x == y:
245 return 0
246 if os.path.splitext(y)[0] == 'index':
247 return 1
248 if os.path.splitext(x)[0] == 'index' or x < y:
249 return -1
250 return 1
251
252 return sorted(filenames, key=cmp_to_key(compare))
253
254
255 def _filter_paths(basename, path, is_dir, exclude):
256 """ .gitignore style file filtering. """
257 for item in exclude:
258 # Items ending in '/' apply only to directories.
259 if item.endswith('/') and not is_dir:
260 continue
261 # Items starting with '/' apply to the whole path.
262 # In any other cases just the basename is used.
263 match = path if item.startswith('/') else basename
264 if fnmatch.fnmatch(match, item.strip('/')):
265 return True
266 return False
267
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mkdocs/structure/files.py b/mkdocs/structure/files.py
--- a/mkdocs/structure/files.py
+++ b/mkdocs/structure/files.py
@@ -232,20 +232,23 @@
# Skip any excluded files
if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):
continue
+ # Skip README.md is an index file also exists in dir
+ if filename.lower() == 'readme.md' and 'index.md' in filenames:
+ continue
files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))
return Files(files)
def _sort_files(filenames):
- """ Always sort `index` as first filename in list. """
+ """ Always sort `index` or `README` as first filename in list. """
def compare(x, y):
if x == y:
return 0
- if os.path.splitext(y)[0] == 'index':
+ if os.path.splitext(y)[0] in ['index', 'README']:
return 1
- if os.path.splitext(x)[0] == 'index' or x < y:
+ if os.path.splitext(x)[0] in ['index', 'README'] or x < y:
return -1
return 1
| {"golden_diff": "diff --git a/mkdocs/structure/files.py b/mkdocs/structure/files.py\n--- a/mkdocs/structure/files.py\n+++ b/mkdocs/structure/files.py\n@@ -232,20 +232,23 @@\n # Skip any excluded files\n if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):\n continue\n+ # Skip README.md is an index file also exists in dir\n+ if filename.lower() == 'readme.md' and 'index.md' in filenames:\n+ continue\n files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))\n \n return Files(files)\n \n \n def _sort_files(filenames):\n- \"\"\" Always sort `index` as first filename in list. \"\"\"\n+ \"\"\" Always sort `index` or `README` as first filename in list. \"\"\"\n \n def compare(x, y):\n if x == y:\n return 0\n- if os.path.splitext(y)[0] == 'index':\n+ if os.path.splitext(y)[0] in ['index', 'README']:\n return 1\n- if os.path.splitext(x)[0] == 'index' or x < y:\n+ if os.path.splitext(x)[0] in ['index', 'README'] or x < y:\n return -1\n return 1\n", "issue": "README.md -> index.md should be optional\nThis is a really useful feature, but it should be optional.\r\n\r\nI have a `README.md` file in my repository exclusively to render it on GitHub, indicating that the documentation uses format not compatible with GitHub, and that it should be viewed on the generated site.\r\n\r\nSee: https://github.com/Galarzaa90/NabBot/tree/master/docs\r\n\r\nNow, I also have an `index.md` file in the `docs` folder, and that is the actual homepage of my site, and now it is getting overriden by `README.md` and there's no way to see the homepage anymore.\r\n\r\nSee:\r\n- https://galarzaa90.github.io/NabBot/ - Generated Site\r\n- https://github.com/Galarzaa90/NabBot/blob/master/docs/index.md - My index.md file\r\n\r\nAnd this is my `pages`/`nav` section:\r\n```yml\r\nnav:\r\n - Home: index.md\r\n - Changelog: changelog.md\r\n - Install Guide: install.md\r\n - Permissions: permissions.md\r\n - FAQ: faq.md\r\n - Features:\r\n - Overview: features/index.md\r\n - Autoroles: features/autoroles.md\r\n - Groups: features/groups.md\r\n - Commands:\r\n - Overview: commands/index.md\r\n - Admin commands: commands/admin.md\r\n - General commands: commands/general.md\r\n - Info commands: commands/info.md\r\n - Loot commands: commands/loot.md\r\n - Mod commands: commands/mod.md\r\n - Owner commands: commands/owner.md\r\n - Roles commands: commands/roles.md\r\n - Settings commands: commands/settings.md\r\n - Tibia commands: commands/tibia.md\r\n - TibiaWiki commands: commands/tibiawiki.md\r\n - Tracking commands: commands/tracking.md\r\n - Hosting:\r\n - Configuration: hosting/config.md\r\n - Messages: hosting/messages.md\r\n - Cogs: hosting/cogs.md\r\n```\r\n\r\nFull file: https://github.com/Galarzaa90/NabBot/blob/master/mkdocs.yml\n", "before_files": [{"content": "# coding: utf-8\n\nfrom __future__ import unicode_literals\nimport fnmatch\nimport os\nimport logging\nfrom functools import cmp_to_key\n\nfrom mkdocs import utils\n\n\nlog = logging.getLogger(__name__)\n\n\nclass Files(object):\n \"\"\" A collection of File objects. \"\"\"\n def __init__(self, files):\n self._files = files\n self.src_paths = {file.src_path: file for file in files}\n\n def __iter__(self):\n return iter(self._files)\n\n def __len__(self):\n return len(self._files)\n\n def __contains__(self, path):\n return path in self.src_paths\n\n def get_file_from_path(self, path):\n \"\"\" Return a File instance with File.src_path equal to path. 
\"\"\"\n return self.src_paths.get(os.path.normpath(path))\n\n def append(self, file):\n \"\"\" Append file to Files collection. \"\"\"\n self._files.append(file)\n self.src_paths[file.src_path] = file\n\n def copy_static_files(self, dirty=False):\n \"\"\" Copy static files from source to destination. \"\"\"\n for file in self:\n if not file.is_documentation_page():\n file.copy_file(dirty)\n\n def documentation_pages(self):\n \"\"\" Return iterable of all Markdown page file objects. \"\"\"\n return [file for file in self if file.is_documentation_page()]\n\n def static_pages(self):\n \"\"\" Return iterable of all static page file objects. \"\"\"\n return [file for file in self if file.is_static_page()]\n\n def media_files(self):\n \"\"\" Return iterable of all file objects which are not documentation or static pages. \"\"\"\n return [file for file in self if file.is_media_file()]\n\n def javascript_files(self):\n \"\"\" Return iterable of all javascript file objects. \"\"\"\n return [file for file in self if file.is_javascript()]\n\n def css_files(self):\n \"\"\" Return iterable of all CSS file objects. \"\"\"\n return [file for file in self if file.is_css()]\n\n def add_files_from_theme(self, env, config):\n \"\"\" Retrieve static files from Jinja environment and add to collection. \"\"\"\n def filter(name):\n patterns = ['.*', '*.py', '*.pyc', '*.html', 'mkdocs_theme.yml']\n patterns.extend(config['theme'].static_templates)\n for pattern in patterns:\n if fnmatch.fnmatch(name, pattern):\n return False\n return True\n for path in env.list_templates(filter_func=filter):\n for dir in config['theme'].dirs:\n # Find the first theme dir which contains path\n if os.path.isfile(os.path.join(dir, path)):\n self.append(File(path, dir, config['site_dir'], config['use_directory_urls']))\n break\n\n\nclass File(object):\n \"\"\"\n A MkDocs File object.\n\n Points to the source and destination locations of a file.\n\n The `path` argument must be a path that exists relative to `src_dir`.\n\n The `src_dir` and `dest_dir` must be absolute paths on the local file system.\n\n The `use_directory_urls` argument controls how destination paths are generated. If `False`, a Markdown file is\n mapped to an HTML file of the same name (the file extension is changed to `.html`). If True, a Markdown file is\n mapped to an HTML index file (`index.html`) nested in a directory using the \"name\" of the file in `path`. 
The\n `use_directory_urls` argument has no effect on non-Markdown files.\n\n File objects have the following properties, which are Unicode strings:\n\n File.src_path\n The pure path of the source file relative to the source directory.\n\n File.abs_src_path\n The absolute concrete path of the source file.\n\n File.dest_path\n The pure path of the destination file relative to the destination directory.\n\n File.abs_dest_path\n The absolute concrete path of the destination file.\n\n File.url\n The url of the destination file relative to the destination directory as a string.\n \"\"\"\n def __init__(self, path, src_dir, dest_dir, use_directory_urls):\n self.page = None\n self.src_path = os.path.normpath(path)\n self.abs_src_path = os.path.normpath(os.path.join(src_dir, self.src_path))\n self.name = self._get_stem()\n self.dest_path = self._get_dest_path(use_directory_urls)\n self.abs_dest_path = os.path.normpath(os.path.join(dest_dir, self.dest_path))\n self.url = self._get_url(use_directory_urls)\n\n def __eq__(self, other):\n\n def sub_dict(d):\n return dict((key, value) for key, value in d.items() if key in ['src_path', 'abs_src_path', 'url'])\n\n return (isinstance(other, self.__class__) and sub_dict(self.__dict__) == sub_dict(other.__dict__))\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def _get_stem(self):\n \"\"\" Return the name of the file without it's extension. \"\"\"\n filename = os.path.basename(self.src_path)\n stem, ext = os.path.splitext(filename)\n return 'index' if stem in ('index', 'README') else stem\n\n def _get_dest_path(self, use_directory_urls):\n \"\"\" Return destination path based on source path. \"\"\"\n if self.is_documentation_page():\n if use_directory_urls:\n parent, filename = os.path.split(self.src_path)\n if self.name == 'index':\n # index.md or README.md => index.html\n return os.path.join(parent, 'index.html')\n else:\n # foo.md => foo/index.html\n return os.path.join(parent, self.name, 'index.html')\n else:\n # foo.md => foo.html\n root, ext = os.path.splitext(self.src_path)\n return root + '.html'\n return self.src_path\n\n def _get_url(self, use_directory_urls):\n \"\"\" Return url based in destination path. \"\"\"\n url = self.dest_path.replace(os.path.sep, '/')\n dirname, filename = os.path.split(url)\n if use_directory_urls and filename == 'index.html':\n if dirname == '':\n url = '.'\n else:\n url = dirname + '/'\n return url\n\n def url_relative_to(self, other):\n \"\"\" Return url for file relative to other file. \"\"\"\n return utils.get_relative_url(self.url, other.url if isinstance(other, File) else other)\n\n def copy_file(self, dirty=False):\n \"\"\" Copy source file to destination, ensuring parent directories exist. \"\"\"\n if dirty and not self.is_modified():\n log.debug(\"Skip copying unmodified file: '{}'\".format(self.src_path))\n else:\n log.debug(\"Copying media file: '{}'\".format(self.src_path))\n utils.copy_file(self.abs_src_path, self.abs_dest_path)\n\n def is_modified(self):\n if os.path.isfile(self.abs_dest_path):\n return os.path.getmtime(self.abs_dest_path) < os.path.getmtime(self.abs_src_path)\n return True\n\n def is_documentation_page(self):\n \"\"\" Return True if file is a Markdown page. \"\"\"\n return os.path.splitext(self.src_path)[1] in utils.markdown_extensions\n\n def is_static_page(self):\n \"\"\" Return True if file is a static page (html, xml, json). 
\"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.html',\n '.htm',\n '.xml',\n '.json',\n )\n\n def is_media_file(self):\n \"\"\" Return True if file is not a documentation or static page. \"\"\"\n return not (self.is_documentation_page() or self.is_static_page())\n\n def is_javascript(self):\n \"\"\" Return True if file is a JavaScript file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.js',\n '.javascript',\n )\n\n def is_css(self):\n \"\"\" Return True if file is a CSS file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.css',\n )\n\n\ndef get_files(config):\n \"\"\" Walk the `docs_dir` and return a Files collection. \"\"\"\n files = []\n exclude = ['.*', '/templates']\n\n for source_dir, dirnames, filenames in os.walk(config['docs_dir'], followlinks=True):\n relative_dir = os.path.relpath(source_dir, config['docs_dir'])\n\n for dirname in list(dirnames):\n path = os.path.normpath(os.path.join(relative_dir, dirname))\n # Skip any excluded directories\n if _filter_paths(basename=dirname, path=path, is_dir=True, exclude=exclude):\n dirnames.remove(dirname)\n dirnames.sort()\n\n for filename in _sort_files(filenames):\n path = os.path.normpath(os.path.join(relative_dir, filename))\n # Skip any excluded files\n if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):\n continue\n files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))\n\n return Files(files)\n\n\ndef _sort_files(filenames):\n \"\"\" Always sort `index` as first filename in list. \"\"\"\n\n def compare(x, y):\n if x == y:\n return 0\n if os.path.splitext(y)[0] == 'index':\n return 1\n if os.path.splitext(x)[0] == 'index' or x < y:\n return -1\n return 1\n\n return sorted(filenames, key=cmp_to_key(compare))\n\n\ndef _filter_paths(basename, path, is_dir, exclude):\n \"\"\" .gitignore style file filtering. \"\"\"\n for item in exclude:\n # Items ending in '/' apply only to directories.\n if item.endswith('/') and not is_dir:\n continue\n # Items starting with '/' apply to the whole path.\n # In any other cases just the basename is used.\n match = path if item.startswith('/') else basename\n if fnmatch.fnmatch(match, item.strip('/')):\n return True\n return False\n", "path": "mkdocs/structure/files.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom __future__ import unicode_literals\nimport fnmatch\nimport os\nimport logging\nfrom functools import cmp_to_key\n\nfrom mkdocs import utils\n\n\nlog = logging.getLogger(__name__)\n\n\nclass Files(object):\n \"\"\" A collection of File objects. \"\"\"\n def __init__(self, files):\n self._files = files\n self.src_paths = {file.src_path: file for file in files}\n\n def __iter__(self):\n return iter(self._files)\n\n def __len__(self):\n return len(self._files)\n\n def __contains__(self, path):\n return path in self.src_paths\n\n def get_file_from_path(self, path):\n \"\"\" Return a File instance with File.src_path equal to path. \"\"\"\n return self.src_paths.get(os.path.normpath(path))\n\n def append(self, file):\n \"\"\" Append file to Files collection. \"\"\"\n self._files.append(file)\n self.src_paths[file.src_path] = file\n\n def copy_static_files(self, dirty=False):\n \"\"\" Copy static files from source to destination. \"\"\"\n for file in self:\n if not file.is_documentation_page():\n file.copy_file(dirty)\n\n def documentation_pages(self):\n \"\"\" Return iterable of all Markdown page file objects. 
\"\"\"\n return [file for file in self if file.is_documentation_page()]\n\n def static_pages(self):\n \"\"\" Return iterable of all static page file objects. \"\"\"\n return [file for file in self if file.is_static_page()]\n\n def media_files(self):\n \"\"\" Return iterable of all file objects which are not documentation or static pages. \"\"\"\n return [file for file in self if file.is_media_file()]\n\n def javascript_files(self):\n \"\"\" Return iterable of all javascript file objects. \"\"\"\n return [file for file in self if file.is_javascript()]\n\n def css_files(self):\n \"\"\" Return iterable of all CSS file objects. \"\"\"\n return [file for file in self if file.is_css()]\n\n def add_files_from_theme(self, env, config):\n \"\"\" Retrieve static files from Jinja environment and add to collection. \"\"\"\n def filter(name):\n patterns = ['.*', '*.py', '*.pyc', '*.html', 'mkdocs_theme.yml']\n patterns.extend(config['theme'].static_templates)\n for pattern in patterns:\n if fnmatch.fnmatch(name, pattern):\n return False\n return True\n for path in env.list_templates(filter_func=filter):\n for dir in config['theme'].dirs:\n # Find the first theme dir which contains path\n if os.path.isfile(os.path.join(dir, path)):\n self.append(File(path, dir, config['site_dir'], config['use_directory_urls']))\n break\n\n\nclass File(object):\n \"\"\"\n A MkDocs File object.\n\n Points to the source and destination locations of a file.\n\n The `path` argument must be a path that exists relative to `src_dir`.\n\n The `src_dir` and `dest_dir` must be absolute paths on the local file system.\n\n The `use_directory_urls` argument controls how destination paths are generated. If `False`, a Markdown file is\n mapped to an HTML file of the same name (the file extension is changed to `.html`). If True, a Markdown file is\n mapped to an HTML index file (`index.html`) nested in a directory using the \"name\" of the file in `path`. The\n `use_directory_urls` argument has no effect on non-Markdown files.\n\n File objects have the following properties, which are Unicode strings:\n\n File.src_path\n The pure path of the source file relative to the source directory.\n\n File.abs_src_path\n The absolute concrete path of the source file.\n\n File.dest_path\n The pure path of the destination file relative to the destination directory.\n\n File.abs_dest_path\n The absolute concrete path of the destination file.\n\n File.url\n The url of the destination file relative to the destination directory as a string.\n \"\"\"\n def __init__(self, path, src_dir, dest_dir, use_directory_urls):\n self.page = None\n self.src_path = os.path.normpath(path)\n self.abs_src_path = os.path.normpath(os.path.join(src_dir, self.src_path))\n self.name = self._get_stem()\n self.dest_path = self._get_dest_path(use_directory_urls)\n self.abs_dest_path = os.path.normpath(os.path.join(dest_dir, self.dest_path))\n self.url = self._get_url(use_directory_urls)\n\n def __eq__(self, other):\n\n def sub_dict(d):\n return dict((key, value) for key, value in d.items() if key in ['src_path', 'abs_src_path', 'url'])\n\n return (isinstance(other, self.__class__) and sub_dict(self.__dict__) == sub_dict(other.__dict__))\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def _get_stem(self):\n \"\"\" Return the name of the file without it's extension. 
\"\"\"\n filename = os.path.basename(self.src_path)\n stem, ext = os.path.splitext(filename)\n return 'index' if stem in ('index', 'README') else stem\n\n def _get_dest_path(self, use_directory_urls):\n \"\"\" Return destination path based on source path. \"\"\"\n if self.is_documentation_page():\n if use_directory_urls:\n parent, filename = os.path.split(self.src_path)\n if self.name == 'index':\n # index.md or README.md => index.html\n return os.path.join(parent, 'index.html')\n else:\n # foo.md => foo/index.html\n return os.path.join(parent, self.name, 'index.html')\n else:\n # foo.md => foo.html\n root, ext = os.path.splitext(self.src_path)\n return root + '.html'\n return self.src_path\n\n def _get_url(self, use_directory_urls):\n \"\"\" Return url based in destination path. \"\"\"\n url = self.dest_path.replace(os.path.sep, '/')\n dirname, filename = os.path.split(url)\n if use_directory_urls and filename == 'index.html':\n if dirname == '':\n url = '.'\n else:\n url = dirname + '/'\n return url\n\n def url_relative_to(self, other):\n \"\"\" Return url for file relative to other file. \"\"\"\n return utils.get_relative_url(self.url, other.url if isinstance(other, File) else other)\n\n def copy_file(self, dirty=False):\n \"\"\" Copy source file to destination, ensuring parent directories exist. \"\"\"\n if dirty and not self.is_modified():\n log.debug(\"Skip copying unmodified file: '{}'\".format(self.src_path))\n else:\n log.debug(\"Copying media file: '{}'\".format(self.src_path))\n utils.copy_file(self.abs_src_path, self.abs_dest_path)\n\n def is_modified(self):\n if os.path.isfile(self.abs_dest_path):\n return os.path.getmtime(self.abs_dest_path) < os.path.getmtime(self.abs_src_path)\n return True\n\n def is_documentation_page(self):\n \"\"\" Return True if file is a Markdown page. \"\"\"\n return os.path.splitext(self.src_path)[1] in utils.markdown_extensions\n\n def is_static_page(self):\n \"\"\" Return True if file is a static page (html, xml, json). \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.html',\n '.htm',\n '.xml',\n '.json',\n )\n\n def is_media_file(self):\n \"\"\" Return True if file is not a documentation or static page. \"\"\"\n return not (self.is_documentation_page() or self.is_static_page())\n\n def is_javascript(self):\n \"\"\" Return True if file is a JavaScript file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.js',\n '.javascript',\n )\n\n def is_css(self):\n \"\"\" Return True if file is a CSS file. \"\"\"\n return os.path.splitext(self.src_path)[1] in (\n '.css',\n )\n\n\ndef get_files(config):\n \"\"\" Walk the `docs_dir` and return a Files collection. 
\"\"\"\n files = []\n exclude = ['.*', '/templates']\n\n for source_dir, dirnames, filenames in os.walk(config['docs_dir'], followlinks=True):\n relative_dir = os.path.relpath(source_dir, config['docs_dir'])\n\n for dirname in list(dirnames):\n path = os.path.normpath(os.path.join(relative_dir, dirname))\n # Skip any excluded directories\n if _filter_paths(basename=dirname, path=path, is_dir=True, exclude=exclude):\n dirnames.remove(dirname)\n dirnames.sort()\n\n for filename in _sort_files(filenames):\n path = os.path.normpath(os.path.join(relative_dir, filename))\n # Skip any excluded files\n if _filter_paths(basename=filename, path=path, is_dir=False, exclude=exclude):\n continue\n # Skip README.md is an index file also exists in dir\n if filename.lower() == 'readme.md' and 'index.md' in filenames:\n continue\n files.append(File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls']))\n\n return Files(files)\n\n\ndef _sort_files(filenames):\n \"\"\" Always sort `index` or `README` as first filename in list. \"\"\"\n\n def compare(x, y):\n if x == y:\n return 0\n if os.path.splitext(y)[0] in ['index', 'README']:\n return 1\n if os.path.splitext(x)[0] in ['index', 'README'] or x < y:\n return -1\n return 1\n\n return sorted(filenames, key=cmp_to_key(compare))\n\n\ndef _filter_paths(basename, path, is_dir, exclude):\n \"\"\" .gitignore style file filtering. \"\"\"\n for item in exclude:\n # Items ending in '/' apply only to directories.\n if item.endswith('/') and not is_dir:\n continue\n # Items starting with '/' apply to the whole path.\n # In any other cases just the basename is used.\n match = path if item.startswith('/') else basename\n if fnmatch.fnmatch(match, item.strip('/')):\n return True\n return False\n", "path": "mkdocs/structure/files.py"}]} | 3,565 | 302 |
gh_patches_debug_18920 | rasdani/github-patches | git_diff | google__turbinia-1098 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set a default file size limit for PlasoTask hashers
Currently, all PlasoTask instances will attempt to hash files of any size, potentially very large ones. This could lead to unusually long processing times.
This is a small part of a larger effort to try to optimize how Turbinia configures Plaso tasks to better utilize inherent parallel processing capabilities.
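To make the requested behavior concrete, here is a minimal, hypothetical sketch of the kind of guard being asked for — skip hashing once a file exceeds a configurable byte limit. The 1 GiB default and the helper name are illustrative only, not Turbinia's actual code; Plaso's `log2timeline.py` also exposes a `--hasher_file_size_limit` option that the task configuration could pass through to get the same effect.
```
import hashlib
import os

# Hypothetical default: skip hashing anything larger than 1 GiB.
DEFAULT_HASHER_FILE_SIZE_LIMIT = 1024 * 1024 * 1024


def hash_file_if_small_enough(path, limit=DEFAULT_HASHER_FILE_SIZE_LIMIT):
    """Return the SHA-256 hex digest of path, or None if the file exceeds limit."""
    if os.path.getsize(path) > limit:
        return None
    digest = hashlib.sha256()
    with open(path, 'rb') as handle:
        # Read in 1 MiB chunks so large (but allowed) files are not loaded whole.
        for chunk in iter(lambda: handle.read(1024 * 1024), b''):
            digest.update(chunk)
    return digest.hexdigest()
```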
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/workers/plaso.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for running Plaso."""
16
17 from __future__ import unicode_literals
18
19 import os
20 import logging
21
22 from turbinia import config
23 from turbinia.evidence import EvidenceState as state
24 from turbinia.evidence import PlasoFile
25 from turbinia.workers import TurbiniaTask
26 from turbinia.lib import file_helpers
27
28
29 class PlasoTask(TurbiniaTask):
30 """Task to run Plaso (log2timeline)."""
31
32 # Plaso requires the Disk to be attached, but doesn't require it be mounted.
33 REQUIRED_STATES = [
34 state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED
35 ]
36
37 TASK_CONFIG = {
38 # 'none' as indicated in the options for status_view within
39 # the Plaso documentation
40 'status_view': 'none',
41 'hashers': 'all',
42 'partitions': 'all',
43 'vss_stores': 'none',
44 # artifact_filters and file_filter are mutually exclusive
45 # parameters and Plaso will error out if both parameters are used.
46 'artifact_filters': None,
47 'file_filter': None,
48 'custom_artifact_definitions': None,
49 'parsers': None,
50 'yara_rules': None
51 }
52
53 def build_plaso_command(self, base_command, conf):
54 """Builds a typical plaso command, contains logic specific to log2timeline.
55
56 Args:
57 base_command (str): Command to invoke log2timeline (e.g. log2timeline.py)
58 conf (dict): Dynamic config containing the parameters for the command.
59
60 Returns:
61 String for valid Log2timeline command.
62 """
63 self.result.log(
64 'Generating Plaso command line from arguments: {0!s}'.format(conf),
65 level=logging.DEBUG)
66 cmd = [base_command]
67 for k, v in conf.items():
68 cli_args = [
69 'status_view', 'hashers', 'partitions', 'vss_stores',
70 'custom_artifact_definitions', 'parsers', 'artifact_filters',
71 'file_filter', 'yara_rules'
72 ]
73 if (k not in cli_args or not v):
74 continue
75 prepend = '-'
76 if len(k) > 1:
77 prepend = '--'
78 if k == 'file_filter':
79 file_path = file_helpers.write_list_to_temp_file(
80 v, preferred_dir=self.tmp_dir)
81 cmd.extend(['-f', file_path])
82 elif k == 'yara_rules':
83 file_path = file_helpers.write_str_to_temp_file(
84 v, preferred_dir=self.tmp_dir)
85 cmd.extend(['--yara_rules', file_path])
86 elif isinstance(v, list):
87 cmd.extend([prepend + k, ','.join(v)])
88 elif isinstance(v, bool):
89 cmd.append(prepend + k)
90 elif isinstance(v, str):
91 cmd.extend([prepend + k, v])
92 return cmd
93
94 def run(self, evidence, result):
95 """Task that process data with Plaso.
96
97 Args:
98 evidence (Evidence object): The evidence we will process.
99 result (TurbiniaTaskResult): The object to place task results into.
100
101 Returns:
102 TurbiniaTaskResult object.
103 """
104
105 config.LoadConfig()
106
107 # Write plaso file into tmp_dir because sqlite has issues with some shared
108 # filesystems (e.g NFS).
109 plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))
110 plaso_evidence = PlasoFile(source_path=plaso_file)
111 plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))
112
113 cmd = self.build_plaso_command('log2timeline.py', self.task_config)
114
115 if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):
116 cmd.append('-d')
117
118 if evidence.credentials:
119 for credential_type, credential_data in evidence.credentials:
120 cmd.extend([
121 '--credential', '{0:s}:{1:s}'.format(
122 credential_type, credential_data)
123 ])
124
125 cmd.extend(['--temporary_directory', self.tmp_dir])
126 cmd.extend(['--logfile', plaso_log])
127 cmd.extend(['--unattended'])
128 cmd.extend(['--storage_file', plaso_file])
129 cmd.extend([evidence.local_path])
130
131 result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))
132 self.execute(
133 cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],
134 close=True)
135
136 return result
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py
--- a/turbinia/workers/plaso.py
+++ b/turbinia/workers/plaso.py
@@ -39,6 +39,7 @@
# the Plaso documentation
'status_view': 'none',
'hashers': 'all',
+ 'hasher_file_size_limit': '1073741824',
'partitions': 'all',
'vss_stores': 'none',
# artifact_filters and file_filter are mutually exclusive
@@ -66,9 +67,9 @@
cmd = [base_command]
for k, v in conf.items():
cli_args = [
- 'status_view', 'hashers', 'partitions', 'vss_stores',
- 'custom_artifact_definitions', 'parsers', 'artifact_filters',
- 'file_filter', 'yara_rules'
+ 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',
+ 'vss_stores', 'custom_artifact_definitions', 'parsers',
+ 'artifact_filters', 'file_filter', 'yara_rules'
]
if (k not in cli_args or not v):
continue
| {"golden_diff": "diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py\n--- a/turbinia/workers/plaso.py\n+++ b/turbinia/workers/plaso.py\n@@ -39,6 +39,7 @@\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n+ 'hasher_file_size_limit': '1073741824',\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n@@ -66,9 +67,9 @@\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n- 'status_view', 'hashers', 'partitions', 'vss_stores',\n- 'custom_artifact_definitions', 'parsers', 'artifact_filters',\n- 'file_filter', 'yara_rules'\n+ 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',\n+ 'vss_stores', 'custom_artifact_definitions', 'parsers',\n+ 'artifact_filters', 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n", "issue": "Set a default file size limit for PlasoTask hashers\nCurrently, all PlasoTask instances will attempt to hash files of any size, potentially very large ones .This could lead to unusually long processing times.\r\n\r\nThis is a small part of a larger effort to try to optimize how Turbinia configures Plaso tasks to better utilize inherent parallel processing capabilities.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport logging\n\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.lib import file_helpers\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require it be mounted.\n REQUIRED_STATES = [\n state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED\n ]\n\n TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n # parameters and Plaso will error out if both parameters are used.\n 'artifact_filters': None,\n 'file_filter': None,\n 'custom_artifact_definitions': None,\n 'parsers': None,\n 'yara_rules': None\n }\n\n def build_plaso_command(self, base_command, conf):\n \"\"\"Builds a typical plaso command, contains logic specific to log2timeline.\n\n Args:\n base_command (str): Command to invoke log2timeline (e.g. 
log2timeline.py)\n conf (dict): Dynamic config containing the parameters for the command.\n\n Returns:\n String for valid Log2timeline command.\n \"\"\"\n self.result.log(\n 'Generating Plaso command line from arguments: {0!s}'.format(conf),\n level=logging.DEBUG)\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n 'status_view', 'hashers', 'partitions', 'vss_stores',\n 'custom_artifact_definitions', 'parsers', 'artifact_filters',\n 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n prepend = '-'\n if len(k) > 1:\n prepend = '--'\n if k == 'file_filter':\n file_path = file_helpers.write_list_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['-f', file_path])\n elif k == 'yara_rules':\n file_path = file_helpers.write_str_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['--yara_rules', file_path])\n elif isinstance(v, list):\n cmd.extend([prepend + k, ','.join(v)])\n elif isinstance(v, bool):\n cmd.append(prepend + k)\n elif isinstance(v, str):\n cmd.extend([prepend + k, v])\n return cmd\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = self.build_plaso_command('log2timeline.py', self.task_config)\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n cmd.extend(['--storage_file', plaso_file])\n cmd.extend([evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport logging\n\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.lib import file_helpers\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require 
it be mounted.\n REQUIRED_STATES = [\n state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED\n ]\n\n TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n 'hasher_file_size_limit': '1073741824',\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n # parameters and Plaso will error out if both parameters are used.\n 'artifact_filters': None,\n 'file_filter': None,\n 'custom_artifact_definitions': None,\n 'parsers': None,\n 'yara_rules': None\n }\n\n def build_plaso_command(self, base_command, conf):\n \"\"\"Builds a typical plaso command, contains logic specific to log2timeline.\n\n Args:\n base_command (str): Command to invoke log2timeline (e.g. log2timeline.py)\n conf (dict): Dynamic config containing the parameters for the command.\n\n Returns:\n String for valid Log2timeline command.\n \"\"\"\n self.result.log(\n 'Generating Plaso command line from arguments: {0!s}'.format(conf),\n level=logging.DEBUG)\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',\n 'vss_stores', 'custom_artifact_definitions', 'parsers',\n 'artifact_filters', 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n prepend = '-'\n if len(k) > 1:\n prepend = '--'\n if k == 'file_filter':\n file_path = file_helpers.write_list_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['-f', file_path])\n elif k == 'yara_rules':\n file_path = file_helpers.write_str_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['--yara_rules', file_path])\n elif isinstance(v, list):\n cmd.extend([prepend + k, ','.join(v)])\n elif isinstance(v, bool):\n cmd.append(prepend + k)\n elif isinstance(v, str):\n cmd.extend([prepend + k, v])\n return cmd\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = self.build_plaso_command('log2timeline.py', self.task_config)\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n cmd.extend(['--storage_file', plaso_file])\n cmd.extend([evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}]} | 1,751 | 282 |
gh_patches_debug_27728 | rasdani/github-patches | git_diff | getmoto__moto-2398 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mock_s3 gives "ConnectionError: Connection refused..." error when Key used in put_object call contains leading forward slash
This is a very specific situation; however, the behavior of moto here doesn't match what happens when you use the actual boto3 library connecting to AWS.
The situation arises when the Key of an S3 object that you are putting into an S3 bucket has a leading forward slash ("/").
So if your Key is "/abc" and you try putting an object into an S3 bucket with that key as follows:
```
def do_something():
my_bucket = boto3.resource('s3', region_name='us-west-2').Bucket('my_bucket')
my_bucket.put_object(Key='/abc', Body='ABCD')
```
If you run this snippet of code (along with required imports and so on), then an object with the key "/abc" will be placed in the bucket with the content specified.
When you view this on the AWS Console, it will seem a bit strange because at the "root" level of the S3 bucket, you'll find a "folder" with no name. Upon clicking this unnamed folder, you'll see an object named "abc" with the content specified. So this is the expected behavior when boto3 is used without moto.
When moto is used in a similar situation, let's say as follows:
```
@mock_s3
def do_something_moto():
    conn = boto3.resource('s3', region_name='us-west-2')
    bucket = conn.create_bucket(Bucket='my_bucket')
    bucket.put_object(Key='/abc', Body='ABCD')
```
If you run this code, you will get an error with a traceback ending with:
`ConnectionError: Connection refused: PUT https://foobar.s3.us-west-2.amazonaws.com//abc`
If you remove the leading forward slash from the Key in the put_object call:
`bucket.put_object(Key='abc', Body='ABCD')`
Then this code will run without an error. As pointed out above, when using boto3 without moto, a Key with a leading forward slash is valid and should not cause an error. This is really an edge case, but I think moto should be made to handle this situation correctly.
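For completeness, here is a minimal regression-style check of the expected behavior, written as an illustrative sketch rather than an actual moto test (the function name and the `CreateBucketConfiguration` argument are mine, not from the report). With the buggy behavior described above, the `put_object` call fails with the ConnectionError; against real S3, or a moto that handles leading-slash keys, it should pass.
```
import boto3
from moto import mock_s3


@mock_s3
def leading_slash_key_roundtrip():
    # Create a mocked bucket and store an object under a key with a leading "/".
    conn = boto3.resource('s3', region_name='us-west-2')
    bucket = conn.create_bucket(
        Bucket='my_bucket',
        CreateBucketConfiguration={'LocationConstraint': 'us-west-2'})
    bucket.put_object(Key='/abc', Body='ABCD')
    # Real S3 accepts this key, so the object should round-trip unchanged.
    assert bucket.Object('/abc').get()['Body'].read() == b'ABCD'
```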
As for versions of boto3 and moto being used:
boto3: 1.7.19
moto: 1.3.3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `moto/server.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import argparse
4 import json
5 import re
6 import sys
7 from threading import Lock
8
9 import six
10 from flask import Flask
11 from flask.testing import FlaskClient
12
13 from six.moves.urllib.parse import urlencode
14 from werkzeug.routing import BaseConverter
15 from werkzeug.serving import run_simple
16
17 from moto.backends import BACKENDS
18 from moto.core.utils import convert_flask_to_httpretty_response
19
20
21 HTTP_METHODS = ["GET", "POST", "PUT", "DELETE", "HEAD", "PATCH"]
22
23
24 DEFAULT_SERVICE_REGION = ('s3', 'us-east-1')
25
26 # Map of unsigned calls to service-region as per AWS API docs
27 # https://docs.aws.amazon.com/cognito/latest/developerguide/resource-permissions.html#amazon-cognito-signed-versus-unsigned-apis
28 UNSIGNED_REQUESTS = {
29 'AWSCognitoIdentityService': ('cognito-identity', 'us-east-1'),
30 'AWSCognitoIdentityProviderService': ('cognito-idp', 'us-east-1'),
31 }
32
33
34 class DomainDispatcherApplication(object):
35 """
36 Dispatch requests to different applications based on the "Host:" header
37 value. We'll match the host header value with the url_bases of each backend.
38 """
39
40 def __init__(self, create_app, service=None):
41 self.create_app = create_app
42 self.lock = Lock()
43 self.app_instances = {}
44 self.service = service
45
46 def get_backend_for_host(self, host):
47 if host == 'moto_api':
48 return host
49
50 if self.service:
51 return self.service
52
53 if host in BACKENDS:
54 return host
55
56 for backend_name, backend in BACKENDS.items():
57 for url_base in list(backend.values())[0].url_bases:
58 if re.match(url_base, 'http://%s' % host):
59 return backend_name
60
61 def infer_service_region_host(self, environ):
62 auth = environ.get('HTTP_AUTHORIZATION')
63 if auth:
64 # Signed request
65 # Parse auth header to find service assuming a SigV4 request
66 # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
67 # ['Credential=sdffdsa', '20170220', 'us-east-1', 'sns', 'aws4_request']
68 try:
69 credential_scope = auth.split(",")[0].split()[1]
70 _, _, region, service, _ = credential_scope.split("/")
71 except ValueError:
72 # Signature format does not match, this is exceptional and we can't
73 # infer a service-region. A reduced set of services still use
74 # the deprecated SigV2, ergo prefer S3 as most likely default.
75 # https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html
76 service, region = DEFAULT_SERVICE_REGION
77 else:
78 # Unsigned request
79 target = environ.get('HTTP_X_AMZ_TARGET')
80 if target:
81 service, _ = target.split('.', 1)
82 service, region = UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)
83 else:
84 # S3 is the last resort when the target is also unknown
85 service, region = DEFAULT_SERVICE_REGION
86
87 if service == 'dynamodb':
88 if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):
89 host = 'dynamodbstreams'
90 else:
91 dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split("_")[1].split(".")[0]
92 # If Newer API version, use dynamodb2
93 if dynamo_api_version > "20111205":
94 host = "dynamodb2"
95 else:
96 host = "{service}.{region}.amazonaws.com".format(
97 service=service, region=region)
98
99 return host
100
101 def get_application(self, environ):
102 path_info = environ.get('PATH_INFO', '')
103
104 # The URL path might contain non-ASCII text, for instance unicode S3 bucket names
105 if six.PY2 and isinstance(path_info, str):
106 path_info = six.u(path_info)
107 if six.PY3 and isinstance(path_info, six.binary_type):
108 path_info = path_info.decode('utf-8')
109
110 if path_info.startswith("/moto-api") or path_info == "/favicon.ico":
111 host = "moto_api"
112 elif path_info.startswith("/latest/meta-data/"):
113 host = "instance_metadata"
114 else:
115 host = environ['HTTP_HOST'].split(':')[0]
116
117 with self.lock:
118 backend = self.get_backend_for_host(host)
119 if not backend:
120 # No regular backend found; try parsing other headers
121 host = self.infer_service_region_host(environ)
122 backend = self.get_backend_for_host(host)
123
124 app = self.app_instances.get(backend, None)
125 if app is None:
126 app = self.create_app(backend)
127 self.app_instances[backend] = app
128 return app
129
130 def __call__(self, environ, start_response):
131 backend_app = self.get_application(environ)
132 return backend_app(environ, start_response)
133
134
135 class RegexConverter(BaseConverter):
136 # http://werkzeug.pocoo.org/docs/routing/#custom-converters
137
138 def __init__(self, url_map, *items):
139 super(RegexConverter, self).__init__(url_map)
140 self.regex = items[0]
141
142
143 class AWSTestHelper(FlaskClient):
144
145 def action_data(self, action_name, **kwargs):
146 """
147 Method calls resource with action_name and returns data of response.
148 """
149 opts = {"Action": action_name}
150 opts.update(kwargs)
151 res = self.get("/?{0}".format(urlencode(opts)),
152 headers={"Host": "{0}.us-east-1.amazonaws.com".format(self.application.service)})
153 return res.data.decode("utf-8")
154
155 def action_json(self, action_name, **kwargs):
156 """
157 Method calls resource with action_name and returns object obtained via
158 deserialization of output.
159 """
160 return json.loads(self.action_data(action_name, **kwargs))
161
162
163 def create_backend_app(service):
164 from werkzeug.routing import Map
165
166 # Create the backend_app
167 backend_app = Flask(__name__)
168 backend_app.debug = True
169 backend_app.service = service
170
171 # Reset view functions to reset the app
172 backend_app.view_functions = {}
173 backend_app.url_map = Map()
174 backend_app.url_map.converters['regex'] = RegexConverter
175 backend = list(BACKENDS[service].values())[0]
176 for url_path, handler in backend.flask_paths.items():
177 if handler.__name__ == 'dispatch':
178 endpoint = '{0}.dispatch'.format(handler.__self__.__name__)
179 else:
180 endpoint = None
181
182 original_endpoint = endpoint
183 index = 2
184 while endpoint in backend_app.view_functions:
185 # HACK: Sometimes we map the same view to multiple url_paths. Flask
186 # requries us to have different names.
187 endpoint = original_endpoint + str(index)
188 index += 1
189
190 backend_app.add_url_rule(
191 url_path,
192 endpoint=endpoint,
193 methods=HTTP_METHODS,
194 view_func=convert_flask_to_httpretty_response(handler),
195 strict_slashes=False,
196 )
197
198 backend_app.test_client_class = AWSTestHelper
199 return backend_app
200
201
202 def main(argv=sys.argv[1:]):
203 parser = argparse.ArgumentParser()
204
205 # Keep this for backwards compat
206 parser.add_argument(
207 "service",
208 type=str,
209 nargs='?', # http://stackoverflow.com/a/4480202/731592
210 default=None)
211 parser.add_argument(
212 '-H', '--host', type=str,
213 help='Which host to bind',
214 default='127.0.0.1')
215 parser.add_argument(
216 '-p', '--port', type=int,
217 help='Port number to use for connection',
218 default=5000)
219 parser.add_argument(
220 '-r', '--reload',
221 action='store_true',
222 help='Reload server on a file change',
223 default=False
224 )
225 parser.add_argument(
226 '-s', '--ssl',
227 action='store_true',
228 help='Enable SSL encrypted connection with auto-generated certificate (use https://... URL)',
229 default=False
230 )
231 parser.add_argument(
232 '-c', '--ssl-cert', type=str,
233 help='Path to SSL certificate',
234 default=None)
235 parser.add_argument(
236 '-k', '--ssl-key', type=str,
237 help='Path to SSL private key',
238 default=None)
239
240 args = parser.parse_args(argv)
241
242 # Wrap the main application
243 main_app = DomainDispatcherApplication(
244 create_backend_app, service=args.service)
245 main_app.debug = True
246
247 ssl_context = None
248 if args.ssl_key and args.ssl_cert:
249 ssl_context = (args.ssl_cert, args.ssl_key)
250 elif args.ssl:
251 ssl_context = 'adhoc'
252
253 run_simple(args.host, args.port, main_app,
254 threaded=True, use_reloader=args.reload,
255 ssl_context=ssl_context)
256
257
258 if __name__ == '__main__':
259 main()
260
```
Path: `moto/s3/urls.py`
Content:
```
1 from __future__ import unicode_literals
2
3 from .responses import S3ResponseInstance
4
5 url_bases = [
6 "https?://s3(.*).amazonaws.com",
7 r"https?://(?P<bucket_name>[a-zA-Z0-9\-_.]*)\.?s3(.*).amazonaws.com"
8 ]
9
10 url_paths = {
11 # subdomain bucket
12 '{0}/$': S3ResponseInstance.bucket_response,
13
14 # subdomain key of path-based bucket
15 '{0}/(?P<key_or_bucket_name>[^/]+)/?$': S3ResponseInstance.ambiguous_response,
16 # path-based bucket + key
17 '{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)': S3ResponseInstance.key_response,
18 }
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/moto/s3/urls.py b/moto/s3/urls.py
--- a/moto/s3/urls.py
+++ b/moto/s3/urls.py
@@ -15,4 +15,6 @@
'{0}/(?P<key_or_bucket_name>[^/]+)/?$': S3ResponseInstance.ambiguous_response,
# path-based bucket + key
'{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)': S3ResponseInstance.key_response,
+ # subdomain bucket + key with empty first part of path
+ '{0}//(?P<key_name>.*)$': S3ResponseInstance.key_response,
}
diff --git a/moto/server.py b/moto/server.py
--- a/moto/server.py
+++ b/moto/server.py
@@ -174,10 +174,11 @@
backend_app.url_map.converters['regex'] = RegexConverter
backend = list(BACKENDS[service].values())[0]
for url_path, handler in backend.flask_paths.items():
+ view_func = convert_flask_to_httpretty_response(handler)
if handler.__name__ == 'dispatch':
endpoint = '{0}.dispatch'.format(handler.__self__.__name__)
else:
- endpoint = None
+ endpoint = view_func.__name__
original_endpoint = endpoint
index = 2
@@ -191,7 +192,7 @@
url_path,
endpoint=endpoint,
methods=HTTP_METHODS,
- view_func=convert_flask_to_httpretty_response(handler),
+ view_func=view_func,
strict_slashes=False,
)
| {"golden_diff": "diff --git a/moto/s3/urls.py b/moto/s3/urls.py\n--- a/moto/s3/urls.py\n+++ b/moto/s3/urls.py\n@@ -15,4 +15,6 @@\n '{0}/(?P<key_or_bucket_name>[^/]+)/?$': S3ResponseInstance.ambiguous_response,\n # path-based bucket + key\n '{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)': S3ResponseInstance.key_response,\n+ # subdomain bucket + key with empty first part of path\n+ '{0}//(?P<key_name>.*)$': S3ResponseInstance.key_response,\n }\ndiff --git a/moto/server.py b/moto/server.py\n--- a/moto/server.py\n+++ b/moto/server.py\n@@ -174,10 +174,11 @@\n backend_app.url_map.converters['regex'] = RegexConverter\n backend = list(BACKENDS[service].values())[0]\n for url_path, handler in backend.flask_paths.items():\n+ view_func = convert_flask_to_httpretty_response(handler)\n if handler.__name__ == 'dispatch':\n endpoint = '{0}.dispatch'.format(handler.__self__.__name__)\n else:\n- endpoint = None\n+ endpoint = view_func.__name__\n \n original_endpoint = endpoint\n index = 2\n@@ -191,7 +192,7 @@\n url_path,\n endpoint=endpoint,\n methods=HTTP_METHODS,\n- view_func=convert_flask_to_httpretty_response(handler),\n+ view_func=view_func,\n strict_slashes=False,\n )\n", "issue": "mock_s3 gives \"ConnectionError: Connection refused...\" error when Key used in put_object call contains leading forward slash\nThis is very specific situation, however the behavior of moto in this situation doesn't match what happens when you use the actual boto3 connecting to AWS.\r\n\r\nThe situation is if you have a Key for a S3 object that you are putting into a S3 bucket that has a leading forward slash (\"/\").\r\n\r\nSo if your Key is \"/abc\" and you try putting an object into a S3 bucket with the key as follows:\r\n\r\n```\r\ndef do_something():\r\n my_bucket = boto3.resource('s3', region_name='us-west-2').Bucket('my_bucket')\r\n my_bucket.put_object(Key='/abc', Body='ABCD')\r\n```\r\n\r\nIf you run this snippet of code (along with required imports and so on), then an object with the key \"/abc\" will be placed in the bucket with the content specified.\r\n\r\nWhen you view this on the AWS Console, it will seem a bit strange because at the \"root\" level of the S3 bucket, you'll find a \"folder\" with no name. Upon clicking this no-named folder, you'll see an object named \"abc\" with the content specified. So this is the expected behavior when boto3 is being used without moto.\r\n\r\nWhen moto is used in a similar situation, lets say as follows:\r\n\r\n```\r\n@moto_s3\r\ndef do_something_moto()\r\n conn = botot3.resource('s3', region_name='us-west-2')\r\n conn.create_bucket(Bucket='my_bucket')\r\n bucket.put_object(Key='/abc', Body='ABCD')\r\n```\r\n\r\nIf you run this code, you will get an error with a traceback ending with:\r\n\r\n`ConnectionError: Connection refused: PUT https://foobar.s3.us-west-2.amazonaws.com//abc`\r\n\r\nIf you remove the leading forward slash from the Key in the put_object call:\r\n\r\n`bucket.put_object(Key='abc', Body='ABCD')`\r\n\r\nThen this code will run without an error. As pointed out above when using the boto3 without moto, a Key with a leading forward slash is a valid key and should not cause an error. 
This is really an edge case, but I think moto should be made to handle this situation correctly.\r\n\r\nAs for versions of boto3 and moto being used:\r\nboto3: 1.7.19\r\nmoto: 1.3.3\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport json\nimport re\nimport sys\nfrom threading import Lock\n\nimport six\nfrom flask import Flask\nfrom flask.testing import FlaskClient\n\nfrom six.moves.urllib.parse import urlencode\nfrom werkzeug.routing import BaseConverter\nfrom werkzeug.serving import run_simple\n\nfrom moto.backends import BACKENDS\nfrom moto.core.utils import convert_flask_to_httpretty_response\n\n\nHTTP_METHODS = [\"GET\", \"POST\", \"PUT\", \"DELETE\", \"HEAD\", \"PATCH\"]\n\n\nDEFAULT_SERVICE_REGION = ('s3', 'us-east-1')\n\n# Map of unsigned calls to service-region as per AWS API docs\n# https://docs.aws.amazon.com/cognito/latest/developerguide/resource-permissions.html#amazon-cognito-signed-versus-unsigned-apis\nUNSIGNED_REQUESTS = {\n 'AWSCognitoIdentityService': ('cognito-identity', 'us-east-1'),\n 'AWSCognitoIdentityProviderService': ('cognito-idp', 'us-east-1'),\n}\n\n\nclass DomainDispatcherApplication(object):\n \"\"\"\n Dispatch requests to different applications based on the \"Host:\" header\n value. We'll match the host header value with the url_bases of each backend.\n \"\"\"\n\n def __init__(self, create_app, service=None):\n self.create_app = create_app\n self.lock = Lock()\n self.app_instances = {}\n self.service = service\n\n def get_backend_for_host(self, host):\n if host == 'moto_api':\n return host\n\n if self.service:\n return self.service\n\n if host in BACKENDS:\n return host\n\n for backend_name, backend in BACKENDS.items():\n for url_base in list(backend.values())[0].url_bases:\n if re.match(url_base, 'http://%s' % host):\n return backend_name\n\n def infer_service_region_host(self, environ):\n auth = environ.get('HTTP_AUTHORIZATION')\n if auth:\n # Signed request\n # Parse auth header to find service assuming a SigV4 request\n # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html\n # ['Credential=sdffdsa', '20170220', 'us-east-1', 'sns', 'aws4_request']\n try:\n credential_scope = auth.split(\",\")[0].split()[1]\n _, _, region, service, _ = credential_scope.split(\"/\")\n except ValueError:\n # Signature format does not match, this is exceptional and we can't\n # infer a service-region. 
A reduced set of services still use\n # the deprecated SigV2, ergo prefer S3 as most likely default.\n # https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html\n service, region = DEFAULT_SERVICE_REGION\n else:\n # Unsigned request\n target = environ.get('HTTP_X_AMZ_TARGET')\n if target:\n service, _ = target.split('.', 1)\n service, region = UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)\n else:\n # S3 is the last resort when the target is also unknown\n service, region = DEFAULT_SERVICE_REGION\n\n if service == 'dynamodb':\n if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):\n host = 'dynamodbstreams'\n else:\n dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split(\"_\")[1].split(\".\")[0]\n # If Newer API version, use dynamodb2\n if dynamo_api_version > \"20111205\":\n host = \"dynamodb2\"\n else:\n host = \"{service}.{region}.amazonaws.com\".format(\n service=service, region=region)\n\n return host\n\n def get_application(self, environ):\n path_info = environ.get('PATH_INFO', '')\n\n # The URL path might contain non-ASCII text, for instance unicode S3 bucket names\n if six.PY2 and isinstance(path_info, str):\n path_info = six.u(path_info)\n if six.PY3 and isinstance(path_info, six.binary_type):\n path_info = path_info.decode('utf-8')\n\n if path_info.startswith(\"/moto-api\") or path_info == \"/favicon.ico\":\n host = \"moto_api\"\n elif path_info.startswith(\"/latest/meta-data/\"):\n host = \"instance_metadata\"\n else:\n host = environ['HTTP_HOST'].split(':')[0]\n\n with self.lock:\n backend = self.get_backend_for_host(host)\n if not backend:\n # No regular backend found; try parsing other headers\n host = self.infer_service_region_host(environ)\n backend = self.get_backend_for_host(host)\n\n app = self.app_instances.get(backend, None)\n if app is None:\n app = self.create_app(backend)\n self.app_instances[backend] = app\n return app\n\n def __call__(self, environ, start_response):\n backend_app = self.get_application(environ)\n return backend_app(environ, start_response)\n\n\nclass RegexConverter(BaseConverter):\n # http://werkzeug.pocoo.org/docs/routing/#custom-converters\n\n def __init__(self, url_map, *items):\n super(RegexConverter, self).__init__(url_map)\n self.regex = items[0]\n\n\nclass AWSTestHelper(FlaskClient):\n\n def action_data(self, action_name, **kwargs):\n \"\"\"\n Method calls resource with action_name and returns data of response.\n \"\"\"\n opts = {\"Action\": action_name}\n opts.update(kwargs)\n res = self.get(\"/?{0}\".format(urlencode(opts)),\n headers={\"Host\": \"{0}.us-east-1.amazonaws.com\".format(self.application.service)})\n return res.data.decode(\"utf-8\")\n\n def action_json(self, action_name, **kwargs):\n \"\"\"\n Method calls resource with action_name and returns object obtained via\n deserialization of output.\n \"\"\"\n return json.loads(self.action_data(action_name, **kwargs))\n\n\ndef create_backend_app(service):\n from werkzeug.routing import Map\n\n # Create the backend_app\n backend_app = Flask(__name__)\n backend_app.debug = True\n backend_app.service = service\n\n # Reset view functions to reset the app\n backend_app.view_functions = {}\n backend_app.url_map = Map()\n backend_app.url_map.converters['regex'] = RegexConverter\n backend = list(BACKENDS[service].values())[0]\n for url_path, handler in backend.flask_paths.items():\n if handler.__name__ == 'dispatch':\n endpoint = '{0}.dispatch'.format(handler.__self__.__name__)\n else:\n endpoint = None\n\n original_endpoint = endpoint\n index = 2\n while 
endpoint in backend_app.view_functions:\n # HACK: Sometimes we map the same view to multiple url_paths. Flask\n # requries us to have different names.\n endpoint = original_endpoint + str(index)\n index += 1\n\n backend_app.add_url_rule(\n url_path,\n endpoint=endpoint,\n methods=HTTP_METHODS,\n view_func=convert_flask_to_httpretty_response(handler),\n strict_slashes=False,\n )\n\n backend_app.test_client_class = AWSTestHelper\n return backend_app\n\n\ndef main(argv=sys.argv[1:]):\n parser = argparse.ArgumentParser()\n\n # Keep this for backwards compat\n parser.add_argument(\n \"service\",\n type=str,\n nargs='?', # http://stackoverflow.com/a/4480202/731592\n default=None)\n parser.add_argument(\n '-H', '--host', type=str,\n help='Which host to bind',\n default='127.0.0.1')\n parser.add_argument(\n '-p', '--port', type=int,\n help='Port number to use for connection',\n default=5000)\n parser.add_argument(\n '-r', '--reload',\n action='store_true',\n help='Reload server on a file change',\n default=False\n )\n parser.add_argument(\n '-s', '--ssl',\n action='store_true',\n help='Enable SSL encrypted connection with auto-generated certificate (use https://... URL)',\n default=False\n )\n parser.add_argument(\n '-c', '--ssl-cert', type=str,\n help='Path to SSL certificate',\n default=None)\n parser.add_argument(\n '-k', '--ssl-key', type=str,\n help='Path to SSL private key',\n default=None)\n\n args = parser.parse_args(argv)\n\n # Wrap the main application\n main_app = DomainDispatcherApplication(\n create_backend_app, service=args.service)\n main_app.debug = True\n\n ssl_context = None\n if args.ssl_key and args.ssl_cert:\n ssl_context = (args.ssl_cert, args.ssl_key)\n elif args.ssl:\n ssl_context = 'adhoc'\n\n run_simple(args.host, args.port, main_app,\n threaded=True, use_reloader=args.reload,\n ssl_context=ssl_context)\n\n\nif __name__ == '__main__':\n main()\n", "path": "moto/server.py"}, {"content": "from __future__ import unicode_literals\n\nfrom .responses import S3ResponseInstance\n\nurl_bases = [\n \"https?://s3(.*).amazonaws.com\",\n r\"https?://(?P<bucket_name>[a-zA-Z0-9\\-_.]*)\\.?s3(.*).amazonaws.com\"\n]\n\nurl_paths = {\n # subdomain bucket\n '{0}/$': S3ResponseInstance.bucket_response,\n\n # subdomain key of path-based bucket\n '{0}/(?P<key_or_bucket_name>[^/]+)/?$': S3ResponseInstance.ambiguous_response,\n # path-based bucket + key\n '{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)': S3ResponseInstance.key_response,\n}\n", "path": "moto/s3/urls.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport json\nimport re\nimport sys\nfrom threading import Lock\n\nimport six\nfrom flask import Flask\nfrom flask.testing import FlaskClient\n\nfrom six.moves.urllib.parse import urlencode\nfrom werkzeug.routing import BaseConverter\nfrom werkzeug.serving import run_simple\n\nfrom moto.backends import BACKENDS\nfrom moto.core.utils import convert_flask_to_httpretty_response\n\n\nHTTP_METHODS = [\"GET\", \"POST\", \"PUT\", \"DELETE\", \"HEAD\", \"PATCH\"]\n\n\nDEFAULT_SERVICE_REGION = ('s3', 'us-east-1')\n\n# Map of unsigned calls to service-region as per AWS API docs\n# https://docs.aws.amazon.com/cognito/latest/developerguide/resource-permissions.html#amazon-cognito-signed-versus-unsigned-apis\nUNSIGNED_REQUESTS = {\n 'AWSCognitoIdentityService': ('cognito-identity', 'us-east-1'),\n 'AWSCognitoIdentityProviderService': ('cognito-idp', 'us-east-1'),\n}\n\n\nclass DomainDispatcherApplication(object):\n \"\"\"\n Dispatch requests to 
different applications based on the \"Host:\" header\n value. We'll match the host header value with the url_bases of each backend.\n \"\"\"\n\n def __init__(self, create_app, service=None):\n self.create_app = create_app\n self.lock = Lock()\n self.app_instances = {}\n self.service = service\n\n def get_backend_for_host(self, host):\n if host == 'moto_api':\n return host\n\n if self.service:\n return self.service\n\n if host in BACKENDS:\n return host\n\n for backend_name, backend in BACKENDS.items():\n for url_base in list(backend.values())[0].url_bases:\n if re.match(url_base, 'http://%s' % host):\n return backend_name\n\n def infer_service_region_host(self, environ):\n auth = environ.get('HTTP_AUTHORIZATION')\n if auth:\n # Signed request\n # Parse auth header to find service assuming a SigV4 request\n # https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html\n # ['Credential=sdffdsa', '20170220', 'us-east-1', 'sns', 'aws4_request']\n try:\n credential_scope = auth.split(\",\")[0].split()[1]\n _, _, region, service, _ = credential_scope.split(\"/\")\n except ValueError:\n # Signature format does not match, this is exceptional and we can't\n # infer a service-region. A reduced set of services still use\n # the deprecated SigV2, ergo prefer S3 as most likely default.\n # https://docs.aws.amazon.com/general/latest/gr/signature-version-2.html\n service, region = DEFAULT_SERVICE_REGION\n else:\n # Unsigned request\n target = environ.get('HTTP_X_AMZ_TARGET')\n if target:\n service, _ = target.split('.', 1)\n service, region = UNSIGNED_REQUESTS.get(service, DEFAULT_SERVICE_REGION)\n else:\n # S3 is the last resort when the target is also unknown\n service, region = DEFAULT_SERVICE_REGION\n\n if service == 'dynamodb':\n if environ['HTTP_X_AMZ_TARGET'].startswith('DynamoDBStreams'):\n host = 'dynamodbstreams'\n else:\n dynamo_api_version = environ['HTTP_X_AMZ_TARGET'].split(\"_\")[1].split(\".\")[0]\n # If Newer API version, use dynamodb2\n if dynamo_api_version > \"20111205\":\n host = \"dynamodb2\"\n else:\n host = \"{service}.{region}.amazonaws.com\".format(\n service=service, region=region)\n\n return host\n\n def get_application(self, environ):\n path_info = environ.get('PATH_INFO', '')\n\n # The URL path might contain non-ASCII text, for instance unicode S3 bucket names\n if six.PY2 and isinstance(path_info, str):\n path_info = six.u(path_info)\n if six.PY3 and isinstance(path_info, six.binary_type):\n path_info = path_info.decode('utf-8')\n\n if path_info.startswith(\"/moto-api\") or path_info == \"/favicon.ico\":\n host = \"moto_api\"\n elif path_info.startswith(\"/latest/meta-data/\"):\n host = \"instance_metadata\"\n else:\n host = environ['HTTP_HOST'].split(':')[0]\n\n with self.lock:\n backend = self.get_backend_for_host(host)\n if not backend:\n # No regular backend found; try parsing other headers\n host = self.infer_service_region_host(environ)\n backend = self.get_backend_for_host(host)\n\n app = self.app_instances.get(backend, None)\n if app is None:\n app = self.create_app(backend)\n self.app_instances[backend] = app\n return app\n\n def __call__(self, environ, start_response):\n backend_app = self.get_application(environ)\n return backend_app(environ, start_response)\n\n\nclass RegexConverter(BaseConverter):\n # http://werkzeug.pocoo.org/docs/routing/#custom-converters\n\n def __init__(self, url_map, *items):\n super(RegexConverter, self).__init__(url_map)\n self.regex = items[0]\n\n\nclass AWSTestHelper(FlaskClient):\n\n def action_data(self, 
action_name, **kwargs):\n \"\"\"\n Method calls resource with action_name and returns data of response.\n \"\"\"\n opts = {\"Action\": action_name}\n opts.update(kwargs)\n res = self.get(\"/?{0}\".format(urlencode(opts)),\n headers={\"Host\": \"{0}.us-east-1.amazonaws.com\".format(self.application.service)})\n return res.data.decode(\"utf-8\")\n\n def action_json(self, action_name, **kwargs):\n \"\"\"\n Method calls resource with action_name and returns object obtained via\n deserialization of output.\n \"\"\"\n return json.loads(self.action_data(action_name, **kwargs))\n\n\ndef create_backend_app(service):\n from werkzeug.routing import Map\n\n # Create the backend_app\n backend_app = Flask(__name__)\n backend_app.debug = True\n backend_app.service = service\n\n # Reset view functions to reset the app\n backend_app.view_functions = {}\n backend_app.url_map = Map()\n backend_app.url_map.converters['regex'] = RegexConverter\n backend = list(BACKENDS[service].values())[0]\n for url_path, handler in backend.flask_paths.items():\n view_func = convert_flask_to_httpretty_response(handler)\n if handler.__name__ == 'dispatch':\n endpoint = '{0}.dispatch'.format(handler.__self__.__name__)\n else:\n endpoint = view_func.__name__\n\n original_endpoint = endpoint\n index = 2\n while endpoint in backend_app.view_functions:\n # HACK: Sometimes we map the same view to multiple url_paths. Flask\n # requries us to have different names.\n endpoint = original_endpoint + str(index)\n index += 1\n\n backend_app.add_url_rule(\n url_path,\n endpoint=endpoint,\n methods=HTTP_METHODS,\n view_func=view_func,\n strict_slashes=False,\n )\n\n backend_app.test_client_class = AWSTestHelper\n return backend_app\n\n\ndef main(argv=sys.argv[1:]):\n parser = argparse.ArgumentParser()\n\n # Keep this for backwards compat\n parser.add_argument(\n \"service\",\n type=str,\n nargs='?', # http://stackoverflow.com/a/4480202/731592\n default=None)\n parser.add_argument(\n '-H', '--host', type=str,\n help='Which host to bind',\n default='127.0.0.1')\n parser.add_argument(\n '-p', '--port', type=int,\n help='Port number to use for connection',\n default=5000)\n parser.add_argument(\n '-r', '--reload',\n action='store_true',\n help='Reload server on a file change',\n default=False\n )\n parser.add_argument(\n '-s', '--ssl',\n action='store_true',\n help='Enable SSL encrypted connection with auto-generated certificate (use https://... 
URL)',\n default=False\n )\n parser.add_argument(\n '-c', '--ssl-cert', type=str,\n help='Path to SSL certificate',\n default=None)\n parser.add_argument(\n '-k', '--ssl-key', type=str,\n help='Path to SSL private key',\n default=None)\n\n args = parser.parse_args(argv)\n\n # Wrap the main application\n main_app = DomainDispatcherApplication(\n create_backend_app, service=args.service)\n main_app.debug = True\n\n ssl_context = None\n if args.ssl_key and args.ssl_cert:\n ssl_context = (args.ssl_cert, args.ssl_key)\n elif args.ssl:\n ssl_context = 'adhoc'\n\n run_simple(args.host, args.port, main_app,\n threaded=True, use_reloader=args.reload,\n ssl_context=ssl_context)\n\n\nif __name__ == '__main__':\n main()\n", "path": "moto/server.py"}, {"content": "from __future__ import unicode_literals\n\nfrom .responses import S3ResponseInstance\n\nurl_bases = [\n \"https?://s3(.*).amazonaws.com\",\n r\"https?://(?P<bucket_name>[a-zA-Z0-9\\-_.]*)\\.?s3(.*).amazonaws.com\"\n]\n\nurl_paths = {\n # subdomain bucket\n '{0}/$': S3ResponseInstance.bucket_response,\n\n # subdomain key of path-based bucket\n '{0}/(?P<key_or_bucket_name>[^/]+)/?$': S3ResponseInstance.ambiguous_response,\n # path-based bucket + key\n '{0}/(?P<bucket_name_path>[^/]+)/(?P<key_name>.+)': S3ResponseInstance.key_response,\n # subdomain bucket + key with empty first part of path\n '{0}//(?P<key_name>.*)$': S3ResponseInstance.key_response,\n}\n", "path": "moto/s3/urls.py"}]} | 3,632 | 367 |
gh_patches_debug_4536 | rasdani/github-patches | git_diff | ansible__ansible-modules-extras-1873 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Blockinfile module : Documentation mistake
##### Issue Type:
- Documentation Report
##### Plugin Name:
Blockinfile module
##### Ansible Version:
```
$ ansible --version
ansible 2.0.1.0
config file = /data/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### Ansible Configuration:
n.a.
##### Environment:
n.a
##### Summary:
Wrong spacing in official documentation
##### Steps To Reproduce:
```
https://docs.ansible.com/ansible/blockinfile_module.html
Chapter Example, last example using < with_items >
```
##### Expected Results:
Expression with_items: should be level with blockinfile:
##### Actual Results:
```
Example does not work as expression < with_items: > is not level with <blockinfile:>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `files/blockinfile.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2014, 2015 YAEGASHI Takeshi <[email protected]>
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20
21 import re
22 import os
23 import tempfile
24
25 DOCUMENTATION = """
26 ---
27 module: blockinfile
28 author:
29 - 'YAEGASHI Takeshi (@yaegashi)'
30 extends_documentation_fragment:
31 - files
32 - validate
33 short_description: Insert/update/remove a text block
34 surrounded by marker lines.
35 version_added: '2.0'
36 description:
37 - This module will insert/update/remove a block of multi-line text
38 surrounded by customizable marker lines.
39 notes:
40 - This module supports check mode.
41 - When using 'with_*' loops be aware that if you do not set a unique mark the block will be overwritten on each iteration.
42 options:
43 dest:
44 aliases: [ name, destfile ]
45 required: true
46 description:
47 - The file to modify.
48 state:
49 required: false
50 choices: [ present, absent ]
51 default: present
52 description:
53 - Whether the block should be there or not.
54 marker:
55 required: false
56 default: '# {mark} ANSIBLE MANAGED BLOCK'
57 description:
58 - The marker line template.
59 "{mark}" will be replaced with "BEGIN" or "END".
60 block:
61 aliases: [ content ]
62 required: false
63 default: ''
64 description:
65 - The text to insert inside the marker lines.
66 If it's missing or an empty string,
67 the block will be removed as if C(state) were specified to C(absent).
68 insertafter:
69 required: false
70 default: EOF
71 description:
72 - If specified, the block will be inserted after the last match of
73 specified regular expression. A special value is available; C(EOF) for
74 inserting the block at the end of the file. If specified regular
75 expresion has no matches, C(EOF) will be used instead.
76 choices: [ 'EOF', '*regex*' ]
77 insertbefore:
78 required: false
79 default: None
80 description:
81 - If specified, the block will be inserted before the last match of
82 specified regular expression. A special value is available; C(BOF) for
83 inserting the block at the beginning of the file. If specified regular
84 expresion has no matches, the block will be inserted at the end of the
85 file.
86 choices: [ 'BOF', '*regex*' ]
87 create:
88 required: false
89 default: 'no'
90 choices: [ 'yes', 'no' ]
91 description:
92 - Create a new file if it doesn't exist.
93 backup:
94 required: false
95 default: 'no'
96 choices: [ 'yes', 'no' ]
97 description:
98 - Create a backup file including the timestamp information so you can
99 get the original file back if you somehow clobbered it incorrectly.
100 follow:
101 required: false
102 default: "no"
103 choices: [ "yes", "no" ]
104 description:
105 - 'This flag indicates that filesystem links, if they exist, should be followed.'
106 version_added: "2.1"
107 """
108
109 EXAMPLES = r"""
110 - name: insert/update "Match User" configuation block in /etc/ssh/sshd_config
111 blockinfile:
112 dest: /etc/ssh/sshd_config
113 block: |
114 Match User ansible-agent
115 PasswordAuthentication no
116
117 - name: insert/update eth0 configuration stanza in /etc/network/interfaces
118 (it might be better to copy files into /etc/network/interfaces.d/)
119 blockinfile:
120 dest: /etc/network/interfaces
121 block: |
122 iface eth0 inet static
123 address 192.168.0.1
124 netmask 255.255.255.0
125
126 - name: insert/update HTML surrounded by custom markers after <body> line
127 blockinfile:
128 dest: /var/www/html/index.html
129 marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
130 insertafter: "<body>"
131 content: |
132 <h1>Welcome to {{ansible_hostname}}</h1>
133 <p>Last updated on {{ansible_date_time.iso8601}}</p>
134
135 - name: remove HTML as well as surrounding markers
136 blockinfile:
137 dest: /var/www/html/index.html
138 marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
139 content: ""
140
141 - name: insert/update "Match User" configuation block in /etc/ssh/sshd_config
142 blockinfile:
143 dest: /etc/hosts
144 block: |
145 {{item.name}} {{item.ip}}
146 marker: "# {mark} ANSIBLE MANAGED BLOCK {{item.name}}"
147 with_items:
148 - { name: host1, ip: 10.10.1.10 }
149 - { name: host2, ip: 10.10.1.11 }
150 - { name: host3, ip: 10.10.1.12 }
151 """
152
153
154 def write_changes(module, contents, dest):
155
156 tmpfd, tmpfile = tempfile.mkstemp()
157 f = os.fdopen(tmpfd, 'wb')
158 f.write(contents)
159 f.close()
160
161 validate = module.params.get('validate', None)
162 valid = not validate
163 if validate:
164 if "%s" not in validate:
165 module.fail_json(msg="validate must contain %%s: %s" % (validate))
166 (rc, out, err) = module.run_command(validate % tmpfile)
167 valid = rc == 0
168 if rc != 0:
169 module.fail_json(msg='failed to validate: '
170 'rc:%s error:%s' % (rc, err))
171 if valid:
172 module.atomic_move(tmpfile, dest)
173
174
175 def check_file_attrs(module, changed, message):
176
177 file_args = module.load_file_common_arguments(module.params)
178 if module.set_file_attributes_if_different(file_args, False):
179
180 if changed:
181 message += " and "
182 changed = True
183 message += "ownership, perms or SE linux context changed"
184
185 return message, changed
186
187
188 def main():
189 module = AnsibleModule(
190 argument_spec=dict(
191 dest=dict(required=True, aliases=['name', 'destfile']),
192 state=dict(default='present', choices=['absent', 'present']),
193 marker=dict(default='# {mark} ANSIBLE MANAGED BLOCK', type='str'),
194 block=dict(default='', type='str', aliases=['content']),
195 insertafter=dict(default=None),
196 insertbefore=dict(default=None),
197 create=dict(default=False, type='bool'),
198 backup=dict(default=False, type='bool'),
199 validate=dict(default=None, type='str'),
200 ),
201 mutually_exclusive=[['insertbefore', 'insertafter']],
202 add_file_common_args=True,
203 supports_check_mode=True
204 )
205
206 params = module.params
207 dest = os.path.expanduser(params['dest'])
208 if module.boolean(params.get('follow', None)):
209 dest = os.path.realpath(dest)
210
211 if os.path.isdir(dest):
212 module.fail_json(rc=256,
213 msg='Destination %s is a directory !' % dest)
214
215 if not os.path.exists(dest):
216 if not module.boolean(params['create']):
217 module.fail_json(rc=257,
218 msg='Destination %s does not exist !' % dest)
219 original = None
220 lines = []
221 else:
222 f = open(dest, 'rb')
223 original = f.read()
224 f.close()
225 lines = original.splitlines()
226
227 insertbefore = params['insertbefore']
228 insertafter = params['insertafter']
229 block = params['block']
230 marker = params['marker']
231 present = params['state'] == 'present'
232
233 if insertbefore is None and insertafter is None:
234 insertafter = 'EOF'
235
236 if insertafter not in (None, 'EOF'):
237 insertre = re.compile(insertafter)
238 elif insertbefore not in (None, 'BOF'):
239 insertre = re.compile(insertbefore)
240 else:
241 insertre = None
242
243 marker0 = re.sub(r'{mark}', 'BEGIN', marker)
244 marker1 = re.sub(r'{mark}', 'END', marker)
245 if present and block:
246 # Escape seqeuences like '\n' need to be handled in Ansible 1.x
247 if ANSIBLE_VERSION.startswith('1.'):
248 block = re.sub('', block, '')
249 blocklines = [marker0] + block.splitlines() + [marker1]
250 else:
251 blocklines = []
252
253 n0 = n1 = None
254 for i, line in enumerate(lines):
255 if line.startswith(marker0):
256 n0 = i
257 if line.startswith(marker1):
258 n1 = i
259
260 if None in (n0, n1):
261 n0 = None
262 if insertre is not None:
263 for i, line in enumerate(lines):
264 if insertre.search(line):
265 n0 = i
266 if n0 is None:
267 n0 = len(lines)
268 elif insertafter is not None:
269 n0 += 1
270 elif insertbefore is not None:
271 n0 = 0 # insertbefore=BOF
272 else:
273 n0 = len(lines) # insertafter=EOF
274 elif n0 < n1:
275 lines[n0:n1+1] = []
276 else:
277 lines[n1:n0+1] = []
278 n0 = n1
279
280 lines[n0:n0] = blocklines
281
282 if lines:
283 result = '\n'.join(lines)+'\n'
284 else:
285 result = ''
286 if original == result:
287 msg = ''
288 changed = False
289 elif original is None:
290 msg = 'File created'
291 changed = True
292 elif not blocklines:
293 msg = 'Block removed'
294 changed = True
295 else:
296 msg = 'Block inserted'
297 changed = True
298
299 if changed and not module.check_mode:
300 if module.boolean(params['backup']) and os.path.exists(dest):
301 module.backup_local(dest)
302 write_changes(module, result, dest)
303
304 msg, changed = check_file_attrs(module, changed, msg)
305 module.exit_json(changed=changed, msg=msg)
306
307 # import module snippets
308 from ansible.module_utils.basic import *
309 from ansible.module_utils.splitter import *
310 if __name__ == '__main__':
311 main()
312
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/files/blockinfile.py b/files/blockinfile.py
--- a/files/blockinfile.py
+++ b/files/blockinfile.py
@@ -144,10 +144,10 @@
block: |
{{item.name}} {{item.ip}}
marker: "# {mark} ANSIBLE MANAGED BLOCK {{item.name}}"
- with_items:
- - { name: host1, ip: 10.10.1.10 }
- - { name: host2, ip: 10.10.1.11 }
- - { name: host3, ip: 10.10.1.12 }
+ with_items:
+ - { name: host1, ip: 10.10.1.10 }
+ - { name: host2, ip: 10.10.1.11 }
+ - { name: host3, ip: 10.10.1.12 }
"""
| {"golden_diff": "diff --git a/files/blockinfile.py b/files/blockinfile.py\n--- a/files/blockinfile.py\n+++ b/files/blockinfile.py\n@@ -144,10 +144,10 @@\n block: |\n {{item.name}} {{item.ip}}\n marker: \"# {mark} ANSIBLE MANAGED BLOCK {{item.name}}\"\n- with_items:\n- - { name: host1, ip: 10.10.1.10 }\n- - { name: host2, ip: 10.10.1.11 }\n- - { name: host3, ip: 10.10.1.12 }\n+ with_items:\n+ - { name: host1, ip: 10.10.1.10 }\n+ - { name: host2, ip: 10.10.1.11 }\n+ - { name: host3, ip: 10.10.1.12 }\n \"\"\"\n", "issue": "Blockinfile module : Documentation mistake\n##### Issue Type:\n- Documentation Report\n##### Plugin Name:\n\nBlockinfile module\n##### Ansible Version:\n\n```\n$ ansible --version\nansible 2.0.1.0\n config file = /data/ansible/ansible.cfg\n configured module search path = Default w/o overrides\n```\n##### Ansible Configuration:\n\nn.a.\n##### Environment:\n\nn.a\n##### Summary:\n\nWrong spacing in official documentation\n##### Steps To Reproduce:\n\n```\nhttps://docs.ansible.com/ansible/blockinfile_module.html\nChapter Example, last example using < with_items >\n```\n##### Expected Results:\n\nExpression with_items: should be level with blockinfile:\n##### Actual Results:\n\n```\nExample does not work as expression < with_items: > is not level with <blockinfile:> \n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2014, 2015 YAEGASHI Takeshi <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nimport re\nimport os\nimport tempfile\n\nDOCUMENTATION = \"\"\"\n---\nmodule: blockinfile\nauthor:\n - 'YAEGASHI Takeshi (@yaegashi)'\nextends_documentation_fragment:\n - files\n - validate\nshort_description: Insert/update/remove a text block\n surrounded by marker lines.\nversion_added: '2.0'\ndescription:\n - This module will insert/update/remove a block of multi-line text\n surrounded by customizable marker lines.\nnotes:\n - This module supports check mode.\n - When using 'with_*' loops be aware that if you do not set a unique mark the block will be overwritten on each iteration.\noptions:\n dest:\n aliases: [ name, destfile ]\n required: true\n description:\n - The file to modify.\n state:\n required: false\n choices: [ present, absent ]\n default: present\n description:\n - Whether the block should be there or not.\n marker:\n required: false\n default: '# {mark} ANSIBLE MANAGED BLOCK'\n description:\n - The marker line template.\n \"{mark}\" will be replaced with \"BEGIN\" or \"END\".\n block:\n aliases: [ content ]\n required: false\n default: ''\n description:\n - The text to insert inside the marker lines.\n If it's missing or an empty string,\n the block will be removed as if C(state) were specified to C(absent).\n insertafter:\n required: false\n default: EOF\n description:\n - If specified, the block will be inserted after the last match of\n specified regular expression. 
A special value is available; C(EOF) for\n inserting the block at the end of the file. If specified regular\n expresion has no matches, C(EOF) will be used instead.\n choices: [ 'EOF', '*regex*' ]\n insertbefore:\n required: false\n default: None\n description:\n - If specified, the block will be inserted before the last match of\n specified regular expression. A special value is available; C(BOF) for\n inserting the block at the beginning of the file. If specified regular\n expresion has no matches, the block will be inserted at the end of the\n file.\n choices: [ 'BOF', '*regex*' ]\n create:\n required: false\n default: 'no'\n choices: [ 'yes', 'no' ]\n description:\n - Create a new file if it doesn't exist.\n backup:\n required: false\n default: 'no'\n choices: [ 'yes', 'no' ]\n description:\n - Create a backup file including the timestamp information so you can\n get the original file back if you somehow clobbered it incorrectly.\n follow:\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n description:\n - 'This flag indicates that filesystem links, if they exist, should be followed.'\n version_added: \"2.1\"\n\"\"\"\n\nEXAMPLES = r\"\"\"\n- name: insert/update \"Match User\" configuation block in /etc/ssh/sshd_config\n blockinfile:\n dest: /etc/ssh/sshd_config\n block: |\n Match User ansible-agent\n PasswordAuthentication no\n\n- name: insert/update eth0 configuration stanza in /etc/network/interfaces\n (it might be better to copy files into /etc/network/interfaces.d/)\n blockinfile:\n dest: /etc/network/interfaces\n block: |\n iface eth0 inet static\n address 192.168.0.1\n netmask 255.255.255.0\n\n- name: insert/update HTML surrounded by custom markers after <body> line\n blockinfile:\n dest: /var/www/html/index.html\n marker: \"<!-- {mark} ANSIBLE MANAGED BLOCK -->\"\n insertafter: \"<body>\"\n content: |\n <h1>Welcome to {{ansible_hostname}}</h1>\n <p>Last updated on {{ansible_date_time.iso8601}}</p>\n\n- name: remove HTML as well as surrounding markers\n blockinfile:\n dest: /var/www/html/index.html\n marker: \"<!-- {mark} ANSIBLE MANAGED BLOCK -->\"\n content: \"\"\n\n- name: insert/update \"Match User\" configuation block in /etc/ssh/sshd_config\n blockinfile:\n dest: /etc/hosts\n block: |\n {{item.name}} {{item.ip}}\n marker: \"# {mark} ANSIBLE MANAGED BLOCK {{item.name}}\"\n with_items:\n - { name: host1, ip: 10.10.1.10 }\n - { name: host2, ip: 10.10.1.11 }\n - { name: host3, ip: 10.10.1.12 }\n\"\"\"\n\n\ndef write_changes(module, contents, dest):\n\n tmpfd, tmpfile = tempfile.mkstemp()\n f = os.fdopen(tmpfd, 'wb')\n f.write(contents)\n f.close()\n\n validate = module.params.get('validate', None)\n valid = not validate\n if validate:\n if \"%s\" not in validate:\n module.fail_json(msg=\"validate must contain %%s: %s\" % (validate))\n (rc, out, err) = module.run_command(validate % tmpfile)\n valid = rc == 0\n if rc != 0:\n module.fail_json(msg='failed to validate: '\n 'rc:%s error:%s' % (rc, err))\n if valid:\n module.atomic_move(tmpfile, dest)\n\n\ndef check_file_attrs(module, changed, message):\n\n file_args = module.load_file_common_arguments(module.params)\n if module.set_file_attributes_if_different(file_args, False):\n\n if changed:\n message += \" and \"\n changed = True\n message += \"ownership, perms or SE linux context changed\"\n\n return message, changed\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n dest=dict(required=True, aliases=['name', 'destfile']),\n state=dict(default='present', choices=['absent', 'present']),\n 
marker=dict(default='# {mark} ANSIBLE MANAGED BLOCK', type='str'),\n block=dict(default='', type='str', aliases=['content']),\n insertafter=dict(default=None),\n insertbefore=dict(default=None),\n create=dict(default=False, type='bool'),\n backup=dict(default=False, type='bool'),\n validate=dict(default=None, type='str'),\n ),\n mutually_exclusive=[['insertbefore', 'insertafter']],\n add_file_common_args=True,\n supports_check_mode=True\n )\n\n params = module.params\n dest = os.path.expanduser(params['dest'])\n if module.boolean(params.get('follow', None)):\n dest = os.path.realpath(dest)\n\n if os.path.isdir(dest):\n module.fail_json(rc=256,\n msg='Destination %s is a directory !' % dest)\n\n if not os.path.exists(dest):\n if not module.boolean(params['create']):\n module.fail_json(rc=257,\n msg='Destination %s does not exist !' % dest)\n original = None\n lines = []\n else:\n f = open(dest, 'rb')\n original = f.read()\n f.close()\n lines = original.splitlines()\n\n insertbefore = params['insertbefore']\n insertafter = params['insertafter']\n block = params['block']\n marker = params['marker']\n present = params['state'] == 'present'\n\n if insertbefore is None and insertafter is None:\n insertafter = 'EOF'\n\n if insertafter not in (None, 'EOF'):\n insertre = re.compile(insertafter)\n elif insertbefore not in (None, 'BOF'):\n insertre = re.compile(insertbefore)\n else:\n insertre = None\n\n marker0 = re.sub(r'{mark}', 'BEGIN', marker)\n marker1 = re.sub(r'{mark}', 'END', marker)\n if present and block:\n # Escape seqeuences like '\\n' need to be handled in Ansible 1.x\n if ANSIBLE_VERSION.startswith('1.'):\n block = re.sub('', block, '')\n blocklines = [marker0] + block.splitlines() + [marker1]\n else:\n blocklines = []\n\n n0 = n1 = None\n for i, line in enumerate(lines):\n if line.startswith(marker0):\n n0 = i\n if line.startswith(marker1):\n n1 = i\n\n if None in (n0, n1):\n n0 = None\n if insertre is not None:\n for i, line in enumerate(lines):\n if insertre.search(line):\n n0 = i\n if n0 is None:\n n0 = len(lines)\n elif insertafter is not None:\n n0 += 1\n elif insertbefore is not None:\n n0 = 0 # insertbefore=BOF\n else:\n n0 = len(lines) # insertafter=EOF\n elif n0 < n1:\n lines[n0:n1+1] = []\n else:\n lines[n1:n0+1] = []\n n0 = n1\n\n lines[n0:n0] = blocklines\n\n if lines:\n result = '\\n'.join(lines)+'\\n'\n else:\n result = ''\n if original == result:\n msg = ''\n changed = False\n elif original is None:\n msg = 'File created'\n changed = True\n elif not blocklines:\n msg = 'Block removed'\n changed = True\n else:\n msg = 'Block inserted'\n changed = True\n\n if changed and not module.check_mode:\n if module.boolean(params['backup']) and os.path.exists(dest):\n module.backup_local(dest)\n write_changes(module, result, dest)\n\n msg, changed = check_file_attrs(module, changed, msg)\n module.exit_json(changed=changed, msg=msg)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.splitter import *\nif __name__ == '__main__':\n main()\n", "path": "files/blockinfile.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2014, 2015 YAEGASHI Takeshi <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be 
useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nimport re\nimport os\nimport tempfile\n\nDOCUMENTATION = \"\"\"\n---\nmodule: blockinfile\nauthor:\n - 'YAEGASHI Takeshi (@yaegashi)'\nextends_documentation_fragment:\n - files\n - validate\nshort_description: Insert/update/remove a text block\n surrounded by marker lines.\nversion_added: '2.0'\ndescription:\n - This module will insert/update/remove a block of multi-line text\n surrounded by customizable marker lines.\nnotes:\n - This module supports check mode.\n - When using 'with_' loops be aware that if you do not set a unique mark the block will be overwritten on each iteration.\noptions:\n dest:\n aliases: [ name, destfile ]\n required: true\n description:\n - The file to modify.\n state:\n required: false\n choices: [ present, absent ]\n default: present\n description:\n - Whether the block should be there or not.\n marker:\n required: false\n default: '# {mark} ANSIBLE MANAGED BLOCK'\n description:\n - The marker line template.\n \"{mark}\" will be replaced with \"BEGIN\" or \"END\".\n block:\n aliases: [ content ]\n required: false\n default: ''\n description:\n - The text to insert inside the marker lines.\n If it's missing or an empty string,\n the block will be removed as if C(state) were specified to C(absent).\n insertafter:\n required: false\n default: EOF\n description:\n - If specified, the block will be inserted after the last match of\n specified regular expression. A special value is available; C(EOF) for\n inserting the block at the end of the file. If specified regular\n expresion has no matches, C(EOF) will be used instead.\n choices: [ 'EOF', '*regex*' ]\n insertbefore:\n required: false\n default: None\n description:\n - If specified, the block will be inserted before the last match of\n specified regular expression. A special value is available; C(BOF) for\n inserting the block at the beginning of the file. 
If specified regular\n expresion has no matches, the block will be inserted at the end of the\n file.\n choices: [ 'BOF', '*regex*' ]\n create:\n required: false\n default: 'no'\n choices: [ 'yes', 'no' ]\n description:\n - Create a new file if it doesn't exist.\n backup:\n required: false\n default: 'no'\n choices: [ 'yes', 'no' ]\n description:\n - Create a backup file including the timestamp information so you can\n get the original file back if you somehow clobbered it incorrectly.\n follow:\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n description:\n - 'This flag indicates that filesystem links, if they exist, should be followed.'\n version_added: \"2.1\"\n\"\"\"\n\nEXAMPLES = r\"\"\"\n- name: insert/update \"Match User\" configuation block in /etc/ssh/sshd_config\n blockinfile:\n dest: /etc/ssh/sshd_config\n block: |\n Match User ansible-agent\n PasswordAuthentication no\n\n- name: insert/update eth0 configuration stanza in /etc/network/interfaces\n (it might be better to copy files into /etc/network/interfaces.d/)\n blockinfile:\n dest: /etc/network/interfaces\n block: |\n iface eth0 inet static\n address 192.168.0.1\n netmask 255.255.255.0\n\n- name: insert/update HTML surrounded by custom markers after <body> line\n blockinfile:\n dest: /var/www/html/index.html\n marker: \"<!-- {mark} ANSIBLE MANAGED BLOCK -->\"\n insertafter: \"<body>\"\n content: |\n <h1>Welcome to {{ansible_hostname}}</h1>\n <p>Last updated on {{ansible_date_time.iso8601}}</p>\n\n- name: remove HTML as well as surrounding markers\n blockinfile:\n dest: /var/www/html/index.html\n marker: \"<!-- {mark} ANSIBLE MANAGED BLOCK -->\"\n content: \"\"\n\n- name: insert/update \"Match User\" configuation block in /etc/ssh/sshd_config\n blockinfile:\n dest: /etc/hosts\n block: |\n {{item.name}} {{item.ip}}\n marker: \"# {mark} ANSIBLE MANAGED BLOCK {{item.name}}\"\n with_items:\n - { name: host1, ip: 10.10.1.10 }\n - { name: host2, ip: 10.10.1.11 }\n - { name: host3, ip: 10.10.1.12 }\n\"\"\"\n\n\ndef write_changes(module, contents, dest):\n\n tmpfd, tmpfile = tempfile.mkstemp()\n f = os.fdopen(tmpfd, 'wb')\n f.write(contents)\n f.close()\n\n validate = module.params.get('validate', None)\n valid = not validate\n if validate:\n if \"%s\" not in validate:\n module.fail_json(msg=\"validate must contain %%s: %s\" % (validate))\n (rc, out, err) = module.run_command(validate % tmpfile)\n valid = rc == 0\n if rc != 0:\n module.fail_json(msg='failed to validate: '\n 'rc:%s error:%s' % (rc, err))\n if valid:\n module.atomic_move(tmpfile, dest)\n\n\ndef check_file_attrs(module, changed, message):\n\n file_args = module.load_file_common_arguments(module.params)\n if module.set_file_attributes_if_different(file_args, False):\n\n if changed:\n message += \" and \"\n changed = True\n message += \"ownership, perms or SE linux context changed\"\n\n return message, changed\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n dest=dict(required=True, aliases=['name', 'destfile']),\n state=dict(default='present', choices=['absent', 'present']),\n marker=dict(default='# {mark} ANSIBLE MANAGED BLOCK', type='str'),\n block=dict(default='', type='str', aliases=['content']),\n insertafter=dict(default=None),\n insertbefore=dict(default=None),\n create=dict(default=False, type='bool'),\n backup=dict(default=False, type='bool'),\n validate=dict(default=None, type='str'),\n ),\n mutually_exclusive=[['insertbefore', 'insertafter']],\n add_file_common_args=True,\n supports_check_mode=True\n )\n\n params = 
module.params\n dest = os.path.expanduser(params['dest'])\n if module.boolean(params.get('follow', None)):\n dest = os.path.realpath(dest)\n\n if os.path.isdir(dest):\n module.fail_json(rc=256,\n msg='Destination %s is a directory !' % dest)\n\n if not os.path.exists(dest):\n if not module.boolean(params['create']):\n module.fail_json(rc=257,\n msg='Destination %s does not exist !' % dest)\n original = None\n lines = []\n else:\n f = open(dest, 'rb')\n original = f.read()\n f.close()\n lines = original.splitlines()\n\n insertbefore = params['insertbefore']\n insertafter = params['insertafter']\n block = params['block']\n marker = params['marker']\n present = params['state'] == 'present'\n\n if insertbefore is None and insertafter is None:\n insertafter = 'EOF'\n\n if insertafter not in (None, 'EOF'):\n insertre = re.compile(insertafter)\n elif insertbefore not in (None, 'BOF'):\n insertre = re.compile(insertbefore)\n else:\n insertre = None\n\n marker0 = re.sub(r'{mark}', 'BEGIN', marker)\n marker1 = re.sub(r'{mark}', 'END', marker)\n if present and block:\n # Escape seqeuences like '\\n' need to be handled in Ansible 1.x\n if ANSIBLE_VERSION.startswith('1.'):\n block = re.sub('', block, '')\n blocklines = [marker0] + block.splitlines() + [marker1]\n else:\n blocklines = []\n\n n0 = n1 = None\n for i, line in enumerate(lines):\n if line.startswith(marker0):\n n0 = i\n if line.startswith(marker1):\n n1 = i\n\n if None in (n0, n1):\n n0 = None\n if insertre is not None:\n for i, line in enumerate(lines):\n if insertre.search(line):\n n0 = i\n if n0 is None:\n n0 = len(lines)\n elif insertafter is not None:\n n0 += 1\n elif insertbefore is not None:\n n0 = 0 # insertbefore=BOF\n else:\n n0 = len(lines) # insertafter=EOF\n elif n0 < n1:\n lines[n0:n1+1] = []\n else:\n lines[n1:n0+1] = []\n n0 = n1\n\n lines[n0:n0] = blocklines\n\n if lines:\n result = '\\n'.join(lines)+'\\n'\n else:\n result = ''\n if original == result:\n msg = ''\n changed = False\n elif original is None:\n msg = 'File created'\n changed = True\n elif not blocklines:\n msg = 'Block removed'\n changed = True\n else:\n msg = 'Block inserted'\n changed = True\n\n if changed and not module.check_mode:\n if module.boolean(params['backup']) and os.path.exists(dest):\n module.backup_local(dest)\n write_changes(module, result, dest)\n\n msg, changed = check_file_attrs(module, changed, msg)\n module.exit_json(changed=changed, msg=msg)\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.splitter import *\nif __name__ == '__main__':\n main()\n", "path": "files/blockinfile.py"}]} | 3,683 | 223 |
gh_patches_debug_32969 | rasdani/github-patches | git_diff | tough-dev-school__education-backend-20 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Triggers do not work for bundles («Не работают триггеры для бандлов»)
https://sentry.io/organizations/fedor-borshev/issues/1403243325/?project=1807512&query=is%3Aunresolved
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/orders/models.py`
Content:
```
1 from typing import Optional
2
3 from django.utils import timezone
4 from django.utils.translation import ugettext_lazy as _
5
6 from app.models import DefaultQuerySet, TimestampedModel, models
7 from orders.signals import order_got_shipped
8
9
10 class ItemField(models.ForeignKey):
11 """This is a simple replacement for the ContentType framework -- fields of this type
12 are fields linked to items
13 """
14 def __init__(self, *args, **kwargs):
15 self._is_item = True
16 super().__init__(*args, **kwargs)
17
18
19 class UnknownItemException(Exception):
20 pass
21
22
23 class OrderQuerySet(DefaultQuerySet):
24 def paid(self, invert=False):
25 return self.filter(paid__isnull=invert)
26
27
28 class Order(TimestampedModel):
29 objects = OrderQuerySet.as_manager() # type: OrderQuerySet
30
31 user = models.ForeignKey('users.User', on_delete=models.PROTECT)
32 price = models.DecimalField(max_digits=9, decimal_places=2)
33
34 paid = models.DateTimeField(
35 _('Date when order got paid'),
36 null=True, blank=True,
37 help_text=_('If set during creation, order automaticaly gets shipped'),
38 )
39 shipped = models.DateTimeField(_('Date when order was shipped'), null=True, blank=True)
40
41 course = ItemField('courses.Course', null=True, blank=True, on_delete=models.PROTECT)
42 record = ItemField('courses.Record', null=True, blank=True, on_delete=models.PROTECT)
43 bundle = ItemField('courses.Bundle', null=True, blank=True, on_delete=models.PROTECT)
44
45 class Meta:
46 ordering = ['-id']
47 verbose_name = _('Order')
48 verbose_name_plural = _('Orders')
49
50 def __str__(self):
51 return f'Order #{self.pk}'
52
53 @property
54 def item(self):
55 """Find the attached item. Simple replacement for ContentType framework
56 """
57 for field in self.__class__._meta.get_fields():
58 if getattr(field, '_is_item', False):
59 if getattr(self, f'{field.name}_id', None) is not None:
60 return getattr(self, field.name)
61
62 @classmethod
63 def get_item_foreignkey(cls, item) -> Optional[models.fields.Field]:
64 """
65 Given an item model, returns the ForeignKey to it"""
66 for field in cls._meta.get_fields():
67 if getattr(field, '_is_item', False):
68 if field.related_model == item.__class__:
69 return field.name
70
71 def set_item(self, item):
72 foreign_key = self.__class__.get_item_foreignkey(item)
73 if foreign_key is not None:
74 setattr(self, foreign_key, item)
75 return
76
77 raise UnknownItemException('There is not foreignKey for {}'.format(item.__class__))
78
79 def set_paid(self):
80 is_already_paid = self.paid is not None
81
82 self.paid = timezone.now()
83
84 self.save()
85
86 if not is_already_paid and self.item is not None:
87 self.ship()
88
89 def ship(self):
90 """Ship the order. Better call it asynchronously"""
91 self.item.ship(to=self.user)
92
93 self.shipped = timezone.now()
94
95 self.save()
96
97 order_got_shipped.send(
98 sender=self.__class__,
99 order=self,
100 )
101
```
Path: `src/courses/models.py`
Content:
```
1 from urllib.parse import urljoin
2
3 from django.conf import settings
4 from django.utils.translation import ugettext_lazy as _
5
6 from app.models import TimestampedModel, models
7 from app.s3 import AppS3
8 from shipping.mixins import Shippable
9
10
11 class Course(Shippable, TimestampedModel):
12 name = models.CharField(max_length=255)
13 name_genitive = models.CharField(_('Genitive name'), max_length=255, help_text='«мастер-класса о TDD». К примеру для записей.')
14 name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='«посещение мастер-класса по TDD» или «Доступ к записи курсов кройки и шитья»')
15 full_name = models.CharField(
16 _('Full name for letters'), max_length=255,
17 help_text='Билет на мастер-класс о TDD или «запись курсов кройки и шитья»',
18 )
19 slug = models.SlugField()
20 clickmeeting_room_url = models.URLField(_('Clickmeeting room URL'), null=True, blank=True, help_text=_('If set, every user who purcashes this course gets invited'))
21 template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))
22
23 class Meta:
24 ordering = ['-id']
25 verbose_name = _('Course')
26 verbose_name_plural = _('Courses')
27
28 def get_absolute_url(self):
29 return urljoin(settings.FRONTEND_URL, '/'.join(['courses', self.slug, '']))
30
31
32 class Record(Shippable, TimestampedModel):
33 course = models.ForeignKey(Course, on_delete=models.CASCADE)
34 name = models.CharField(max_length=255)
35 name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='«Доступ к записи курсов кройки и шитья»')
36 full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='«Запись мастер-класса о TDD»')
37 slug = models.SlugField()
38 full_name = models.CharField(
39 _('Full name for letters'), max_length=255,
40 help_text='«Запись мастер-класса о TDD»',
41 )
42
43 s3_object_id = models.CharField(max_length=512)
44 template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))
45
46 class Meta:
47 ordering = ['-id']
48 verbose_name = _('Record')
49 verbose_name_plural = _('Records')
50
51 @property
52 def name_genitive(self):
53 return self.course.name_genitive
54
55 def get_url(self, expires: int = 3 * 24 * 60 * 60):
56 return AppS3().get_presigned_url(self.s3_object_id, expires=expires)
57
58 def __str__(self):
59 return f'Запись {self.name_genitive}'
60
61 def get_absolute_url(self):
62 return self.course.get_absolute_url()
63
64
65 class Bundle(Shippable, TimestampedModel):
66 records = models.ManyToManyField('courses.Record')
67 courses = models.ManyToManyField('courses.Course')
68
69 name = models.CharField(max_length=255)
70 name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='«Доступ к записи курсов кройки и шитья»')
71 full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='«Запись мастер-класса о TDD»')
72 slug = models.SlugField()
73 full_name = models.CharField(
74 _('Full name for letters'), max_length=255,
75 help_text='«Запись мастер-класса о TDD»',
76 )
77
78 class Meta:
79 ordering = ['-id']
80 verbose_name = _('Bundle')
81 verbose_name_plural = _('Bundles')
82
83 def ship(self, to):
84 for record in self.records.iterator():
85 record.ship(to=to)
86
87 for course in self.courses.iterator():
88 course.ship(to=to)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/courses/models.py b/src/courses/models.py
--- a/src/courses/models.py
+++ b/src/courses/models.py
@@ -80,6 +80,9 @@
verbose_name = _('Bundle')
verbose_name_plural = _('Bundles')
+ def get_absolute_url(self):
+ return urljoin(settings.FRONTEND_URL, '/'.join(['bundles', self.slug, '']))
+
def ship(self, to):
for record in self.records.iterator():
record.ship(to=to)
diff --git a/src/orders/models.py b/src/orders/models.py
--- a/src/orders/models.py
+++ b/src/orders/models.py
@@ -1,4 +1,4 @@
-from typing import Optional
+from typing import Iterable, Optional
from django.utils import timezone
from django.utils.translation import ugettext_lazy as _
@@ -59,18 +59,28 @@
if getattr(self, f'{field.name}_id', None) is not None:
return getattr(self, field.name)
+ @classmethod
+ def _iterate_items(cls) -> Iterable[models.fields.Field]:
+ for field in cls._meta.get_fields():
+ if getattr(field, '_is_item', False):
+ yield field
+
@classmethod
def get_item_foreignkey(cls, item) -> Optional[models.fields.Field]:
"""
Given an item model, returns the ForeignKey to it"""
- for field in cls._meta.get_fields():
- if getattr(field, '_is_item', False):
- if field.related_model == item.__class__:
- return field.name
+ for field in cls._iterate_items():
+ if field.related_model == item.__class__:
+ return field.name
+
+ def reset_items(self):
+ for field in self._iterate_items():
+ setattr(self, field.name, None)
def set_item(self, item):
foreign_key = self.__class__.get_item_foreignkey(item)
if foreign_key is not None:
+ self.reset_items()
setattr(self, foreign_key, item)
return
| {"golden_diff": "diff --git a/src/courses/models.py b/src/courses/models.py\n--- a/src/courses/models.py\n+++ b/src/courses/models.py\n@@ -80,6 +80,9 @@\n verbose_name = _('Bundle')\n verbose_name_plural = _('Bundles')\n \n+ def get_absolute_url(self):\n+ return urljoin(settings.FRONTEND_URL, '/'.join(['bundles', self.slug, '']))\n+\n def ship(self, to):\n for record in self.records.iterator():\n record.ship(to=to)\ndiff --git a/src/orders/models.py b/src/orders/models.py\n--- a/src/orders/models.py\n+++ b/src/orders/models.py\n@@ -1,4 +1,4 @@\n-from typing import Optional\n+from typing import Iterable, Optional\n \n from django.utils import timezone\n from django.utils.translation import ugettext_lazy as _\n@@ -59,18 +59,28 @@\n if getattr(self, f'{field.name}_id', None) is not None:\n return getattr(self, field.name)\n \n+ @classmethod\n+ def _iterate_items(cls) -> Iterable[models.fields.Field]:\n+ for field in cls._meta.get_fields():\n+ if getattr(field, '_is_item', False):\n+ yield field\n+\n @classmethod\n def get_item_foreignkey(cls, item) -> Optional[models.fields.Field]:\n \"\"\"\n Given an item model, returns the ForeignKey to it\"\"\"\n- for field in cls._meta.get_fields():\n- if getattr(field, '_is_item', False):\n- if field.related_model == item.__class__:\n- return field.name\n+ for field in cls._iterate_items():\n+ if field.related_model == item.__class__:\n+ return field.name\n+\n+ def reset_items(self):\n+ for field in self._iterate_items():\n+ setattr(self, field.name, None)\n \n def set_item(self, item):\n foreign_key = self.__class__.get_item_foreignkey(item)\n if foreign_key is not None:\n+ self.reset_items()\n setattr(self, foreign_key, item)\n return\n", "issue": "\u041d\u0435 \u0440\u0430\u0431\u043e\u0442\u0430\u044e\u0442 \u0442\u0440\u0438\u0433\u0433\u0435\u0440\u044b \u0434\u043b\u044f \u0431\u0430\u043d\u0434\u043b\u043e\u0432\nhttps://sentry.io/organizations/fedor-borshev/issues/1403243325/?project=1807512&query=is%3Aunresolved\n", "before_files": [{"content": "from typing import Optional\n\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom app.models import DefaultQuerySet, TimestampedModel, models\nfrom orders.signals import order_got_shipped\n\n\nclass ItemField(models.ForeignKey):\n \"\"\"This is a simple replacement for the ContentType framework -- fields of this type\n are fields linked to items\n \"\"\"\n def __init__(self, *args, **kwargs):\n self._is_item = True\n super().__init__(*args, **kwargs)\n\n\nclass UnknownItemException(Exception):\n pass\n\n\nclass OrderQuerySet(DefaultQuerySet):\n def paid(self, invert=False):\n return self.filter(paid__isnull=invert)\n\n\nclass Order(TimestampedModel):\n objects = OrderQuerySet.as_manager() # type: OrderQuerySet\n\n user = models.ForeignKey('users.User', on_delete=models.PROTECT)\n price = models.DecimalField(max_digits=9, decimal_places=2)\n\n paid = models.DateTimeField(\n _('Date when order got paid'),\n null=True, blank=True,\n help_text=_('If set during creation, order automaticaly gets shipped'),\n )\n shipped = models.DateTimeField(_('Date when order was shipped'), null=True, blank=True)\n\n course = ItemField('courses.Course', null=True, blank=True, on_delete=models.PROTECT)\n record = ItemField('courses.Record', null=True, blank=True, on_delete=models.PROTECT)\n bundle = ItemField('courses.Bundle', null=True, blank=True, on_delete=models.PROTECT)\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Order')\n verbose_name_plural = 
_('Orders')\n\n def __str__(self):\n return f'Order #{self.pk}'\n\n @property\n def item(self):\n \"\"\"Find the attached item. Simple replacement for ContentType framework\n \"\"\"\n for field in self.__class__._meta.get_fields():\n if getattr(field, '_is_item', False):\n if getattr(self, f'{field.name}_id', None) is not None:\n return getattr(self, field.name)\n\n @classmethod\n def get_item_foreignkey(cls, item) -> Optional[models.fields.Field]:\n \"\"\"\n Given an item model, returns the ForeignKey to it\"\"\"\n for field in cls._meta.get_fields():\n if getattr(field, '_is_item', False):\n if field.related_model == item.__class__:\n return field.name\n\n def set_item(self, item):\n foreign_key = self.__class__.get_item_foreignkey(item)\n if foreign_key is not None:\n setattr(self, foreign_key, item)\n return\n\n raise UnknownItemException('There is not foreignKey for {}'.format(item.__class__))\n\n def set_paid(self):\n is_already_paid = self.paid is not None\n\n self.paid = timezone.now()\n\n self.save()\n\n if not is_already_paid and self.item is not None:\n self.ship()\n\n def ship(self):\n \"\"\"Ship the order. Better call it asynchronously\"\"\"\n self.item.ship(to=self.user)\n\n self.shipped = timezone.now()\n\n self.save()\n\n order_got_shipped.send(\n sender=self.__class__,\n order=self,\n )\n", "path": "src/orders/models.py"}, {"content": "from urllib.parse import urljoin\n\nfrom django.conf import settings\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom app.models import TimestampedModel, models\nfrom app.s3 import AppS3\nfrom shipping.mixins import Shippable\n\n\nclass Course(Shippable, TimestampedModel):\n name = models.CharField(max_length=255)\n name_genitive = models.CharField(_('Genitive name'), max_length=255, help_text='\u00ab\u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb. 
\u041a \u043f\u0440\u0438\u043c\u0435\u0440\u0443 \u0434\u043b\u044f \u0437\u0430\u043f\u0438\u0441\u0435\u0439.')\n name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='\u00ab\u043f\u043e\u0441\u0435\u0449\u0435\u043d\u0438\u0435 \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043f\u043e TDD\u00bb \u0438\u043b\u0438 \u00ab\u0414\u043e\u0441\u0442\u0443\u043f \u043a \u0437\u0430\u043f\u0438\u0441\u0438 \u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb')\n full_name = models.CharField(\n _('Full name for letters'), max_length=255,\n help_text='\u0411\u0438\u043b\u0435\u0442 \u043d\u0430 \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441 \u043e TDD \u0438\u043b\u0438 \u00ab\u0437\u0430\u043f\u0438\u0441\u044c \u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb',\n )\n slug = models.SlugField()\n clickmeeting_room_url = models.URLField(_('Clickmeeting room URL'), null=True, blank=True, help_text=_('If set, every user who purcashes this course gets invited'))\n template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Course')\n verbose_name_plural = _('Courses')\n\n def get_absolute_url(self):\n return urljoin(settings.FRONTEND_URL, '/'.join(['courses', self.slug, '']))\n\n\nclass Record(Shippable, TimestampedModel):\n course = models.ForeignKey(Course, on_delete=models.CASCADE)\n name = models.CharField(max_length=255)\n name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='\u00ab\u0414\u043e\u0441\u0442\u0443\u043f \u043a \u0437\u0430\u043f\u0438\u0441\u0438 \u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb')\n full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb')\n slug = models.SlugField()\n full_name = models.CharField(\n _('Full name for letters'), max_length=255,\n help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb',\n )\n\n s3_object_id = models.CharField(max_length=512)\n template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Record')\n verbose_name_plural = _('Records')\n\n @property\n def name_genitive(self):\n return self.course.name_genitive\n\n def get_url(self, expires: int = 3 * 24 * 60 * 60):\n return AppS3().get_presigned_url(self.s3_object_id, expires=expires)\n\n def __str__(self):\n return f'\u0417\u0430\u043f\u0438\u0441\u044c {self.name_genitive}'\n\n def get_absolute_url(self):\n return self.course.get_absolute_url()\n\n\nclass Bundle(Shippable, TimestampedModel):\n records = models.ManyToManyField('courses.Record')\n courses = models.ManyToManyField('courses.Course')\n\n name = models.CharField(max_length=255)\n name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='\u00ab\u0414\u043e\u0441\u0442\u0443\u043f \u043a \u0437\u0430\u043f\u0438\u0441\u0438 
\u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb')\n full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb')\n slug = models.SlugField()\n full_name = models.CharField(\n _('Full name for letters'), max_length=255,\n help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb',\n )\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Bundle')\n verbose_name_plural = _('Bundles')\n\n def ship(self, to):\n for record in self.records.iterator():\n record.ship(to=to)\n\n for course in self.courses.iterator():\n course.ship(to=to)\n", "path": "src/courses/models.py"}], "after_files": [{"content": "from typing import Iterable, Optional\n\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom app.models import DefaultQuerySet, TimestampedModel, models\nfrom orders.signals import order_got_shipped\n\n\nclass ItemField(models.ForeignKey):\n \"\"\"This is a simple replacement for the ContentType framework -- fields of this type\n are fields linked to items\n \"\"\"\n def __init__(self, *args, **kwargs):\n self._is_item = True\n super().__init__(*args, **kwargs)\n\n\nclass UnknownItemException(Exception):\n pass\n\n\nclass OrderQuerySet(DefaultQuerySet):\n def paid(self, invert=False):\n return self.filter(paid__isnull=invert)\n\n\nclass Order(TimestampedModel):\n objects = OrderQuerySet.as_manager() # type: OrderQuerySet\n\n user = models.ForeignKey('users.User', on_delete=models.PROTECT)\n price = models.DecimalField(max_digits=9, decimal_places=2)\n\n paid = models.DateTimeField(\n _('Date when order got paid'),\n null=True, blank=True,\n help_text=_('If set during creation, order automaticaly gets shipped'),\n )\n shipped = models.DateTimeField(_('Date when order was shipped'), null=True, blank=True)\n\n course = ItemField('courses.Course', null=True, blank=True, on_delete=models.PROTECT)\n record = ItemField('courses.Record', null=True, blank=True, on_delete=models.PROTECT)\n bundle = ItemField('courses.Bundle', null=True, blank=True, on_delete=models.PROTECT)\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Order')\n verbose_name_plural = _('Orders')\n\n def __str__(self):\n return f'Order #{self.pk}'\n\n @property\n def item(self):\n \"\"\"Find the attached item. 
Simple replacement for ContentType framework\n \"\"\"\n for field in self.__class__._meta.get_fields():\n if getattr(field, '_is_item', False):\n if getattr(self, f'{field.name}_id', None) is not None:\n return getattr(self, field.name)\n\n @classmethod\n def _iterate_items(cls) -> Iterable[models.fields.Field]:\n for field in cls._meta.get_fields():\n if getattr(field, '_is_item', False):\n yield field\n\n @classmethod\n def get_item_foreignkey(cls, item) -> Optional[models.fields.Field]:\n \"\"\"\n Given an item model, returns the ForeignKey to it\"\"\"\n for field in cls._iterate_items():\n if field.related_model == item.__class__:\n return field.name\n\n def reset_items(self):\n for field in self._iterate_items():\n setattr(self, field.name, None)\n\n def set_item(self, item):\n foreign_key = self.__class__.get_item_foreignkey(item)\n if foreign_key is not None:\n self.reset_items()\n setattr(self, foreign_key, item)\n return\n\n raise UnknownItemException('There is not foreignKey for {}'.format(item.__class__))\n\n def set_paid(self):\n is_already_paid = self.paid is not None\n\n self.paid = timezone.now()\n\n self.save()\n\n if not is_already_paid and self.item is not None:\n self.ship()\n\n def ship(self):\n \"\"\"Ship the order. Better call it asynchronously\"\"\"\n self.item.ship(to=self.user)\n\n self.shipped = timezone.now()\n\n self.save()\n\n order_got_shipped.send(\n sender=self.__class__,\n order=self,\n )\n", "path": "src/orders/models.py"}, {"content": "from urllib.parse import urljoin\n\nfrom django.conf import settings\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom app.models import TimestampedModel, models\nfrom app.s3 import AppS3\nfrom shipping.mixins import Shippable\n\n\nclass Course(Shippable, TimestampedModel):\n name = models.CharField(max_length=255)\n name_genitive = models.CharField(_('Genitive name'), max_length=255, help_text='\u00ab\u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb. 
\u041a \u043f\u0440\u0438\u043c\u0435\u0440\u0443 \u0434\u043b\u044f \u0437\u0430\u043f\u0438\u0441\u0435\u0439.')\n name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='\u00ab\u043f\u043e\u0441\u0435\u0449\u0435\u043d\u0438\u0435 \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043f\u043e TDD\u00bb \u0438\u043b\u0438 \u00ab\u0414\u043e\u0441\u0442\u0443\u043f \u043a \u0437\u0430\u043f\u0438\u0441\u0438 \u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb')\n full_name = models.CharField(\n _('Full name for letters'), max_length=255,\n help_text='\u0411\u0438\u043b\u0435\u0442 \u043d\u0430 \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441 \u043e TDD \u0438\u043b\u0438 \u00ab\u0437\u0430\u043f\u0438\u0441\u044c \u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb',\n )\n slug = models.SlugField()\n clickmeeting_room_url = models.URLField(_('Clickmeeting room URL'), null=True, blank=True, help_text=_('If set, every user who purcashes this course gets invited'))\n template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Course')\n verbose_name_plural = _('Courses')\n\n def get_absolute_url(self):\n return urljoin(settings.FRONTEND_URL, '/'.join(['courses', self.slug, '']))\n\n\nclass Record(Shippable, TimestampedModel):\n course = models.ForeignKey(Course, on_delete=models.CASCADE)\n name = models.CharField(max_length=255)\n name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='\u00ab\u0414\u043e\u0441\u0442\u0443\u043f \u043a \u0437\u0430\u043f\u0438\u0441\u0438 \u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb')\n full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb')\n slug = models.SlugField()\n full_name = models.CharField(\n _('Full name for letters'), max_length=255,\n help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb',\n )\n\n s3_object_id = models.CharField(max_length=512)\n template_id = models.CharField(_('Mailjet template_id'), max_length=256, blank=True, null=True, help_text=_('Leave it blank for the default template'))\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Record')\n verbose_name_plural = _('Records')\n\n @property\n def name_genitive(self):\n return self.course.name_genitive\n\n def get_url(self, expires: int = 3 * 24 * 60 * 60):\n return AppS3().get_presigned_url(self.s3_object_id, expires=expires)\n\n def __str__(self):\n return f'\u0417\u0430\u043f\u0438\u0441\u044c {self.name_genitive}'\n\n def get_absolute_url(self):\n return self.course.get_absolute_url()\n\n\nclass Bundle(Shippable, TimestampedModel):\n records = models.ManyToManyField('courses.Record')\n courses = models.ManyToManyField('courses.Course')\n\n name = models.CharField(max_length=255)\n name_receipt = models.CharField(_('Name for receipts'), max_length=255, help_text='\u00ab\u0414\u043e\u0441\u0442\u0443\u043f \u043a \u0437\u0430\u043f\u0438\u0441\u0438 
\u043a\u0443\u0440\u0441\u043e\u0432 \u043a\u0440\u043e\u0439\u043a\u0438 \u0438 \u0448\u0438\u0442\u044c\u044f\u00bb')\n full_name = models.CharField(_('Full name for letters'), max_length=255, help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb')\n slug = models.SlugField()\n full_name = models.CharField(\n _('Full name for letters'), max_length=255,\n help_text='\u00ab\u0417\u0430\u043f\u0438\u0441\u044c \u043c\u0430\u0441\u0442\u0435\u0440-\u043a\u043b\u0430\u0441\u0441\u0430 \u043e TDD\u00bb',\n )\n\n class Meta:\n ordering = ['-id']\n verbose_name = _('Bundle')\n verbose_name_plural = _('Bundles')\n\n def get_absolute_url(self):\n return urljoin(settings.FRONTEND_URL, '/'.join(['bundles', self.slug, '']))\n\n def ship(self, to):\n for record in self.records.iterator():\n record.ship(to=to)\n\n for course in self.courses.iterator():\n course.ship(to=to)\n", "path": "src/courses/models.py"}]} | 2,329 | 452 |
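Editor's note on the record above: comparing its before_files and after_files, the fix adds an `_iterate_items()` helper and a `reset_items()` call inside `Order.set_item()`, and gives `Bundle` its own `get_absolute_url()`. The sketch below illustrates only the reset-before-set idea in plain Python; the class and attribute names are simplified stand-ins, not the project's actual Django fields.

```python
class Course: pass
class Record: pass
class Bundle: pass


class Order:
    # Stand-ins for the ForeignKey fields flagged with _is_item in the real model.
    ITEM_FIELDS = {"course": Course, "record": Record, "bundle": Bundle}

    def __init__(self):
        for name in self.ITEM_FIELDS:
            setattr(self, name, None)

    def reset_items(self):
        # Clear every item slot so an order never points at two items at once.
        for name in self.ITEM_FIELDS:
            setattr(self, name, None)

    def set_item(self, item):
        for name, cls in self.ITEM_FIELDS.items():
            if isinstance(item, cls):
                self.reset_items()  # the behaviour added in after_files
                setattr(self, name, item)
                return
        raise ValueError(f"no item field for {type(item).__name__}")
```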
gh_patches_debug_56924 | rasdani/github-patches | git_diff | Cloud-CV__EvalAI-697 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set default renderer to JSONRenderer in DRF backend
For reference: http://www.django-rest-framework.org/api-guide/renderers/#setting-the-renderers
--- END ISSUE ---
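Editor's note: per the DRF documentation linked above, a project-wide default renderer is declared through `DEFAULT_RENDERER_CLASSES` inside the `REST_FRAMEWORK` setting. A minimal fragment of what that looks like (a settings excerpt only, not the full EvalAI configuration):

```python
REST_FRAMEWORK = {
    # ... existing pagination, permission and throttling settings ...
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
    ),
}
```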
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `settings/common.py`
Content:
```
1 """
2 Django settings for evalai project.
3
4 Generated by 'django-admin startproject' using Django 1.10.2.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.10/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.10/ref/settings/
11 """
12
13 import datetime
14 import os
15 import sys
16
17 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
18 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
19 APPS_DIR = os.path.join(BASE_DIR, 'apps')
20
21 sys.path.append(APPS_DIR)
22
23 # Quick-start development settings - unsuitable for production
24 # See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
25
26 # SECURITY WARNING: keep the secret key used in production secret!
27 SECRET_KEY = os.environ.get('SECRET_KEY', 'random_secret_key')
28
29 # SECURITY WARNING: don't run with debug turned on in production!
30 DEBUG = True
31
32 ALLOWED_HOSTS = []
33
34
35 # Application definition
36
37 DEFAULT_APPS = [
38 'django.contrib.admin',
39 'django.contrib.auth',
40 'django.contrib.contenttypes',
41 'django.contrib.sessions',
42 'django.contrib.messages',
43 'django.contrib.staticfiles',
44 'django.contrib.sites',
45 ]
46
47 OUR_APPS = [
48 'accounts',
49 'analytics',
50 'base',
51 'challenges',
52 'hosts',
53 'jobs',
54 'participants',
55 'submissions',
56 'web',
57 ]
58
59 THIRD_PARTY_APPS = [
60 'allauth',
61 'allauth.account',
62 'corsheaders',
63 'rest_auth',
64 'rest_auth.registration',
65 'rest_framework.authtoken',
66 'rest_framework',
67 'rest_framework_docs',
68 'rest_framework_expiring_authtoken',
69 ]
70
71 INSTALLED_APPS = DEFAULT_APPS + OUR_APPS + THIRD_PARTY_APPS
72
73 MIDDLEWARE = [
74 'corsheaders.middleware.CorsMiddleware',
75 'django.middleware.security.SecurityMiddleware',
76 'django.contrib.sessions.middleware.SessionMiddleware',
77 'django.middleware.common.CommonMiddleware',
78 'django.middleware.csrf.CsrfViewMiddleware',
79 'django.contrib.auth.middleware.AuthenticationMiddleware',
80 'django.contrib.messages.middleware.MessageMiddleware',
81 'django.middleware.clickjacking.XFrameOptionsMiddleware',
82 ]
83
84 ROOT_URLCONF = 'evalai.urls'
85
86
87 TEMPLATES = [
88 {
89 'BACKEND': 'django.template.backends.django.DjangoTemplates',
90 'DIRS': [],
91 'APP_DIRS': True,
92 'OPTIONS': {
93 'context_processors': [
94 'django.template.context_processors.debug',
95 'django.template.context_processors.request',
96 'django.contrib.auth.context_processors.auth',
97 'django.contrib.messages.context_processors.messages',
98 ],
99 },
100 },
101 ]
102
103 WSGI_APPLICATION = 'evalai.wsgi.application'
104
105
106 # Password validation
107 # https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
108
109 AUTH_PASSWORD_VALIDATORS = [
110 {
111 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa
112 },
113 {
114 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa
115 },
116 {
117 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa
118 },
119 {
120 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa
121 },
122 ]
123
124
125 # Internationalization
126 # https://docs.djangoproject.com/en/1.10/topics/i18n/
127
128 LANGUAGE_CODE = 'en-us'
129
130 TIME_ZONE = 'UTC'
131
132 USE_I18N = True
133
134 USE_L10N = True
135
136 USE_TZ = True
137
138 # Static files (CSS, JavaScript, Images)
139 # https://docs.djangoproject.com/en/1.10/howto/static-files/
140
141 STATIC_URL = '/static/'
142
143 MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
144 MEDIA_URL = "/media/"
145
146 SITE_ID = 1
147
148 REST_FRAMEWORK = {
149 'DEFAULT_PAGINATION_CLASS': (
150 'rest_framework.pagination.LimitOffsetPagination'),
151 'PAGE_SIZE': 10,
152 'DEFAULT_PERMISSION_CLASSES': [
153 'rest_framework.permissions.IsAuthenticatedOrReadOnly'
154 ],
155 'DEFAULT_AUTHENTICATION_CLASSES': [
156 'rest_framework_expiring_authtoken.authentication.ExpiringTokenAuthentication',
157 ],
158 'TEST_REQUEST_DEFAULT_FORMAT': 'json',
159 'DEFAULT_THROTTLE_CLASSES': (
160 'rest_framework.throttling.AnonRateThrottle',
161 'rest_framework.throttling.UserRateThrottle'
162 ),
163 'DEFAULT_THROTTLE_RATES': {
164 'anon': '100/minute',
165 'user': '100/minute'
166 }
167 }
168
169 # ALLAUTH SETTINGS
170 ACCOUNT_EMAIL_REQUIRED = True
171 OLD_PASSWORD_FIELD_ENABLED = True
172
173 AUTHENTICATION_BACKENDS = (
174 # Needed to login by username in Django admin, regardless of `allauth`
175 'django.contrib.auth.backends.ModelBackend',
176 # `allauth` specific authentication methods, such as login by e-mail
177 'allauth.account.auth_backends.AuthenticationBackend',
178 )
179
180 # CORS Settings
181 CORS_ORIGIN_ALLOW_ALL = True
182
183 # REST Framework Expiring Tokens Configuration
184 EXPIRING_TOKEN_LIFESPAN = datetime.timedelta(days=7)
185
186 # Logging
187 LOGGING = {
188 'version': 1,
189 'disable_existing_loggers': False,
190 'root': {
191 'level': 'INFO',
192 'handlers': ['console'],
193 },
194 'filters': {
195 'require_debug_false': {
196 '()': 'django.utils.log.RequireDebugFalse',
197 },
198 'require_debug_true': {
199 '()': 'django.utils.log.RequireDebugTrue',
200 }
201 },
202 'formatters': {
203 'simple': {
204 'format': '[%(asctime)s] %(levelname)s %(message)s',
205 'datefmt': '%Y-%m-%d %H:%M:%S'
206 },
207 'verbose': {
208 'format': '[%(asctime)s] %(levelname)s %(module)s %(message)s',
209 'datefmt': '%Y-%m-%d %H:%M:%S'
210 }
211 },
212 'handlers': {
213 'console': {
214 'level': 'INFO',
215 'filters': ['require_debug_true'],
216 'class': 'logging.StreamHandler',
217 'formatter': 'simple'
218 },
219 'logfile': {
220 'level': 'DEBUG',
221 'class': 'logging.handlers.RotatingFileHandler',
222 'filename': "/tmp/logfile",
223 'maxBytes': 50000,
224 'backupCount': 10,
225 'formatter': 'verbose'
226 },
227 'mail_admins': {
228 'level': 'ERROR',
229 'class': 'django.utils.log.AdminEmailHandler',
230 'filters': ['require_debug_false'],
231 }
232 },
233 'loggers': {
234 'django': {
235 'handlers': ['console'],
236 'propagate': True,
237 },
238 'django.request': {
239 'handlers': ['mail_admins'],
240 'level': 'ERROR',
241 'propagate': False,
242 },
243 'django.security': {
244 'handlers': ['mail_admins'],
245 'level': 'ERROR',
246 'propagate': False,
247 },
248 'django.db.backends': {
249 'handlers': ['mail_admins'],
250 'level': 'ERROR',
251 'propagate': False,
252 }
253 }
254 }
255
256 CACHES = {
257 'default': {
258 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
259 }
260 }
261
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/settings/common.py b/settings/common.py
--- a/settings/common.py
+++ b/settings/common.py
@@ -163,7 +163,10 @@
'DEFAULT_THROTTLE_RATES': {
'anon': '100/minute',
'user': '100/minute'
- }
+ },
+ 'DEFAULT_RENDERER_CLASSES': (
+ 'rest_framework.renderers.JSONRenderer',
+ )
}
# ALLAUTH SETTINGS
| {"golden_diff": "diff --git a/settings/common.py b/settings/common.py\n--- a/settings/common.py\n+++ b/settings/common.py\n@@ -163,7 +163,10 @@\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '100/minute',\n 'user': '100/minute'\n- }\n+ },\n+ 'DEFAULT_RENDERER_CLASSES': (\n+ 'rest_framework.renderers.JSONRenderer',\n+ )\n }\n \n # ALLAUTH SETTINGS\n", "issue": "Set default renderer to JSONRenderer in DRF backend\nFor reference: http://www.django-rest-framework.org/api-guide/renderers/#setting-the-renderers\n", "before_files": [{"content": "\"\"\"\nDjango settings for evalai project.\n\nGenerated by 'django-admin startproject' using Django 1.10.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport datetime\nimport os\nimport sys\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nAPPS_DIR = os.path.join(BASE_DIR, 'apps')\n\nsys.path.append(APPS_DIR)\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ.get('SECRET_KEY', 'random_secret_key')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nDEFAULT_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.sites',\n]\n\nOUR_APPS = [\n 'accounts',\n 'analytics',\n 'base',\n 'challenges',\n 'hosts',\n 'jobs',\n 'participants',\n 'submissions',\n 'web',\n]\n\nTHIRD_PARTY_APPS = [\n 'allauth',\n 'allauth.account',\n 'corsheaders',\n 'rest_auth',\n 'rest_auth.registration',\n 'rest_framework.authtoken',\n 'rest_framework',\n 'rest_framework_docs',\n 'rest_framework_expiring_authtoken',\n]\n\nINSTALLED_APPS = DEFAULT_APPS + OUR_APPS + THIRD_PARTY_APPS\n\nMIDDLEWARE = [\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'evalai.urls'\n\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'evalai.wsgi.application'\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa\n },\n {\n 'NAME': 
'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\n\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nMEDIA_URL = \"/media/\"\n\nSITE_ID = 1\n\nREST_FRAMEWORK = {\n 'DEFAULT_PAGINATION_CLASS': (\n 'rest_framework.pagination.LimitOffsetPagination'),\n 'PAGE_SIZE': 10,\n 'DEFAULT_PERMISSION_CLASSES': [\n 'rest_framework.permissions.IsAuthenticatedOrReadOnly'\n ],\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 'rest_framework_expiring_authtoken.authentication.ExpiringTokenAuthentication',\n ],\n 'TEST_REQUEST_DEFAULT_FORMAT': 'json',\n 'DEFAULT_THROTTLE_CLASSES': (\n 'rest_framework.throttling.AnonRateThrottle',\n 'rest_framework.throttling.UserRateThrottle'\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '100/minute',\n 'user': '100/minute'\n }\n}\n\n# ALLAUTH SETTINGS\nACCOUNT_EMAIL_REQUIRED = True\nOLD_PASSWORD_FIELD_ENABLED = True\n\nAUTHENTICATION_BACKENDS = (\n # Needed to login by username in Django admin, regardless of `allauth`\n 'django.contrib.auth.backends.ModelBackend',\n # `allauth` specific authentication methods, such as login by e-mail\n 'allauth.account.auth_backends.AuthenticationBackend',\n)\n\n# CORS Settings\nCORS_ORIGIN_ALLOW_ALL = True\n\n# REST Framework Expiring Tokens Configuration\nEXPIRING_TOKEN_LIFESPAN = datetime.timedelta(days=7)\n\n# Logging\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'root': {\n 'level': 'INFO',\n 'handlers': ['console'],\n },\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse',\n },\n 'require_debug_true': {\n '()': 'django.utils.log.RequireDebugTrue',\n }\n },\n 'formatters': {\n 'simple': {\n 'format': '[%(asctime)s] %(levelname)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n },\n 'verbose': {\n 'format': '[%(asctime)s] %(levelname)s %(module)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n }\n },\n 'handlers': {\n 'console': {\n 'level': 'INFO',\n 'filters': ['require_debug_true'],\n 'class': 'logging.StreamHandler',\n 'formatter': 'simple'\n },\n 'logfile': {\n 'level': 'DEBUG',\n 'class': 'logging.handlers.RotatingFileHandler',\n 'filename': \"/tmp/logfile\",\n 'maxBytes': 50000,\n 'backupCount': 10,\n 'formatter': 'verbose'\n },\n 'mail_admins': {\n 'level': 'ERROR',\n 'class': 'django.utils.log.AdminEmailHandler',\n 'filters': ['require_debug_false'],\n }\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console'],\n 'propagate': True,\n },\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.security': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.db.backends': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n }\n }\n}\n\nCACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n }\n}\n", "path": "settings/common.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for evalai project.\n\nGenerated by 'django-admin startproject' using Django 1.10.2.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport 
datetime\nimport os\nimport sys\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nAPPS_DIR = os.path.join(BASE_DIR, 'apps')\n\nsys.path.append(APPS_DIR)\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ.get('SECRET_KEY', 'random_secret_key')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = []\n\n\n# Application definition\n\nDEFAULT_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.sites',\n]\n\nOUR_APPS = [\n 'accounts',\n 'analytics',\n 'base',\n 'challenges',\n 'hosts',\n 'jobs',\n 'participants',\n 'submissions',\n 'web',\n]\n\nTHIRD_PARTY_APPS = [\n 'allauth',\n 'allauth.account',\n 'corsheaders',\n 'rest_auth',\n 'rest_auth.registration',\n 'rest_framework.authtoken',\n 'rest_framework',\n 'rest_framework_docs',\n 'rest_framework_expiring_authtoken',\n]\n\nINSTALLED_APPS = DEFAULT_APPS + OUR_APPS + THIRD_PARTY_APPS\n\nMIDDLEWARE = [\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'evalai.urls'\n\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'evalai.wsgi.application'\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\n\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nMEDIA_URL = \"/media/\"\n\nSITE_ID = 1\n\nREST_FRAMEWORK = {\n 'DEFAULT_PAGINATION_CLASS': (\n 'rest_framework.pagination.LimitOffsetPagination'),\n 'PAGE_SIZE': 10,\n 'DEFAULT_PERMISSION_CLASSES': [\n 'rest_framework.permissions.IsAuthenticatedOrReadOnly'\n ],\n 'DEFAULT_AUTHENTICATION_CLASSES': [\n 'rest_framework_expiring_authtoken.authentication.ExpiringTokenAuthentication',\n ],\n 'TEST_REQUEST_DEFAULT_FORMAT': 'json',\n 
'DEFAULT_THROTTLE_CLASSES': (\n 'rest_framework.throttling.AnonRateThrottle',\n 'rest_framework.throttling.UserRateThrottle'\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '100/minute',\n 'user': '100/minute'\n },\n 'DEFAULT_RENDERER_CLASSES': (\n 'rest_framework.renderers.JSONRenderer',\n )\n}\n\n# ALLAUTH SETTINGS\nACCOUNT_EMAIL_REQUIRED = True\nOLD_PASSWORD_FIELD_ENABLED = True\n\nAUTHENTICATION_BACKENDS = (\n # Needed to login by username in Django admin, regardless of `allauth`\n 'django.contrib.auth.backends.ModelBackend',\n # `allauth` specific authentication methods, such as login by e-mail\n 'allauth.account.auth_backends.AuthenticationBackend',\n)\n\n# CORS Settings\nCORS_ORIGIN_ALLOW_ALL = True\n\n# REST Framework Expiring Tokens Configuration\nEXPIRING_TOKEN_LIFESPAN = datetime.timedelta(days=7)\n\n# Logging\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'root': {\n 'level': 'INFO',\n 'handlers': ['console'],\n },\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse',\n },\n 'require_debug_true': {\n '()': 'django.utils.log.RequireDebugTrue',\n }\n },\n 'formatters': {\n 'simple': {\n 'format': '[%(asctime)s] %(levelname)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n },\n 'verbose': {\n 'format': '[%(asctime)s] %(levelname)s %(module)s %(message)s',\n 'datefmt': '%Y-%m-%d %H:%M:%S'\n }\n },\n 'handlers': {\n 'console': {\n 'level': 'INFO',\n 'filters': ['require_debug_true'],\n 'class': 'logging.StreamHandler',\n 'formatter': 'simple'\n },\n 'logfile': {\n 'level': 'DEBUG',\n 'class': 'logging.handlers.RotatingFileHandler',\n 'filename': \"/tmp/logfile\",\n 'maxBytes': 50000,\n 'backupCount': 10,\n 'formatter': 'verbose'\n },\n 'mail_admins': {\n 'level': 'ERROR',\n 'class': 'django.utils.log.AdminEmailHandler',\n 'filters': ['require_debug_false'],\n }\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console'],\n 'propagate': True,\n },\n 'django.request': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.security': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.db.backends': {\n 'handlers': ['mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n }\n }\n}\n\nCACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n }\n}\n", "path": "settings/common.py"}]} | 2,586 | 106 |
gh_patches_debug_2514 | rasdani/github-patches | git_diff | liberapay__liberapay.com-173 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Changing organization type doesn't work
In the identity tab, when I change the organization type to Organization instead of Business, my changes are not saved.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `liberapay/security/authentication.py`
Content:
```
1 """Defines website authentication helpers.
2 """
3 import binascii
4
5 from six.moves.urllib.parse import urlencode
6
7 from aspen import Response
8
9 from liberapay.constants import SESSION, SESSION_TIMEOUT
10 from liberapay.exceptions import AuthRequired
11 from liberapay.models.participant import Participant
12
13
14 class _ANON(object):
15 ANON = True
16 is_admin = False
17 id = None
18 __bool__ = __nonzero__ = lambda *a: False
19 get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)
20 __repr__ = lambda self: '<ANON>'
21
22
23 ANON = _ANON()
24
25
26 def _get_body(request):
27 try:
28 body = request.body
29 except Response:
30 return
31 if not isinstance(body, dict):
32 return
33 return body
34
35
36 def sign_in_with_form_data(body, state):
37 p = None
38 _, website = state['_'], state['website']
39
40 if body.get('log-in.id'):
41 id = body.pop('log-in.id')
42 k = 'email' if '@' in id else 'username'
43 p = Participant.authenticate(
44 k, 'password',
45 id, body.pop('log-in.password')
46 )
47 if p and p.status == 'closed':
48 p.update_status('active')
49
50 elif body.get('sign-in.username'):
51 if body.pop('sign-in.terms') != 'agree':
52 raise Response(400, 'you have to agree to the terms')
53 kind = body.pop('sign-in.kind')
54 if kind not in ('individual', 'organization'):
55 raise Response(400, 'bad kind')
56 with website.db.get_cursor() as c:
57 p = Participant.make_active(
58 body.pop('sign-in.username'), kind, body.pop('sign-in.password'),
59 cursor=c
60 )
61 p.add_email(body.pop('sign-in.email'), cursor=c)
62 p.authenticated = True
63
64 elif body.get('email-login.email'):
65 email = body.pop('email-login.email')
66 p = Participant._from_thing('email', email)
67 if p:
68 p.start_session()
69 qs = {'log-in.id': p.id, 'log-in.token': p.session_token}
70 p.send_email(
71 'password_reset',
72 email=email,
73 link=p.url('settings/', qs),
74 link_validity=SESSION_TIMEOUT,
75 )
76 state['email-login.sent-to'] = email
77 else:
78 state['sign-in.error'] = _(
79 "We didn't find any account whose primary email address is {0}.",
80 email
81 )
82 p = None
83
84 return p
85
86
87 def start_user_as_anon():
88 """Make sure we always have a user object, regardless of exceptions during authentication.
89 """
90 return {'user': ANON}
91
92
93 def authenticate_user_if_possible(request, state, user, _):
94 """This signs the user in.
95 """
96 if request.line.uri.startswith('/assets/'):
97 return
98
99 # HTTP auth
100 if 'Authorization' in request.headers:
101 header = request.headers['authorization']
102 if not header.startswith('Basic '):
103 raise Response(401, 'Unsupported authentication method')
104 try:
105 creds = binascii.a2b_base64(header[len('Basic '):]).split(':', 1)
106 except binascii.Error:
107 raise Response(400, 'Malformed "Authorization" header')
108 participant = Participant.authenticate('id', 'password', *creds)
109 if not participant:
110 raise Response(401)
111 return {'user': participant}
112
113 # Cookie and form auth
114 # We want to try cookie auth first, but we want form auth to supersede it
115 p = None
116 response = state.setdefault('response', Response())
117 if SESSION in request.headers.cookie:
118 creds = request.headers.cookie[SESSION].value.split(':', 1)
119 p = Participant.authenticate('id', 'session', *creds)
120 if p:
121 state['user'] = p
122 session_p, p = p, None
123 session_suffix = ''
124 redirect_url = request.line.uri
125 if request.method == 'POST':
126 body = _get_body(request)
127 if body:
128 p = sign_in_with_form_data(body, state)
129 carry_on = body.pop('email-login.carry-on', None)
130 if not p and carry_on:
131 p_email = session_p and (
132 session_p.email or session_p.get_emails()[0].address
133 )
134 if p_email != carry_on:
135 state['email-login.carry-on'] = carry_on
136 raise AuthRequired
137 elif request.method == 'GET' and request.qs.get('log-in.id'):
138 id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')
139 p = Participant.authenticate('id', 'session', id, token)
140 if not p and (not session_p or session_p.id != id):
141 raise Response(400, _("This login link is expired or invalid."))
142 else:
143 qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''
144 redirect_url = request.path.raw + qs
145 session_p = p
146 session_suffix = '.em'
147 if p:
148 if session_p:
149 session_p.sign_out(response.headers.cookie)
150 p.sign_in(response.headers.cookie, session_suffix)
151 state['user'] = p
152 if request.body.pop('form.repost', None) != 'true':
153 response.redirect(redirect_url)
154
155
156 def add_auth_to_response(response, request=None, user=ANON):
157 if request is None:
158 return # early parsing must've failed
159 if request.line.uri.startswith('/assets/'):
160 return # assets never get auth headers
161
162 if SESSION in request.headers.cookie:
163 if not user.ANON:
164 user.keep_signed_in(response.headers.cookie)
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/liberapay/security/authentication.py b/liberapay/security/authentication.py
--- a/liberapay/security/authentication.py
+++ b/liberapay/security/authentication.py
@@ -44,6 +44,8 @@
k, 'password',
id, body.pop('log-in.password')
)
+ if not p:
+ state['sign-in.error'] = _("Bad username or password.")
if p and p.status == 'closed':
p.update_status('active')
| {"golden_diff": "diff --git a/liberapay/security/authentication.py b/liberapay/security/authentication.py\n--- a/liberapay/security/authentication.py\n+++ b/liberapay/security/authentication.py\n@@ -44,6 +44,8 @@\n k, 'password',\n id, body.pop('log-in.password')\n )\n+ if not p:\n+ state['sign-in.error'] = _(\"Bad username or password.\")\n if p and p.status == 'closed':\n p.update_status('active')\n", "issue": "Changing organization type doesn't work\nIn identity tab, when I change the organization type to set Organization instead of Business, my changes are not saved. \n\n", "before_files": [{"content": "\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\n\nfrom six.moves.urllib.parse import urlencode\n\nfrom aspen import Response\n\nfrom liberapay.constants import SESSION, SESSION_TIMEOUT\nfrom liberapay.exceptions import AuthRequired\nfrom liberapay.models.participant import Participant\n\n\nclass _ANON(object):\n ANON = True\n is_admin = False\n id = None\n __bool__ = __nonzero__ = lambda *a: False\n get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)\n __repr__ = lambda self: '<ANON>'\n\n\nANON = _ANON()\n\n\ndef _get_body(request):\n try:\n body = request.body\n except Response:\n return\n if not isinstance(body, dict):\n return\n return body\n\n\ndef sign_in_with_form_data(body, state):\n p = None\n _, website = state['_'], state['website']\n\n if body.get('log-in.id'):\n id = body.pop('log-in.id')\n k = 'email' if '@' in id else 'username'\n p = Participant.authenticate(\n k, 'password',\n id, body.pop('log-in.password')\n )\n if p and p.status == 'closed':\n p.update_status('active')\n\n elif body.get('sign-in.username'):\n if body.pop('sign-in.terms') != 'agree':\n raise Response(400, 'you have to agree to the terms')\n kind = body.pop('sign-in.kind')\n if kind not in ('individual', 'organization'):\n raise Response(400, 'bad kind')\n with website.db.get_cursor() as c:\n p = Participant.make_active(\n body.pop('sign-in.username'), kind, body.pop('sign-in.password'),\n cursor=c\n )\n p.add_email(body.pop('sign-in.email'), cursor=c)\n p.authenticated = True\n\n elif body.get('email-login.email'):\n email = body.pop('email-login.email')\n p = Participant._from_thing('email', email)\n if p:\n p.start_session()\n qs = {'log-in.id': p.id, 'log-in.token': p.session_token}\n p.send_email(\n 'password_reset',\n email=email,\n link=p.url('settings/', qs),\n link_validity=SESSION_TIMEOUT,\n )\n state['email-login.sent-to'] = email\n else:\n state['sign-in.error'] = _(\n \"We didn't find any account whose primary email address is {0}.\",\n email\n )\n p = None\n\n return p\n\n\ndef start_user_as_anon():\n \"\"\"Make sure we always have a user object, regardless of exceptions during authentication.\n \"\"\"\n return {'user': ANON}\n\n\ndef authenticate_user_if_possible(request, state, user, _):\n \"\"\"This signs the user in.\n \"\"\"\n if request.line.uri.startswith('/assets/'):\n return\n\n # HTTP auth\n if 'Authorization' in request.headers:\n header = request.headers['authorization']\n if not header.startswith('Basic '):\n raise Response(401, 'Unsupported authentication method')\n try:\n creds = binascii.a2b_base64(header[len('Basic '):]).split(':', 1)\n except binascii.Error:\n raise Response(400, 'Malformed \"Authorization\" header')\n participant = Participant.authenticate('id', 'password', *creds)\n if not participant:\n raise Response(401)\n return {'user': participant}\n\n # Cookie and form auth\n # We want to try cookie auth first, but we 
want form auth to supersede it\n p = None\n response = state.setdefault('response', Response())\n if SESSION in request.headers.cookie:\n creds = request.headers.cookie[SESSION].value.split(':', 1)\n p = Participant.authenticate('id', 'session', *creds)\n if p:\n state['user'] = p\n session_p, p = p, None\n session_suffix = ''\n redirect_url = request.line.uri\n if request.method == 'POST':\n body = _get_body(request)\n if body:\n p = sign_in_with_form_data(body, state)\n carry_on = body.pop('email-login.carry-on', None)\n if not p and carry_on:\n p_email = session_p and (\n session_p.email or session_p.get_emails()[0].address\n )\n if p_email != carry_on:\n state['email-login.carry-on'] = carry_on\n raise AuthRequired\n elif request.method == 'GET' and request.qs.get('log-in.id'):\n id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')\n p = Participant.authenticate('id', 'session', id, token)\n if not p and (not session_p or session_p.id != id):\n raise Response(400, _(\"This login link is expired or invalid.\"))\n else:\n qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''\n redirect_url = request.path.raw + qs\n session_p = p\n session_suffix = '.em'\n if p:\n if session_p:\n session_p.sign_out(response.headers.cookie)\n p.sign_in(response.headers.cookie, session_suffix)\n state['user'] = p\n if request.body.pop('form.repost', None) != 'true':\n response.redirect(redirect_url)\n\n\ndef add_auth_to_response(response, request=None, user=ANON):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers\n\n if SESSION in request.headers.cookie:\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "liberapay/security/authentication.py"}], "after_files": [{"content": "\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\n\nfrom six.moves.urllib.parse import urlencode\n\nfrom aspen import Response\n\nfrom liberapay.constants import SESSION, SESSION_TIMEOUT\nfrom liberapay.exceptions import AuthRequired\nfrom liberapay.models.participant import Participant\n\n\nclass _ANON(object):\n ANON = True\n is_admin = False\n id = None\n __bool__ = __nonzero__ = lambda *a: False\n get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)\n __repr__ = lambda self: '<ANON>'\n\n\nANON = _ANON()\n\n\ndef _get_body(request):\n try:\n body = request.body\n except Response:\n return\n if not isinstance(body, dict):\n return\n return body\n\n\ndef sign_in_with_form_data(body, state):\n p = None\n _, website = state['_'], state['website']\n\n if body.get('log-in.id'):\n id = body.pop('log-in.id')\n k = 'email' if '@' in id else 'username'\n p = Participant.authenticate(\n k, 'password',\n id, body.pop('log-in.password')\n )\n if not p:\n state['sign-in.error'] = _(\"Bad username or password.\")\n if p and p.status == 'closed':\n p.update_status('active')\n\n elif body.get('sign-in.username'):\n if body.pop('sign-in.terms') != 'agree':\n raise Response(400, 'you have to agree to the terms')\n kind = body.pop('sign-in.kind')\n if kind not in ('individual', 'organization'):\n raise Response(400, 'bad kind')\n with website.db.get_cursor() as c:\n p = Participant.make_active(\n body.pop('sign-in.username'), kind, body.pop('sign-in.password'),\n cursor=c\n )\n p.add_email(body.pop('sign-in.email'), cursor=c)\n p.authenticated = True\n\n elif body.get('email-login.email'):\n email = body.pop('email-login.email')\n p = 
Participant._from_thing('email', email)\n if p:\n p.start_session()\n qs = {'log-in.id': p.id, 'log-in.token': p.session_token}\n p.send_email(\n 'password_reset',\n email=email,\n link=p.url('settings/', qs),\n link_validity=SESSION_TIMEOUT,\n )\n state['email-login.sent-to'] = email\n else:\n state['sign-in.error'] = _(\n \"We didn't find any account whose primary email address is {0}.\",\n email\n )\n p = None\n\n return p\n\n\ndef start_user_as_anon():\n \"\"\"Make sure we always have a user object, regardless of exceptions during authentication.\n \"\"\"\n return {'user': ANON}\n\n\ndef authenticate_user_if_possible(request, state, user, _):\n \"\"\"This signs the user in.\n \"\"\"\n if request.line.uri.startswith('/assets/'):\n return\n\n # HTTP auth\n if 'Authorization' in request.headers:\n header = request.headers['authorization']\n if not header.startswith('Basic '):\n raise Response(401, 'Unsupported authentication method')\n try:\n creds = binascii.a2b_base64(header[len('Basic '):]).split(':', 1)\n except binascii.Error:\n raise Response(400, 'Malformed \"Authorization\" header')\n participant = Participant.authenticate('id', 'password', *creds)\n if not participant:\n raise Response(401)\n return {'user': participant}\n\n # Cookie and form auth\n # We want to try cookie auth first, but we want form auth to supersede it\n p = None\n response = state.setdefault('response', Response())\n if SESSION in request.headers.cookie:\n creds = request.headers.cookie[SESSION].value.split(':', 1)\n p = Participant.authenticate('id', 'session', *creds)\n if p:\n state['user'] = p\n session_p, p = p, None\n session_suffix = ''\n redirect_url = request.line.uri\n if request.method == 'POST':\n body = _get_body(request)\n if body:\n p = sign_in_with_form_data(body, state)\n carry_on = body.pop('email-login.carry-on', None)\n if not p and carry_on:\n p_email = session_p and (\n session_p.email or session_p.get_emails()[0].address\n )\n if p_email != carry_on:\n state['email-login.carry-on'] = carry_on\n raise AuthRequired\n elif request.method == 'GET' and request.qs.get('log-in.id'):\n id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')\n p = Participant.authenticate('id', 'session', id, token)\n if not p and (not session_p or session_p.id != id):\n raise Response(400, _(\"This login link is expired or invalid.\"))\n else:\n qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''\n redirect_url = request.path.raw + qs\n session_p = p\n session_suffix = '.em'\n if p:\n if session_p:\n session_p.sign_out(response.headers.cookie)\n p.sign_in(response.headers.cookie, session_suffix)\n state['user'] = p\n if request.body.pop('form.repost', None) != 'true':\n response.redirect(redirect_url)\n\n\ndef add_auth_to_response(response, request=None, user=ANON):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers\n\n if SESSION in request.headers.cookie:\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "liberapay/security/authentication.py"}]} | 1,944 | 108 |
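Editor's note on the record above: the golden diff modifies `sign_in_with_form_data` so that a failed password check sets `state['sign-in.error']` instead of failing silently. The patched region reads roughly as follows (an excerpt reconstructed from the diff for readability, not a runnable standalone snippet):

```python
p = Participant.authenticate(
    k, 'password',
    id, body.pop('log-in.password')
)
if not p:
    # New in the patch: surface the failure instead of dropping it silently.
    state['sign-in.error'] = _("Bad username or password.")
```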
gh_patches_debug_28945 | rasdani/github-patches | git_diff | huggingface__transformers-13400 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Zero-shot classification pipeline truncation support
Transformers 4.10.0 brought [a change](https://github.com/huggingface/transformers/pull/13299/files#diff-c5af53af9b08fb383b49d7a07c1a56c890198b5cd48adc97aeef753fe2e7d60dR91) that modified the default truncation strategy to TruncationStrategy.DO_NOT_TRUNCATE for the ZeroShotClassificationPipeline.
That uncovered an issue in that the [ZeroShotClassificationPipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_classification.py#L217 ) doesn't appear to pass kwargs to the parent's call method. So even when calling the pipeline with truncation=True, it doesn't allow for truncation.
Thank you for the assistance in advance, appreciate all the work you guys do.
--- END ISSUE ---
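Editor's note: the report boils down to a keyword argument being dropped before tokenization. A minimal reproduction sketch; the checkpoint choice and the long input are illustrative assumptions, not taken from the report:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # assumes a default NLI checkpoint is available

# In transformers 4.10.0, `truncation=True` never reaches the tokenizer here,
# because __call__ does not forward **kwargs to the parent pipeline call,
# so long sequences can exceed the model's maximum input length.
classifier(
    "some very long sequence " * 1000,
    candidate_labels=["politics", "economy"],
    truncation=True,
)
```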
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/transformers/pipelines/zero_shot_classification.py`
Content:
```
1 from typing import List, Union
2
3 import numpy as np
4
5 from ..file_utils import add_end_docstrings, is_torch_available
6 from ..tokenization_utils import TruncationStrategy
7 from ..utils import logging
8 from .base import PIPELINE_INIT_ARGS, ArgumentHandler, Pipeline
9
10
11 if is_torch_available():
12 import torch
13
14 logger = logging.get_logger(__name__)
15
16
17 class ZeroShotClassificationArgumentHandler(ArgumentHandler):
18 """
19 Handles arguments for zero-shot for text classification by turning each possible label into an NLI
20 premise/hypothesis pair.
21 """
22
23 def _parse_labels(self, labels):
24 if isinstance(labels, str):
25 labels = [label.strip() for label in labels.split(",")]
26 return labels
27
28 def __call__(self, sequences, labels, hypothesis_template):
29 if len(labels) == 0 or len(sequences) == 0:
30 raise ValueError("You must include at least one label and at least one sequence.")
31 if hypothesis_template.format(labels[0]) == hypothesis_template:
32 raise ValueError(
33 (
34 'The provided hypothesis_template "{}" was not able to be formatted with the target labels. '
35 "Make sure the passed template includes formatting syntax such as {{}} where the label should go."
36 ).format(hypothesis_template)
37 )
38
39 if isinstance(sequences, str):
40 sequences = [sequences]
41 labels = self._parse_labels(labels)
42
43 sequence_pairs = []
44 for sequence in sequences:
45 sequence_pairs.extend([[sequence, hypothesis_template.format(label)] for label in labels])
46
47 return sequence_pairs
48
49
50 @add_end_docstrings(PIPELINE_INIT_ARGS)
51 class ZeroShotClassificationPipeline(Pipeline):
52 """
53 NLI-based zero-shot classification pipeline using a :obj:`ModelForSequenceClassification` trained on NLI (natural
54 language inference) tasks.
55
56 Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis
57 pair and passed to the pretrained model. Then, the logit for `entailment` is taken as the logit for the candidate
58 label being valid. Any NLI model can be used, but the id of the `entailment` label must be included in the model
59 config's :attr:`~transformers.PretrainedConfig.label2id`.
60
61 This NLI pipeline can currently be loaded from :func:`~transformers.pipeline` using the following task identifier:
62 :obj:`"zero-shot-classification"`.
63
64 The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list
65 of available models on `huggingface.co/models <https://huggingface.co/models?search=nli>`__.
66 """
67
68 def __init__(self, args_parser=ZeroShotClassificationArgumentHandler(), *args, **kwargs):
69 super().__init__(*args, **kwargs)
70 self._args_parser = args_parser
71 if self.entailment_id == -1:
72 logger.warning(
73 "Failed to determine 'entailment' label id from the label2id mapping in the model config. Setting to "
74 "-1. Define a descriptive label2id mapping in the model config to ensure correct outputs."
75 )
76
77 @property
78 def entailment_id(self):
79 for label, ind in self.model.config.label2id.items():
80 if label.lower().startswith("entail"):
81 return ind
82 return -1
83
84 def _parse_and_tokenize(
85 self,
86 sequences,
87 candidate_labels,
88 hypothesis_template,
89 padding=True,
90 add_special_tokens=True,
91 truncation=TruncationStrategy.DO_NOT_TRUNCATE,
92 **kwargs
93 ):
94 """
95 Parse arguments and tokenize only_first so that hypothesis (label) is not truncated
96 """
97 sequence_pairs = self._args_parser(sequences, candidate_labels, hypothesis_template)
98 return_tensors = self.framework
99 if getattr(self.tokenizer, "pad_token", None) is None:
100 # XXX some tokenizers do not have a padding token, we use simple lists
101 # and no padding then
102 logger.warning("The tokenizer {self.tokenizer} does not have a pad token, we're not running it as a batch")
103 padding = False
104 inputs = []
105 for sequence_pair in sequence_pairs:
106 model_input = self.tokenizer(
107 text=sequence_pair[0],
108 text_pair=sequence_pair[1],
109 add_special_tokens=add_special_tokens,
110 return_tensors=return_tensors,
111 padding=padding,
112 truncation=truncation,
113 )
114 inputs.append(model_input)
115 else:
116 inputs = self.tokenizer(
117 sequence_pairs,
118 add_special_tokens=add_special_tokens,
119 return_tensors=return_tensors,
120 padding=padding,
121 truncation=truncation,
122 )
123
124 return inputs
125
126 def _forward(self, inputs, return_tensors=False):
127 """
128 Internal framework specific forward dispatching
129
130 Args:
131 inputs: dict holding all the keyword arguments for required by the model forward method.
132 return_tensors: Whether to return native framework (pt/tf) tensors rather than numpy array
133
134 Returns:
135 Numpy array
136 """
137 # Encode for forward
138 with self.device_placement():
139 if self.framework == "tf":
140 if isinstance(inputs, list):
141 predictions = []
142 for input_ in inputs:
143 prediction = self.model(input_.data, training=False)[0]
144 predictions.append(prediction)
145 else:
146 predictions = self.model(inputs.data, training=False)[0]
147 else:
148 with torch.no_grad():
149 if isinstance(inputs, list):
150 predictions = []
151 for input_ in inputs:
152 model_input = self.ensure_tensor_on_device(**input_)
153 prediction = self.model(**model_input)[0].cpu()
154 predictions.append(prediction)
155
156 else:
157 inputs = self.ensure_tensor_on_device(**inputs)
158 predictions = self.model(**inputs)[0].cpu()
159
160 if return_tensors:
161 return predictions
162 else:
163 if isinstance(predictions, list):
164 predictions = np.array([p.numpy() for p in predictions])
165 else:
166 predictions = predictions.numpy()
167 return predictions
168
169 def __call__(
170 self,
171 sequences: Union[str, List[str]],
172 candidate_labels,
173 hypothesis_template="This example is {}.",
174 multi_label=False,
175 **kwargs,
176 ):
177 """
178 Classify the sequence(s) given as inputs. See the :obj:`~transformers.ZeroShotClassificationPipeline`
179 documentation for more information.
180
181 Args:
182 sequences (:obj:`str` or :obj:`List[str]`):
183 The sequence(s) to classify, will be truncated if the model input is too large.
184 candidate_labels (:obj:`str` or :obj:`List[str]`):
185 The set of possible class labels to classify each sequence into. Can be a single label, a string of
186 comma-separated labels, or a list of labels.
187 hypothesis_template (:obj:`str`, `optional`, defaults to :obj:`"This example is {}."`):
188 The template used to turn each label into an NLI-style hypothesis. This template must include a {} or
189 similar syntax for the candidate label to be inserted into the template. For example, the default
190 template is :obj:`"This example is {}."` With the candidate label :obj:`"sports"`, this would be fed
191 into the model like :obj:`"<cls> sequence to classify <sep> This example is sports . <sep>"`. The
192 default template works well in many cases, but it may be worthwhile to experiment with different
193 templates depending on the task setting.
194 multi_label (:obj:`bool`, `optional`, defaults to :obj:`False`):
195 Whether or not multiple candidate labels can be true. If :obj:`False`, the scores are normalized such
196 that the sum of the label likelihoods for each sequence is 1. If :obj:`True`, the labels are considered
197 independent and probabilities are normalized for each candidate by doing a softmax of the entailment
198 score vs. the contradiction score.
199
200 Return:
201 A :obj:`dict` or a list of :obj:`dict`: Each result comes as a dictionary with the following keys:
202
203 - **sequence** (:obj:`str`) -- The sequence for which this is the output.
204 - **labels** (:obj:`List[str]`) -- The labels sorted by order of likelihood.
205 - **scores** (:obj:`List[float]`) -- The probabilities for each of the labels.
206 """
207 if "multi_class" in kwargs and kwargs["multi_class"] is not None:
208 multi_label = kwargs.pop("multi_class")
209 logger.warning(
210 "The `multi_class` argument has been deprecated and renamed to `multi_label`. "
211 "`multi_class` will be removed in a future version of Transformers."
212 )
213
214 if sequences and isinstance(sequences, str):
215 sequences = [sequences]
216
217 outputs = super().__call__(sequences, candidate_labels, hypothesis_template)
218 if isinstance(outputs, list):
219 # XXX: Some tokenizers cannot handle batching because they don't
220 # have pad_token, so outputs will be a list, however, because outputs
221 # is only n logits and sequence_length is not present anymore, we
222 # can recreate a tensor out of outputs.
223 outputs = np.array(outputs)
224 num_sequences = len(sequences)
225 candidate_labels = self._args_parser._parse_labels(candidate_labels)
226 reshaped_outputs = outputs.reshape((num_sequences, len(candidate_labels), -1))
227
228 if len(candidate_labels) == 1:
229 multi_label = True
230
231 if not multi_label:
232 # softmax the "entailment" logits over all candidate labels
233 entail_logits = reshaped_outputs[..., self.entailment_id]
234 scores = np.exp(entail_logits) / np.exp(entail_logits).sum(-1, keepdims=True)
235 else:
236 # softmax over the entailment vs. contradiction dim for each label independently
237 entailment_id = self.entailment_id
238 contradiction_id = -1 if entailment_id == 0 else 0
239 entail_contr_logits = reshaped_outputs[..., [contradiction_id, entailment_id]]
240 scores = np.exp(entail_contr_logits) / np.exp(entail_contr_logits).sum(-1, keepdims=True)
241 scores = scores[..., 1]
242
243 result = []
244 for iseq in range(num_sequences):
245 top_inds = list(reversed(scores[iseq].argsort()))
246 result.append(
247 {
248 "sequence": sequences if isinstance(sequences, str) else sequences[iseq],
249 "labels": [candidate_labels[i] for i in top_inds],
250 "scores": scores[iseq][top_inds].tolist(),
251 }
252 )
253
254 if len(result) == 1:
255 return result[0]
256 return result
257
```
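For orientation, and not part of the repository file above: a minimal usage sketch of the pipeline whose docstring is quoted in the listing. The checkpoint name is an assumption — any NLI model whose config maps an `entailment` label behaves the same way.

```python
# Illustrative only: exercises hypothesis_template and multi_label as documented above.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "Who are you voting for in 2020?",
    candidate_labels=["politics", "economics", "public health"],
    hypothesis_template="This text is about {}.",
    multi_label=True,  # each label scored independently via entailment vs. contradiction
)
print(result["labels"], result["scores"])
```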
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/transformers/pipelines/zero_shot_classification.py b/src/transformers/pipelines/zero_shot_classification.py
--- a/src/transformers/pipelines/zero_shot_classification.py
+++ b/src/transformers/pipelines/zero_shot_classification.py
@@ -88,7 +88,7 @@
hypothesis_template,
padding=True,
add_special_tokens=True,
- truncation=TruncationStrategy.DO_NOT_TRUNCATE,
+ truncation=TruncationStrategy.ONLY_FIRST,
**kwargs
):
"""
@@ -113,13 +113,31 @@
)
inputs.append(model_input)
else:
- inputs = self.tokenizer(
- sequence_pairs,
- add_special_tokens=add_special_tokens,
- return_tensors=return_tensors,
- padding=padding,
- truncation=truncation,
- )
+ try:
+ inputs = self.tokenizer(
+ sequence_pairs,
+ add_special_tokens=add_special_tokens,
+ return_tensors=return_tensors,
+ padding=padding,
+ truncation=truncation,
+ )
+ except Exception as e:
+ if "too short" in str(e):
+ # tokenizers might yell that we want to truncate
+ # to a value that is not even reached by the input.
+ # In that case we don't want to truncate.
+ # It seems there's not a really better way to catch that
+ # exception.
+
+ inputs = self.tokenizer(
+ sequence_pairs,
+ add_special_tokens=add_special_tokens,
+ return_tensors=return_tensors,
+ padding=padding,
+ truncation=TruncationStrategy.DO_NOT_TRUNCATE,
+ )
+ else:
+ raise e
return inputs
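The hunk above does two things: the default truncation strategy becomes `ONLY_FIRST`, so over-length inputs are clipped on the sequence side rather than rejected, and tokenizers that raise because the input is already shorter than the requested length fall back to `DO_NOT_TRUNCATE`. A rough sketch of the behaviour this enables — the checkpoint and the artificially long input are illustrative assumptions, not taken from the issue:

```python
# Hypothetical check that over-length inputs are now truncated instead of failing.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

very_long_text = "The plot was gripping and the acting superb. " * 500  # far beyond the model max length
out = classifier(very_long_text, candidate_labels=["positive", "negative"])
print(out["labels"][0], round(out["scores"][0], 3))
```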
| {"golden_diff": "diff --git a/src/transformers/pipelines/zero_shot_classification.py b/src/transformers/pipelines/zero_shot_classification.py\n--- a/src/transformers/pipelines/zero_shot_classification.py\n+++ b/src/transformers/pipelines/zero_shot_classification.py\n@@ -88,7 +88,7 @@\n hypothesis_template,\n padding=True,\n add_special_tokens=True,\n- truncation=TruncationStrategy.DO_NOT_TRUNCATE,\n+ truncation=TruncationStrategy.ONLY_FIRST,\n **kwargs\n ):\n \"\"\"\n@@ -113,13 +113,31 @@\n )\n inputs.append(model_input)\n else:\n- inputs = self.tokenizer(\n- sequence_pairs,\n- add_special_tokens=add_special_tokens,\n- return_tensors=return_tensors,\n- padding=padding,\n- truncation=truncation,\n- )\n+ try:\n+ inputs = self.tokenizer(\n+ sequence_pairs,\n+ add_special_tokens=add_special_tokens,\n+ return_tensors=return_tensors,\n+ padding=padding,\n+ truncation=truncation,\n+ )\n+ except Exception as e:\n+ if \"too short\" in str(e):\n+ # tokenizers might yell that we want to truncate\n+ # to a value that is not even reached by the input.\n+ # In that case we don't want to truncate.\n+ # It seems there's not a really better way to catch that\n+ # exception.\n+\n+ inputs = self.tokenizer(\n+ sequence_pairs,\n+ add_special_tokens=add_special_tokens,\n+ return_tensors=return_tensors,\n+ padding=padding,\n+ truncation=TruncationStrategy.DO_NOT_TRUNCATE,\n+ )\n+ else:\n+ raise e\n \n return inputs\n", "issue": "Zero-shot classification pipeline truncation support\nTransformers 4.10.0 brought [a change](https://github.com/huggingface/transformers/pull/13299/files#diff-c5af53af9b08fb383b49d7a07c1a56c890198b5cd48adc97aeef753fe2e7d60dR91) that modified the default truncation strategy to TruncationStrategy.DO_NOT_TRUNCATE for the ZeroShotClassificationPipeline.\r\n\r\nThat uncovered an issue in that the [ZeroShotClassificationPipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_classification.py#L217 ) doesn't appear to pass kwargs to the parent's call method. So even when calling the pipeline with truncation=True, it doesn't allow for truncation.\r\n\r\nThank you for the assistance in advance, appreciate all the work you guys do. \n", "before_files": [{"content": "from typing import List, Union\n\nimport numpy as np\n\nfrom ..file_utils import add_end_docstrings, is_torch_available\nfrom ..tokenization_utils import TruncationStrategy\nfrom ..utils import logging\nfrom .base import PIPELINE_INIT_ARGS, ArgumentHandler, Pipeline\n\n\nif is_torch_available():\n import torch\n\nlogger = logging.get_logger(__name__)\n\n\nclass ZeroShotClassificationArgumentHandler(ArgumentHandler):\n \"\"\"\n Handles arguments for zero-shot for text classification by turning each possible label into an NLI\n premise/hypothesis pair.\n \"\"\"\n\n def _parse_labels(self, labels):\n if isinstance(labels, str):\n labels = [label.strip() for label in labels.split(\",\")]\n return labels\n\n def __call__(self, sequences, labels, hypothesis_template):\n if len(labels) == 0 or len(sequences) == 0:\n raise ValueError(\"You must include at least one label and at least one sequence.\")\n if hypothesis_template.format(labels[0]) == hypothesis_template:\n raise ValueError(\n (\n 'The provided hypothesis_template \"{}\" was not able to be formatted with the target labels. 
'\n \"Make sure the passed template includes formatting syntax such as {{}} where the label should go.\"\n ).format(hypothesis_template)\n )\n\n if isinstance(sequences, str):\n sequences = [sequences]\n labels = self._parse_labels(labels)\n\n sequence_pairs = []\n for sequence in sequences:\n sequence_pairs.extend([[sequence, hypothesis_template.format(label)] for label in labels])\n\n return sequence_pairs\n\n\n@add_end_docstrings(PIPELINE_INIT_ARGS)\nclass ZeroShotClassificationPipeline(Pipeline):\n \"\"\"\n NLI-based zero-shot classification pipeline using a :obj:`ModelForSequenceClassification` trained on NLI (natural\n language inference) tasks.\n\n Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis\n pair and passed to the pretrained model. Then, the logit for `entailment` is taken as the logit for the candidate\n label being valid. Any NLI model can be used, but the id of the `entailment` label must be included in the model\n config's :attr:`~transformers.PretrainedConfig.label2id`.\n\n This NLI pipeline can currently be loaded from :func:`~transformers.pipeline` using the following task identifier:\n :obj:`\"zero-shot-classification\"`.\n\n The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list\n of available models on `huggingface.co/models <https://huggingface.co/models?search=nli>`__.\n \"\"\"\n\n def __init__(self, args_parser=ZeroShotClassificationArgumentHandler(), *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._args_parser = args_parser\n if self.entailment_id == -1:\n logger.warning(\n \"Failed to determine 'entailment' label id from the label2id mapping in the model config. Setting to \"\n \"-1. Define a descriptive label2id mapping in the model config to ensure correct outputs.\"\n )\n\n @property\n def entailment_id(self):\n for label, ind in self.model.config.label2id.items():\n if label.lower().startswith(\"entail\"):\n return ind\n return -1\n\n def _parse_and_tokenize(\n self,\n sequences,\n candidate_labels,\n hypothesis_template,\n padding=True,\n add_special_tokens=True,\n truncation=TruncationStrategy.DO_NOT_TRUNCATE,\n **kwargs\n ):\n \"\"\"\n Parse arguments and tokenize only_first so that hypothesis (label) is not truncated\n \"\"\"\n sequence_pairs = self._args_parser(sequences, candidate_labels, hypothesis_template)\n return_tensors = self.framework\n if getattr(self.tokenizer, \"pad_token\", None) is None:\n # XXX some tokenizers do not have a padding token, we use simple lists\n # and no padding then\n logger.warning(\"The tokenizer {self.tokenizer} does not have a pad token, we're not running it as a batch\")\n padding = False\n inputs = []\n for sequence_pair in sequence_pairs:\n model_input = self.tokenizer(\n text=sequence_pair[0],\n text_pair=sequence_pair[1],\n add_special_tokens=add_special_tokens,\n return_tensors=return_tensors,\n padding=padding,\n truncation=truncation,\n )\n inputs.append(model_input)\n else:\n inputs = self.tokenizer(\n sequence_pairs,\n add_special_tokens=add_special_tokens,\n return_tensors=return_tensors,\n padding=padding,\n truncation=truncation,\n )\n\n return inputs\n\n def _forward(self, inputs, return_tensors=False):\n \"\"\"\n Internal framework specific forward dispatching\n\n Args:\n inputs: dict holding all the keyword arguments for required by the model forward method.\n return_tensors: Whether to return native framework (pt/tf) tensors rather than numpy array\n\n Returns:\n 
Numpy array\n \"\"\"\n # Encode for forward\n with self.device_placement():\n if self.framework == \"tf\":\n if isinstance(inputs, list):\n predictions = []\n for input_ in inputs:\n prediction = self.model(input_.data, training=False)[0]\n predictions.append(prediction)\n else:\n predictions = self.model(inputs.data, training=False)[0]\n else:\n with torch.no_grad():\n if isinstance(inputs, list):\n predictions = []\n for input_ in inputs:\n model_input = self.ensure_tensor_on_device(**input_)\n prediction = self.model(**model_input)[0].cpu()\n predictions.append(prediction)\n\n else:\n inputs = self.ensure_tensor_on_device(**inputs)\n predictions = self.model(**inputs)[0].cpu()\n\n if return_tensors:\n return predictions\n else:\n if isinstance(predictions, list):\n predictions = np.array([p.numpy() for p in predictions])\n else:\n predictions = predictions.numpy()\n return predictions\n\n def __call__(\n self,\n sequences: Union[str, List[str]],\n candidate_labels,\n hypothesis_template=\"This example is {}.\",\n multi_label=False,\n **kwargs,\n ):\n \"\"\"\n Classify the sequence(s) given as inputs. See the :obj:`~transformers.ZeroShotClassificationPipeline`\n documentation for more information.\n\n Args:\n sequences (:obj:`str` or :obj:`List[str]`):\n The sequence(s) to classify, will be truncated if the model input is too large.\n candidate_labels (:obj:`str` or :obj:`List[str]`):\n The set of possible class labels to classify each sequence into. Can be a single label, a string of\n comma-separated labels, or a list of labels.\n hypothesis_template (:obj:`str`, `optional`, defaults to :obj:`\"This example is {}.\"`):\n The template used to turn each label into an NLI-style hypothesis. This template must include a {} or\n similar syntax for the candidate label to be inserted into the template. For example, the default\n template is :obj:`\"This example is {}.\"` With the candidate label :obj:`\"sports\"`, this would be fed\n into the model like :obj:`\"<cls> sequence to classify <sep> This example is sports . <sep>\"`. The\n default template works well in many cases, but it may be worthwhile to experiment with different\n templates depending on the task setting.\n multi_label (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether or not multiple candidate labels can be true. If :obj:`False`, the scores are normalized such\n that the sum of the label likelihoods for each sequence is 1. If :obj:`True`, the labels are considered\n independent and probabilities are normalized for each candidate by doing a softmax of the entailment\n score vs. the contradiction score.\n\n Return:\n A :obj:`dict` or a list of :obj:`dict`: Each result comes as a dictionary with the following keys:\n\n - **sequence** (:obj:`str`) -- The sequence for which this is the output.\n - **labels** (:obj:`List[str]`) -- The labels sorted by order of likelihood.\n - **scores** (:obj:`List[float]`) -- The probabilities for each of the labels.\n \"\"\"\n if \"multi_class\" in kwargs and kwargs[\"multi_class\"] is not None:\n multi_label = kwargs.pop(\"multi_class\")\n logger.warning(\n \"The `multi_class` argument has been deprecated and renamed to `multi_label`. 
\"\n \"`multi_class` will be removed in a future version of Transformers.\"\n )\n\n if sequences and isinstance(sequences, str):\n sequences = [sequences]\n\n outputs = super().__call__(sequences, candidate_labels, hypothesis_template)\n if isinstance(outputs, list):\n # XXX: Some tokenizers cannot handle batching because they don't\n # have pad_token, so outputs will be a list, however, because outputs\n # is only n logits and sequence_length is not present anymore, we\n # can recreate a tensor out of outputs.\n outputs = np.array(outputs)\n num_sequences = len(sequences)\n candidate_labels = self._args_parser._parse_labels(candidate_labels)\n reshaped_outputs = outputs.reshape((num_sequences, len(candidate_labels), -1))\n\n if len(candidate_labels) == 1:\n multi_label = True\n\n if not multi_label:\n # softmax the \"entailment\" logits over all candidate labels\n entail_logits = reshaped_outputs[..., self.entailment_id]\n scores = np.exp(entail_logits) / np.exp(entail_logits).sum(-1, keepdims=True)\n else:\n # softmax over the entailment vs. contradiction dim for each label independently\n entailment_id = self.entailment_id\n contradiction_id = -1 if entailment_id == 0 else 0\n entail_contr_logits = reshaped_outputs[..., [contradiction_id, entailment_id]]\n scores = np.exp(entail_contr_logits) / np.exp(entail_contr_logits).sum(-1, keepdims=True)\n scores = scores[..., 1]\n\n result = []\n for iseq in range(num_sequences):\n top_inds = list(reversed(scores[iseq].argsort()))\n result.append(\n {\n \"sequence\": sequences if isinstance(sequences, str) else sequences[iseq],\n \"labels\": [candidate_labels[i] for i in top_inds],\n \"scores\": scores[iseq][top_inds].tolist(),\n }\n )\n\n if len(result) == 1:\n return result[0]\n return result\n", "path": "src/transformers/pipelines/zero_shot_classification.py"}], "after_files": [{"content": "from typing import List, Union\n\nimport numpy as np\n\nfrom ..file_utils import add_end_docstrings, is_torch_available\nfrom ..tokenization_utils import TruncationStrategy\nfrom ..utils import logging\nfrom .base import PIPELINE_INIT_ARGS, ArgumentHandler, Pipeline\n\n\nif is_torch_available():\n import torch\n\nlogger = logging.get_logger(__name__)\n\n\nclass ZeroShotClassificationArgumentHandler(ArgumentHandler):\n \"\"\"\n Handles arguments for zero-shot for text classification by turning each possible label into an NLI\n premise/hypothesis pair.\n \"\"\"\n\n def _parse_labels(self, labels):\n if isinstance(labels, str):\n labels = [label.strip() for label in labels.split(\",\")]\n return labels\n\n def __call__(self, sequences, labels, hypothesis_template):\n if len(labels) == 0 or len(sequences) == 0:\n raise ValueError(\"You must include at least one label and at least one sequence.\")\n if hypothesis_template.format(labels[0]) == hypothesis_template:\n raise ValueError(\n (\n 'The provided hypothesis_template \"{}\" was not able to be formatted with the target labels. 
'\n \"Make sure the passed template includes formatting syntax such as {{}} where the label should go.\"\n ).format(hypothesis_template)\n )\n\n if isinstance(sequences, str):\n sequences = [sequences]\n labels = self._parse_labels(labels)\n\n sequence_pairs = []\n for sequence in sequences:\n sequence_pairs.extend([[sequence, hypothesis_template.format(label)] for label in labels])\n\n return sequence_pairs\n\n\n@add_end_docstrings(PIPELINE_INIT_ARGS)\nclass ZeroShotClassificationPipeline(Pipeline):\n \"\"\"\n NLI-based zero-shot classification pipeline using a :obj:`ModelForSequenceClassification` trained on NLI (natural\n language inference) tasks.\n\n Any combination of sequences and labels can be passed and each combination will be posed as a premise/hypothesis\n pair and passed to the pretrained model. Then, the logit for `entailment` is taken as the logit for the candidate\n label being valid. Any NLI model can be used, but the id of the `entailment` label must be included in the model\n config's :attr:`~transformers.PretrainedConfig.label2id`.\n\n This NLI pipeline can currently be loaded from :func:`~transformers.pipeline` using the following task identifier:\n :obj:`\"zero-shot-classification\"`.\n\n The models that this pipeline can use are models that have been fine-tuned on an NLI task. See the up-to-date list\n of available models on `huggingface.co/models <https://huggingface.co/models?search=nli>`__.\n \"\"\"\n\n def __init__(self, args_parser=ZeroShotClassificationArgumentHandler(), *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._args_parser = args_parser\n if self.entailment_id == -1:\n logger.warning(\n \"Failed to determine 'entailment' label id from the label2id mapping in the model config. Setting to \"\n \"-1. Define a descriptive label2id mapping in the model config to ensure correct outputs.\"\n )\n\n @property\n def entailment_id(self):\n for label, ind in self.model.config.label2id.items():\n if label.lower().startswith(\"entail\"):\n return ind\n return -1\n\n def _parse_and_tokenize(\n self,\n sequences,\n candidate_labels,\n hypothesis_template,\n padding=True,\n add_special_tokens=True,\n truncation=TruncationStrategy.ONLY_FIRST,\n **kwargs\n ):\n \"\"\"\n Parse arguments and tokenize only_first so that hypothesis (label) is not truncated\n \"\"\"\n sequence_pairs = self._args_parser(sequences, candidate_labels, hypothesis_template)\n return_tensors = self.framework\n if getattr(self.tokenizer, \"pad_token\", None) is None:\n # XXX some tokenizers do not have a padding token, we use simple lists\n # and no padding then\n logger.warning(\"The tokenizer {self.tokenizer} does not have a pad token, we're not running it as a batch\")\n padding = False\n inputs = []\n for sequence_pair in sequence_pairs:\n model_input = self.tokenizer(\n text=sequence_pair[0],\n text_pair=sequence_pair[1],\n add_special_tokens=add_special_tokens,\n return_tensors=return_tensors,\n padding=padding,\n truncation=truncation,\n )\n inputs.append(model_input)\n else:\n try:\n inputs = self.tokenizer(\n sequence_pairs,\n add_special_tokens=add_special_tokens,\n return_tensors=return_tensors,\n padding=padding,\n truncation=truncation,\n )\n except Exception as e:\n if \"too short\" in str(e):\n # tokenizers might yell that we want to truncate\n # to a value that is not even reached by the input.\n # In that case we don't want to truncate.\n # It seems there's not a really better way to catch that\n # exception.\n\n inputs = self.tokenizer(\n sequence_pairs,\n 
add_special_tokens=add_special_tokens,\n return_tensors=return_tensors,\n padding=padding,\n truncation=TruncationStrategy.DO_NOT_TRUNCATE,\n )\n else:\n raise e\n\n return inputs\n\n def _forward(self, inputs, return_tensors=False):\n \"\"\"\n Internal framework specific forward dispatching\n\n Args:\n inputs: dict holding all the keyword arguments for required by the model forward method.\n return_tensors: Whether to return native framework (pt/tf) tensors rather than numpy array\n\n Returns:\n Numpy array\n \"\"\"\n # Encode for forward\n with self.device_placement():\n if self.framework == \"tf\":\n if isinstance(inputs, list):\n predictions = []\n for input_ in inputs:\n prediction = self.model(input_.data, training=False)[0]\n predictions.append(prediction)\n else:\n predictions = self.model(inputs.data, training=False)[0]\n else:\n with torch.no_grad():\n if isinstance(inputs, list):\n predictions = []\n for input_ in inputs:\n model_input = self.ensure_tensor_on_device(**input_)\n prediction = self.model(**model_input)[0].cpu()\n predictions.append(prediction)\n\n else:\n inputs = self.ensure_tensor_on_device(**inputs)\n predictions = self.model(**inputs)[0].cpu()\n\n if return_tensors:\n return predictions\n else:\n if isinstance(predictions, list):\n predictions = np.array([p.numpy() for p in predictions])\n else:\n predictions = predictions.numpy()\n return predictions\n\n def __call__(\n self,\n sequences: Union[str, List[str]],\n candidate_labels,\n hypothesis_template=\"This example is {}.\",\n multi_label=False,\n **kwargs,\n ):\n \"\"\"\n Classify the sequence(s) given as inputs. See the :obj:`~transformers.ZeroShotClassificationPipeline`\n documentation for more information.\n\n Args:\n sequences (:obj:`str` or :obj:`List[str]`):\n The sequence(s) to classify, will be truncated if the model input is too large.\n candidate_labels (:obj:`str` or :obj:`List[str]`):\n The set of possible class labels to classify each sequence into. Can be a single label, a string of\n comma-separated labels, or a list of labels.\n hypothesis_template (:obj:`str`, `optional`, defaults to :obj:`\"This example is {}.\"`):\n The template used to turn each label into an NLI-style hypothesis. This template must include a {} or\n similar syntax for the candidate label to be inserted into the template. For example, the default\n template is :obj:`\"This example is {}.\"` With the candidate label :obj:`\"sports\"`, this would be fed\n into the model like :obj:`\"<cls> sequence to classify <sep> This example is sports . <sep>\"`. The\n default template works well in many cases, but it may be worthwhile to experiment with different\n templates depending on the task setting.\n multi_label (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether or not multiple candidate labels can be true. If :obj:`False`, the scores are normalized such\n that the sum of the label likelihoods for each sequence is 1. If :obj:`True`, the labels are considered\n independent and probabilities are normalized for each candidate by doing a softmax of the entailment\n score vs. 
the contradiction score.\n\n Return:\n A :obj:`dict` or a list of :obj:`dict`: Each result comes as a dictionary with the following keys:\n\n - **sequence** (:obj:`str`) -- The sequence for which this is the output.\n - **labels** (:obj:`List[str]`) -- The labels sorted by order of likelihood.\n - **scores** (:obj:`List[float]`) -- The probabilities for each of the labels.\n \"\"\"\n if \"multi_class\" in kwargs and kwargs[\"multi_class\"] is not None:\n multi_label = kwargs.pop(\"multi_class\")\n logger.warning(\n \"The `multi_class` argument has been deprecated and renamed to `multi_label`. \"\n \"`multi_class` will be removed in a future version of Transformers.\"\n )\n\n if sequences and isinstance(sequences, str):\n sequences = [sequences]\n\n outputs = super().__call__(sequences, candidate_labels, hypothesis_template)\n if isinstance(outputs, list):\n # XXX: Some tokenizers cannot handle batching because they don't\n # have pad_token, so outputs will be a list, however, because outputs\n # is only n logits and sequence_length is not present anymore, we\n # can recreate a tensor out of outputs.\n outputs = np.array(outputs)\n num_sequences = len(sequences)\n candidate_labels = self._args_parser._parse_labels(candidate_labels)\n reshaped_outputs = outputs.reshape((num_sequences, len(candidate_labels), -1))\n\n if len(candidate_labels) == 1:\n multi_label = True\n\n if not multi_label:\n # softmax the \"entailment\" logits over all candidate labels\n entail_logits = reshaped_outputs[..., self.entailment_id]\n scores = np.exp(entail_logits) / np.exp(entail_logits).sum(-1, keepdims=True)\n else:\n # softmax over the entailment vs. contradiction dim for each label independently\n entailment_id = self.entailment_id\n contradiction_id = -1 if entailment_id == 0 else 0\n entail_contr_logits = reshaped_outputs[..., [contradiction_id, entailment_id]]\n scores = np.exp(entail_contr_logits) / np.exp(entail_contr_logits).sum(-1, keepdims=True)\n scores = scores[..., 1]\n\n result = []\n for iseq in range(num_sequences):\n top_inds = list(reversed(scores[iseq].argsort()))\n result.append(\n {\n \"sequence\": sequences if isinstance(sequences, str) else sequences[iseq],\n \"labels\": [candidate_labels[i] for i in top_inds],\n \"scores\": scores[iseq][top_inds].tolist(),\n }\n )\n\n if len(result) == 1:\n return result[0]\n return result\n", "path": "src/transformers/pipelines/zero_shot_classification.py"}]} | 3,448 | 391 |
gh_patches_debug_38885 | rasdani/github-patches | git_diff | scikit-image__scikit-image-4471 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
profile_line gives wrong output for a binary mask
## Description
As of now (skimage 0.12.3), profile_line sometimes returns wrong output for binary masks (np.array(dtype=bool)); see below for an example. If the image is cast to uint8 first, it returns the expected result.
A possible patch might even be to just specify the allowed dtypes in the docstring.
## Way to reproduce
``` python
from numpy import meshgrid, pi, cos, sin, all, uint8
from skimage.measure import profile_line
shape = (200, 200)
center_x, center_y = (140, 150)
radius = 20
x, y = meshgrid(range(shape[1]), range(shape[0]))
mask = (y - center_y)**2 + (x - center_x)**2 < radius**2
src = (center_y, center_x)
phi = 4*pi/9.
dy = 31 * cos(phi)
dx = 31 * sin(phi)
dst = (center_y + dy, center_x + dx)
profile_uint8 = profile_line(mask.astype(uint8), src, dst)
print(profile_uint8)
assert all(profile_uint8[:radius] == 1)
profile_bool = profile_line(mask, src, dst)
print(profile_bool)
assert all(profile_bool[:radius] == 1) # Fails
assert all(profile_bool == profile_uint8) # Fails
```
The output is (there is a zero in the middle of the profile_line output):
```
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. (ZERO HERE?!) 1. 1.
1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-24-1a94903a5586> in <module>()
19 profile_bool = profile_line(mask, src, dst)
     20 print(profile_bool)
---> 21 assert all(profile_bool[:radius] == 1) # Fails
22
23 assert all(profile_bool == profile_uint8) # Fails
AssertionError:
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/measure/profile.py`
Content:
```
1 import numpy as np
2 from scipy import ndimage as ndi
3
4
5 def profile_line(image, src, dst, linewidth=1,
6 order=1, mode='constant', cval=0.0,
7 *, reduce_func=np.mean):
8 """Return the intensity profile of an image measured along a scan line.
9
10 Parameters
11 ----------
12 image : numeric array, shape (M, N[, C])
13 The image, either grayscale (2D array) or multichannel
14 (3D array, where the final axis contains the channel
15 information).
16 src : 2-tuple of numeric scalar (float or int)
17 The start point of the scan line.
18 dst : 2-tuple of numeric scalar (float or int)
19 The end point of the scan line. The destination point is *included*
20 in the profile, in contrast to standard numpy indexing.
21 linewidth : int, optional
22 Width of the scan, perpendicular to the line
23 order : int in {0, 1, 2, 3, 4, 5}, optional
24 The order of the spline interpolation to compute image values at
25 non-integer coordinates. 0 means nearest-neighbor interpolation.
26 mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional
27 How to compute any values falling outside of the image.
28 cval : float, optional
29 If `mode` is 'constant', what constant value to use outside the image.
30 reduce_func : callable, optional
31 Function used to calculate the aggregation of pixel values
32 perpendicular to the profile_line direction when `linewidth` > 1.
33 If set to None the unreduced array will be returned.
34
35 Returns
36 -------
37 return_value : array
38 The intensity profile along the scan line. The length of the profile
39 is the ceil of the computed length of the scan line.
40
41 Examples
42 --------
43 >>> x = np.array([[1, 1, 1, 2, 2, 2]])
44 >>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])
45 >>> img
46 array([[0, 0, 0, 0, 0, 0],
47 [1, 1, 1, 2, 2, 2],
48 [1, 1, 1, 2, 2, 2],
49 [1, 1, 1, 2, 2, 2],
50 [0, 0, 0, 0, 0, 0]])
51 >>> profile_line(img, (2, 1), (2, 4))
52 array([1., 1., 2., 2.])
53 >>> profile_line(img, (1, 0), (1, 6), cval=4)
54 array([1., 1., 1., 2., 2., 2., 4.])
55
56 The destination point is included in the profile, in contrast to
57 standard numpy indexing.
58 For example:
59
60 >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds
61 array([1., 1., 1., 2., 2., 2., 0.])
62 >>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row
63 array([1., 1., 1., 2., 2., 2.])
64
65 For different reduce_func inputs:
66
67 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)
68 array([0.66666667, 0.66666667, 0.66666667, 1.33333333])
69 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)
70 array([1, 1, 1, 2])
71 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)
72 array([2, 2, 2, 4])
73
74 The unreduced array will be returned when `reduce_func` is None or when
75 `reduce_func` acts on each pixel value individually.
76
77 >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,
78 ... reduce_func=None)
79 array([[1, 1, 2],
80 [1, 1, 2],
81 [1, 1, 2],
82 [0, 0, 0]])
83 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)
84 array([[1. , 1. , 0. ],
85 [1. , 1. , 0. ],
86 [1. , 1. , 0. ],
87 [1.41421356, 1.41421356, 0. ]])
88 """
89 perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)
90 if image.ndim == 3:
91 pixels = [ndi.map_coordinates(image[..., i], perp_lines,
92 order=order, mode=mode, cval=cval)
93 for i in range(image.shape[2])]
94 pixels = np.transpose(np.asarray(pixels), (1, 2, 0))
95 else:
96 pixels = ndi.map_coordinates(image, perp_lines,
97 order=order, mode=mode, cval=cval)
98 # The outputted array with reduce_func=None gives an array where the
99 # row values (axis=1) are flipped. Here, we make this consistent.
100 pixels = np.flip(pixels, axis=1)
101
102 if reduce_func is None:
103 intensities = pixels
104 else:
105 try:
106 intensities = reduce_func(pixels, axis=1)
107 except TypeError: # function doesn't allow axis kwarg
108 intensities = np.apply_along_axis(reduce_func, arr=pixels, axis=1)
109
110 return intensities
111
112
113 def _line_profile_coordinates(src, dst, linewidth=1):
114 """Return the coordinates of the profile of an image along a scan line.
115
116 Parameters
117 ----------
118 src : 2-tuple of numeric scalar (float or int)
119 The start point of the scan line.
120 dst : 2-tuple of numeric scalar (float or int)
121 The end point of the scan line.
122 linewidth : int, optional
123 Width of the scan, perpendicular to the line
124
125 Returns
126 -------
127 coords : array, shape (2, N, C), float
128 The coordinates of the profile along the scan line. The length of the
129 profile is the ceil of the computed length of the scan line.
130
131 Notes
132 -----
133 This is a utility method meant to be used internally by skimage functions.
134 The destination point is included in the profile, in contrast to
135 standard numpy indexing.
136 """
137 src_row, src_col = src = np.asarray(src, dtype=float)
138 dst_row, dst_col = dst = np.asarray(dst, dtype=float)
139 d_row, d_col = dst - src
140 theta = np.arctan2(d_row, d_col)
141
142 length = int(np.ceil(np.hypot(d_row, d_col) + 1))
143 # we add one above because we include the last point in the profile
144 # (in contrast to standard numpy indexing)
145 line_col = np.linspace(src_col, dst_col, length)
146 line_row = np.linspace(src_row, dst_row, length)
147
148 # we subtract 1 from linewidth to change from pixel-counting
149 # (make this line 3 pixels wide) to point distances (the
150 # distance between pixel centers)
151 col_width = (linewidth - 1) * np.sin(-theta) / 2
152 row_width = (linewidth - 1) * np.cos(theta) / 2
153 perp_rows = np.array([np.linspace(row_i - row_width, row_i + row_width,
154 linewidth) for row_i in line_row])
155 perp_cols = np.array([np.linspace(col_i - col_width, col_i + col_width,
156 linewidth) for col_i in line_col])
157 return np.array([perp_rows, perp_cols])
158
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/measure/profile.py b/skimage/measure/profile.py
--- a/skimage/measure/profile.py
+++ b/skimage/measure/profile.py
@@ -1,9 +1,10 @@
+from warnings import warn
import numpy as np
from scipy import ndimage as ndi
def profile_line(image, src, dst, linewidth=1,
- order=1, mode='constant', cval=0.0,
+ order=None, mode='constant', cval=0.0,
*, reduce_func=np.mean):
"""Return the intensity profile of an image measured along a scan line.
@@ -21,8 +22,9 @@
linewidth : int, optional
Width of the scan, perpendicular to the line
order : int in {0, 1, 2, 3, 4, 5}, optional
- The order of the spline interpolation to compute image values at
- non-integer coordinates. 0 means nearest-neighbor interpolation.
+ The order of the spline interpolation, default is 0 if
+ image.dtype is bool and 1 otherwise. The order has to be in
+ the range 0-5. See `skimage.transform.warp` for detail.
mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional
How to compute any values falling outside of the image.
cval : float, optional
@@ -86,14 +88,26 @@
[1. , 1. , 0. ],
[1.41421356, 1.41421356, 0. ]])
"""
+ if order is None:
+ order = 0 if image.dtype == bool else 1
+
+ if image.dtype == bool and order != 0:
+ warn("Input image dtype is bool. Interpolation is not defined "
+ "with bool data type. Please set order to 0 or explicitely "
+ "cast input image to another data type. Starting from version "
+ "0.19 a ValueError will be raised instead of this warning.",
+ FutureWarning, stacklevel=2)
+
perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)
if image.ndim == 3:
pixels = [ndi.map_coordinates(image[..., i], perp_lines,
- order=order, mode=mode, cval=cval)
- for i in range(image.shape[2])]
+ prefilter=order > 1,
+ order=order, mode=mode,
+ cval=cval) for i in
+ range(image.shape[2])]
pixels = np.transpose(np.asarray(pixels), (1, 2, 0))
else:
- pixels = ndi.map_coordinates(image, perp_lines,
+ pixels = ndi.map_coordinates(image, perp_lines, prefilter=order > 1,
order=order, mode=mode, cval=cval)
# The outputted array with reduce_func=None gives an array where the
# row values (axis=1) are flipped. Here, we make this consistent.
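In short, the patch makes `order` default to 0 (nearest-neighbour) for boolean images, emits a `FutureWarning` if a non-zero order is forced on bool input, and only applies the spline prefilter when `order > 1`. A small sketch of the intended behaviour, assuming a patched scikit-image; the coordinates mirror the reproduction in the issue:

```python
# Sketch only: the boolean profile should no longer drop a value inside the disk.
import numpy as np
from skimage.measure import profile_line

y, x = np.mgrid[0:200, 0:200]
mask = (y - 150) ** 2 + (x - 140) ** 2 < 20 ** 2   # boolean disk of radius 20

prof = profile_line(mask, (150, 140), (150, 170))  # order now defaults to 0 for bool
print(prof[:20])  # expected: all ones inside the disk, no spurious zero

# Asking for interpolation on a bool image now warns instead of silently truncating.
profile_line(mask, (150, 140), (150, 170), order=1)  # FutureWarning
```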
| {"golden_diff": "diff --git a/skimage/measure/profile.py b/skimage/measure/profile.py\n--- a/skimage/measure/profile.py\n+++ b/skimage/measure/profile.py\n@@ -1,9 +1,10 @@\n+from warnings import warn\n import numpy as np\n from scipy import ndimage as ndi\n \n \n def profile_line(image, src, dst, linewidth=1,\n- order=1, mode='constant', cval=0.0,\n+ order=None, mode='constant', cval=0.0,\n *, reduce_func=np.mean):\n \"\"\"Return the intensity profile of an image measured along a scan line.\n \n@@ -21,8 +22,9 @@\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n order : int in {0, 1, 2, 3, 4, 5}, optional\n- The order of the spline interpolation to compute image values at\n- non-integer coordinates. 0 means nearest-neighbor interpolation.\n+ The order of the spline interpolation, default is 0 if\n+ image.dtype is bool and 1 otherwise. The order has to be in\n+ the range 0-5. See `skimage.transform.warp` for detail.\n mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional\n How to compute any values falling outside of the image.\n cval : float, optional\n@@ -86,14 +88,26 @@\n [1. , 1. , 0. ],\n [1.41421356, 1.41421356, 0. ]])\n \"\"\"\n+ if order is None:\n+ order = 0 if image.dtype == bool else 1\n+\n+ if image.dtype == bool and order != 0:\n+ warn(\"Input image dtype is bool. Interpolation is not defined \"\n+ \"with bool data type. Please set order to 0 or explicitely \"\n+ \"cast input image to another data type. Starting from version \"\n+ \"0.19 a ValueError will be raised instead of this warning.\",\n+ FutureWarning, stacklevel=2)\n+\n perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)\n if image.ndim == 3:\n pixels = [ndi.map_coordinates(image[..., i], perp_lines,\n- order=order, mode=mode, cval=cval)\n- for i in range(image.shape[2])]\n+ prefilter=order > 1,\n+ order=order, mode=mode,\n+ cval=cval) for i in\n+ range(image.shape[2])]\n pixels = np.transpose(np.asarray(pixels), (1, 2, 0))\n else:\n- pixels = ndi.map_coordinates(image, perp_lines,\n+ pixels = ndi.map_coordinates(image, perp_lines, prefilter=order > 1,\n order=order, mode=mode, cval=cval)\n # The outputted array with reduce_func=None gives an array where the\n # row values (axis=1) are flipped. Here, we make this consistent.\n", "issue": "profile_line gives wrong output for a binary mask\n## Description\n\nAs of now (skimage 0.12.3), profile_line sometimes returns wrong output for binary masks (np.array(dtype=bool)). See below for an example. If the image is cast to an uint8 first, it returns the expected result.\n\nA possible patch might be even just specifying the allowed dtypes in the docstring.\n## Way to reproduce\n\n``` python\nfrom numpy import meshgrid, pi, cos, sin, all, uint8\nfrom skimage.measure import profile_line\n\nshape = (200, 200)\ncenter_x, center_y = (140, 150)\nradius = 20\nx, y = meshgrid(range(shape[1]), range(shape[0]))\nmask = (y - center_y)**2 + (x - center_x)**2 < radius**2\nsrc = (center_y, center_x)\nphi = 4*pi/9.\ndy = 31 * cos(phi)\ndx = 31 * sin(phi)\ndst = (center_y + dy, center_x + dx)\n\nprofile_uint8 = profile_line(mask.astype(uint8), src, dst)\nprint profile_uint8\nassert all(profile_uint8[:radius] == 1)\n\nprofile_bool = profile_line(mask, src, dst)\nprint profile_bool\nassert all(profile_bool[:radius] == 1) # Fails\n\nassert all(profile_bool == profile_uint8) # Fails\n```\n\nThe output is (there is a zero in the middle of the profile_line output):\n\n```\n[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.\n 1. 1. 
1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. (ZERO HERE?!) 1. 1.\n 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n\n---------------------------------------------------------------------------\nAssertionError Traceback (most recent call last)\n<ipython-input-24-1a94903a5586> in <module>()\n 19 profile_bool = profile_line(mask, src, dst)\n 20 print profile_bool\n---> 21 assert all(profile_bool[:radius] == 1) # Fails\n 22 \n 23 assert all(profile_bool == profile_uint8) # Fails\n\nAssertionError: \n```\n\n", "before_files": [{"content": "import numpy as np\nfrom scipy import ndimage as ndi\n\n\ndef profile_line(image, src, dst, linewidth=1,\n order=1, mode='constant', cval=0.0,\n *, reduce_func=np.mean):\n \"\"\"Return the intensity profile of an image measured along a scan line.\n\n Parameters\n ----------\n image : numeric array, shape (M, N[, C])\n The image, either grayscale (2D array) or multichannel\n (3D array, where the final axis contains the channel\n information).\n src : 2-tuple of numeric scalar (float or int)\n The start point of the scan line.\n dst : 2-tuple of numeric scalar (float or int)\n The end point of the scan line. The destination point is *included*\n in the profile, in contrast to standard numpy indexing.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n order : int in {0, 1, 2, 3, 4, 5}, optional\n The order of the spline interpolation to compute image values at\n non-integer coordinates. 0 means nearest-neighbor interpolation.\n mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional\n How to compute any values falling outside of the image.\n cval : float, optional\n If `mode` is 'constant', what constant value to use outside the image.\n reduce_func : callable, optional\n Function used to calculate the aggregation of pixel values\n perpendicular to the profile_line direction when `linewidth` > 1.\n If set to None the unreduced array will be returned.\n\n Returns\n -------\n return_value : array\n The intensity profile along the scan line. The length of the profile\n is the ceil of the computed length of the scan line.\n\n Examples\n --------\n >>> x = np.array([[1, 1, 1, 2, 2, 2]])\n >>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])\n >>> img\n array([[0, 0, 0, 0, 0, 0],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [0, 0, 0, 0, 0, 0]])\n >>> profile_line(img, (2, 1), (2, 4))\n array([1., 1., 2., 2.])\n >>> profile_line(img, (1, 0), (1, 6), cval=4)\n array([1., 1., 1., 2., 2., 2., 4.])\n\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n For example:\n\n >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds\n array([1., 1., 1., 2., 2., 2., 0.])\n >>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row\n array([1., 1., 1., 2., 2., 2.])\n\n For different reduce_func inputs:\n\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)\n array([0.66666667, 0.66666667, 0.66666667, 1.33333333])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)\n array([1, 1, 1, 2])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)\n array([2, 2, 2, 4])\n\n The unreduced array will be returned when `reduce_func` is None or when\n `reduce_func` acts on each pixel value individually.\n\n >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,\n ... 
reduce_func=None)\n array([[1, 1, 2],\n [1, 1, 2],\n [1, 1, 2],\n [0, 0, 0]])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)\n array([[1. , 1. , 0. ],\n [1. , 1. , 0. ],\n [1. , 1. , 0. ],\n [1.41421356, 1.41421356, 0. ]])\n \"\"\"\n perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)\n if image.ndim == 3:\n pixels = [ndi.map_coordinates(image[..., i], perp_lines,\n order=order, mode=mode, cval=cval)\n for i in range(image.shape[2])]\n pixels = np.transpose(np.asarray(pixels), (1, 2, 0))\n else:\n pixels = ndi.map_coordinates(image, perp_lines,\n order=order, mode=mode, cval=cval)\n # The outputted array with reduce_func=None gives an array where the\n # row values (axis=1) are flipped. Here, we make this consistent.\n pixels = np.flip(pixels, axis=1)\n\n if reduce_func is None:\n intensities = pixels\n else:\n try:\n intensities = reduce_func(pixels, axis=1)\n except TypeError: # function doesn't allow axis kwarg\n intensities = np.apply_along_axis(reduce_func, arr=pixels, axis=1)\n\n return intensities\n\n\ndef _line_profile_coordinates(src, dst, linewidth=1):\n \"\"\"Return the coordinates of the profile of an image along a scan line.\n\n Parameters\n ----------\n src : 2-tuple of numeric scalar (float or int)\n The start point of the scan line.\n dst : 2-tuple of numeric scalar (float or int)\n The end point of the scan line.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n\n Returns\n -------\n coords : array, shape (2, N, C), float\n The coordinates of the profile along the scan line. The length of the\n profile is the ceil of the computed length of the scan line.\n\n Notes\n -----\n This is a utility method meant to be used internally by skimage functions.\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n \"\"\"\n src_row, src_col = src = np.asarray(src, dtype=float)\n dst_row, dst_col = dst = np.asarray(dst, dtype=float)\n d_row, d_col = dst - src\n theta = np.arctan2(d_row, d_col)\n\n length = int(np.ceil(np.hypot(d_row, d_col) + 1))\n # we add one above because we include the last point in the profile\n # (in contrast to standard numpy indexing)\n line_col = np.linspace(src_col, dst_col, length)\n line_row = np.linspace(src_row, dst_row, length)\n\n # we subtract 1 from linewidth to change from pixel-counting\n # (make this line 3 pixels wide) to point distances (the\n # distance between pixel centers)\n col_width = (linewidth - 1) * np.sin(-theta) / 2\n row_width = (linewidth - 1) * np.cos(theta) / 2\n perp_rows = np.array([np.linspace(row_i - row_width, row_i + row_width,\n linewidth) for row_i in line_row])\n perp_cols = np.array([np.linspace(col_i - col_width, col_i + col_width,\n linewidth) for col_i in line_col])\n return np.array([perp_rows, perp_cols])\n", "path": "skimage/measure/profile.py"}], "after_files": [{"content": "from warnings import warn\nimport numpy as np\nfrom scipy import ndimage as ndi\n\n\ndef profile_line(image, src, dst, linewidth=1,\n order=None, mode='constant', cval=0.0,\n *, reduce_func=np.mean):\n \"\"\"Return the intensity profile of an image measured along a scan line.\n\n Parameters\n ----------\n image : numeric array, shape (M, N[, C])\n The image, either grayscale (2D array) or multichannel\n (3D array, where the final axis contains the channel\n information).\n src : 2-tuple of numeric scalar (float or int)\n The start point of the scan line.\n dst : 2-tuple of numeric scalar (float or int)\n The end point of the scan line. 
The destination point is *included*\n in the profile, in contrast to standard numpy indexing.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n order : int in {0, 1, 2, 3, 4, 5}, optional\n The order of the spline interpolation, default is 0 if\n image.dtype is bool and 1 otherwise. The order has to be in\n the range 0-5. See `skimage.transform.warp` for detail.\n mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional\n How to compute any values falling outside of the image.\n cval : float, optional\n If `mode` is 'constant', what constant value to use outside the image.\n reduce_func : callable, optional\n Function used to calculate the aggregation of pixel values\n perpendicular to the profile_line direction when `linewidth` > 1.\n If set to None the unreduced array will be returned.\n\n Returns\n -------\n return_value : array\n The intensity profile along the scan line. The length of the profile\n is the ceil of the computed length of the scan line.\n\n Examples\n --------\n >>> x = np.array([[1, 1, 1, 2, 2, 2]])\n >>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])\n >>> img\n array([[0, 0, 0, 0, 0, 0],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [0, 0, 0, 0, 0, 0]])\n >>> profile_line(img, (2, 1), (2, 4))\n array([1., 1., 2., 2.])\n >>> profile_line(img, (1, 0), (1, 6), cval=4)\n array([1., 1., 1., 2., 2., 2., 4.])\n\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n For example:\n\n >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds\n array([1., 1., 1., 2., 2., 2., 0.])\n >>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row\n array([1., 1., 1., 2., 2., 2.])\n\n For different reduce_func inputs:\n\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)\n array([0.66666667, 0.66666667, 0.66666667, 1.33333333])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)\n array([1, 1, 1, 2])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)\n array([2, 2, 2, 4])\n\n The unreduced array will be returned when `reduce_func` is None or when\n `reduce_func` acts on each pixel value individually.\n\n >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,\n ... reduce_func=None)\n array([[1, 1, 2],\n [1, 1, 2],\n [1, 1, 2],\n [0, 0, 0]])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)\n array([[1. , 1. , 0. ],\n [1. , 1. , 0. ],\n [1. , 1. , 0. ],\n [1.41421356, 1.41421356, 0. ]])\n \"\"\"\n if order is None:\n order = 0 if image.dtype == bool else 1\n\n if image.dtype == bool and order != 0:\n warn(\"Input image dtype is bool. Interpolation is not defined \"\n \"with bool data type. Please set order to 0 or explicitely \"\n \"cast input image to another data type. Starting from version \"\n \"0.19 a ValueError will be raised instead of this warning.\",\n FutureWarning, stacklevel=2)\n\n perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)\n if image.ndim == 3:\n pixels = [ndi.map_coordinates(image[..., i], perp_lines,\n prefilter=order > 1,\n order=order, mode=mode,\n cval=cval) for i in\n range(image.shape[2])]\n pixels = np.transpose(np.asarray(pixels), (1, 2, 0))\n else:\n pixels = ndi.map_coordinates(image, perp_lines, prefilter=order > 1,\n order=order, mode=mode, cval=cval)\n # The outputted array with reduce_func=None gives an array where the\n # row values (axis=1) are flipped. 
Here, we make this consistent.\n pixels = np.flip(pixels, axis=1)\n\n if reduce_func is None:\n intensities = pixels\n else:\n try:\n intensities = reduce_func(pixels, axis=1)\n except TypeError: # function doesn't allow axis kwarg\n intensities = np.apply_along_axis(reduce_func, arr=pixels, axis=1)\n\n return intensities\n\n\ndef _line_profile_coordinates(src, dst, linewidth=1):\n \"\"\"Return the coordinates of the profile of an image along a scan line.\n\n Parameters\n ----------\n src : 2-tuple of numeric scalar (float or int)\n The start point of the scan line.\n dst : 2-tuple of numeric scalar (float or int)\n The end point of the scan line.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n\n Returns\n -------\n coords : array, shape (2, N, C), float\n The coordinates of the profile along the scan line. The length of the\n profile is the ceil of the computed length of the scan line.\n\n Notes\n -----\n This is a utility method meant to be used internally by skimage functions.\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n \"\"\"\n src_row, src_col = src = np.asarray(src, dtype=float)\n dst_row, dst_col = dst = np.asarray(dst, dtype=float)\n d_row, d_col = dst - src\n theta = np.arctan2(d_row, d_col)\n\n length = int(np.ceil(np.hypot(d_row, d_col) + 1))\n # we add one above because we include the last point in the profile\n # (in contrast to standard numpy indexing)\n line_col = np.linspace(src_col, dst_col, length)\n line_row = np.linspace(src_row, dst_row, length)\n\n # we subtract 1 from linewidth to change from pixel-counting\n # (make this line 3 pixels wide) to point distances (the\n # distance between pixel centers)\n col_width = (linewidth - 1) * np.sin(-theta) / 2\n row_width = (linewidth - 1) * np.cos(theta) / 2\n perp_rows = np.array([np.linspace(row_i - row_width, row_i + row_width,\n linewidth) for row_i in line_row])\n perp_cols = np.array([np.linspace(col_i - col_width, col_i + col_width,\n linewidth) for col_i in line_col])\n return np.array([perp_rows, perp_cols])\n", "path": "skimage/measure/profile.py"}]} | 3,197 | 707 |
gh_patches_debug_49043 | rasdani/github-patches | git_diff | arviz-devs__arviz-2032 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plot_dot
**Describe the bug**
`plot_dot` figure size doesn't behave the way I expect: when I set `figsize` for an axes to triple the width of a previous one, the resulting plot is not three times the size. There are also some minor bugs where the dots seem to overlap slightly.
**To Reproduce**
```
# imports implied by the original snippet
from scipy import stats
import matplotlib.pyplot as plt
import arviz as az

samples = stats.beta(2,2).rvs(100)
width = 10
fig, ax = plt.subplots(figsize=(width, 10))
az.plot_dot(samples, ax=ax)
ax.set_title(f"Width: {width}")
ax.set_xlim(0,1)
```
Then try this, but see that figure is not three times the width
```
width = 30
fig, ax = plt.subplots(figsize=(width, 10))
az.plot_dot(samples, ax=ax)
ax.set_title(f"Width: {width}")
ax.set_xlim(0,1)
```


**Expected behavior**
Figsize from `plt.subplots` is respected
**Additional context**
Arviz '0.12.0'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/plots/backends/matplotlib/dotplot.py`
Content:
```
1 """Matplotlib dotplot."""
2 import math
3 import warnings
4 import numpy as np
5 import matplotlib.pyplot as plt
6 from matplotlib import _pylab_helpers
7
8 from ...plot_utils import _scale_fig_size
9 from . import backend_kwarg_defaults, create_axes_grid, backend_show
10 from ...plot_utils import plot_point_interval
11 from ...dotplot import wilkinson_algorithm, layout_stacks
12
13
14 def plot_dot(
15 values,
16 binwidth,
17 dotsize,
18 stackratio,
19 hdi_prob,
20 quartiles,
21 rotated,
22 dotcolor,
23 intervalcolor,
24 markersize,
25 markercolor,
26 marker,
27 figsize,
28 linewidth,
29 point_estimate,
30 nquantiles,
31 point_interval,
32 ax,
33 show,
34 backend_kwargs,
35 plot_kwargs,
36 ):
37 """Matplotlib dotplot."""
38 if backend_kwargs is None:
39 backend_kwargs = {}
40
41 backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}
42
43 backend_kwargs.setdefault("figsize", figsize)
44 backend_kwargs["squeeze"] = True
45
46 (figsize, _, _, _, auto_linewidth, auto_markersize) = _scale_fig_size(figsize, None)
47
48 if plot_kwargs is None:
49 plot_kwargs = {}
50 plot_kwargs.setdefault("color", dotcolor)
51
52 if linewidth is None:
53 linewidth = auto_linewidth
54
55 if markersize is None:
56 markersize = auto_markersize
57
58 if ax is None:
59 fig_manager = _pylab_helpers.Gcf.get_active()
60 if fig_manager is not None:
61 ax = fig_manager.canvas.figure.gca()
62 else:
63 _, ax = create_axes_grid(
64 1,
65 backend_kwargs=backend_kwargs,
66 )
67
68 if point_interval:
69 ax = plot_point_interval(
70 ax,
71 values,
72 point_estimate,
73 hdi_prob,
74 quartiles,
75 linewidth,
76 markersize,
77 markercolor,
78 marker,
79 rotated,
80 intervalcolor,
81 "matplotlib",
82 )
83
84 if nquantiles > values.shape[0]:
85 warnings.warn(
86 "nquantiles must be less than or equal to the number of data points", UserWarning
87 )
88 nquantiles = values.shape[0]
89 else:
90 qlist = np.linspace(1 / (2 * nquantiles), 1 - 1 / (2 * nquantiles), nquantiles)
91 values = np.quantile(values, qlist)
92
93 if binwidth is None:
94 binwidth = math.sqrt((values[-1] - values[0] + 1) ** 2 / (2 * nquantiles * np.pi))
95
96 ## Wilkinson's Algorithm
97 stack_locs, stack_count = wilkinson_algorithm(values, binwidth)
98 x, y = layout_stacks(stack_locs, stack_count, binwidth, stackratio, rotated)
99
100 for (x_i, y_i) in zip(x, y):
101 dot = plt.Circle((x_i, y_i), dotsize * binwidth / 2, **plot_kwargs)
102 ax.add_patch(dot)
103
104 if rotated:
105 ax.tick_params(bottom=False, labelbottom=False)
106 else:
107 ax.tick_params(left=False, labelleft=False)
108
109 ax.set_aspect("equal", adjustable="box")
110 ax.autoscale()
111
112 if backend_show(show):
113 plt.show()
114
115 return ax
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/arviz/plots/backends/matplotlib/dotplot.py b/arviz/plots/backends/matplotlib/dotplot.py
--- a/arviz/plots/backends/matplotlib/dotplot.py
+++ b/arviz/plots/backends/matplotlib/dotplot.py
@@ -106,7 +106,7 @@
else:
ax.tick_params(left=False, labelleft=False)
- ax.set_aspect("equal", adjustable="box")
+ ax.set_aspect("equal", adjustable="datalim")
ax.autoscale()
if backend_show(show):
| {"golden_diff": "diff --git a/arviz/plots/backends/matplotlib/dotplot.py b/arviz/plots/backends/matplotlib/dotplot.py\n--- a/arviz/plots/backends/matplotlib/dotplot.py\n+++ b/arviz/plots/backends/matplotlib/dotplot.py\n@@ -106,7 +106,7 @@\n else:\n ax.tick_params(left=False, labelleft=False)\n \n- ax.set_aspect(\"equal\", adjustable=\"box\")\n+ ax.set_aspect(\"equal\", adjustable=\"datalim\")\n ax.autoscale()\n \n if backend_show(show):\n", "issue": "plot_dot \n**Describe the bug**\r\nplotdot fig size doesn't behave the way I expect, in that when I set `figsize` in an axes that triple a previous one its not triple the size. There also are some minor bugs where the dots seem to be overlapping some\r\n\r\n**To Reproduce**\r\n```\r\nsamples = stats.beta(2,2).rvs(100)\r\n\r\nwidth = 10\r\nfig, ax = plt.subplots(figsize=(width, 10))\r\naz.plot_dot(samples, ax=ax)\r\nax.set_title(f\"Width: {width}\")\r\nax.set_xlim(0,1)\r\n```\r\n\r\nThen try this, but see that figure is not three times the width\r\n```\r\nwidth = 30\r\nfig, ax = plt.subplots(figsize=(width, 10))\r\naz.plot_dot(samples, ax=ax)\r\nax.set_title(f\"Width: {width}\")\r\nax.set_xlim(0,1)\r\n```\r\n\r\n\r\n\r\n\r\n\r\n**Expected behavior**\r\nFigsize from `plt.subplots` is respected\r\n\r\n**Additional context**\r\nArviz '0.12.0'\r\n\n", "before_files": [{"content": "\"\"\"Matplotlib dotplot.\"\"\"\nimport math\nimport warnings\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import _pylab_helpers\n\nfrom ...plot_utils import _scale_fig_size\nfrom . import backend_kwarg_defaults, create_axes_grid, backend_show\nfrom ...plot_utils import plot_point_interval\nfrom ...dotplot import wilkinson_algorithm, layout_stacks\n\n\ndef plot_dot(\n values,\n binwidth,\n dotsize,\n stackratio,\n hdi_prob,\n quartiles,\n rotated,\n dotcolor,\n intervalcolor,\n markersize,\n markercolor,\n marker,\n figsize,\n linewidth,\n point_estimate,\n nquantiles,\n point_interval,\n ax,\n show,\n backend_kwargs,\n plot_kwargs,\n):\n \"\"\"Matplotlib dotplot.\"\"\"\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}\n\n backend_kwargs.setdefault(\"figsize\", figsize)\n backend_kwargs[\"squeeze\"] = True\n\n (figsize, _, _, _, auto_linewidth, auto_markersize) = _scale_fig_size(figsize, None)\n\n if plot_kwargs is None:\n plot_kwargs = {}\n plot_kwargs.setdefault(\"color\", dotcolor)\n\n if linewidth is None:\n linewidth = auto_linewidth\n\n if markersize is None:\n markersize = auto_markersize\n\n if ax is None:\n fig_manager = _pylab_helpers.Gcf.get_active()\n if fig_manager is not None:\n ax = fig_manager.canvas.figure.gca()\n else:\n _, ax = create_axes_grid(\n 1,\n backend_kwargs=backend_kwargs,\n )\n\n if point_interval:\n ax = plot_point_interval(\n ax,\n values,\n point_estimate,\n hdi_prob,\n quartiles,\n linewidth,\n markersize,\n markercolor,\n marker,\n rotated,\n intervalcolor,\n \"matplotlib\",\n )\n\n if nquantiles > values.shape[0]:\n warnings.warn(\n \"nquantiles must be less than or equal to the number of data points\", UserWarning\n )\n nquantiles = values.shape[0]\n else:\n qlist = np.linspace(1 / (2 * nquantiles), 1 - 1 / (2 * nquantiles), nquantiles)\n values = np.quantile(values, qlist)\n\n if binwidth is None:\n binwidth = math.sqrt((values[-1] - values[0] + 1) ** 2 / (2 * nquantiles * np.pi))\n\n ## Wilkinson's Algorithm\n stack_locs, stack_count = wilkinson_algorithm(values, binwidth)\n x, y = layout_stacks(stack_locs, stack_count, binwidth, 
stackratio, rotated)\n\n for (x_i, y_i) in zip(x, y):\n dot = plt.Circle((x_i, y_i), dotsize * binwidth / 2, **plot_kwargs)\n ax.add_patch(dot)\n\n if rotated:\n ax.tick_params(bottom=False, labelbottom=False)\n else:\n ax.tick_params(left=False, labelleft=False)\n\n ax.set_aspect(\"equal\", adjustable=\"box\")\n ax.autoscale()\n\n if backend_show(show):\n plt.show()\n\n return ax\n", "path": "arviz/plots/backends/matplotlib/dotplot.py"}], "after_files": [{"content": "\"\"\"Matplotlib dotplot.\"\"\"\nimport math\nimport warnings\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import _pylab_helpers\n\nfrom ...plot_utils import _scale_fig_size\nfrom . import backend_kwarg_defaults, create_axes_grid, backend_show\nfrom ...plot_utils import plot_point_interval\nfrom ...dotplot import wilkinson_algorithm, layout_stacks\n\n\ndef plot_dot(\n values,\n binwidth,\n dotsize,\n stackratio,\n hdi_prob,\n quartiles,\n rotated,\n dotcolor,\n intervalcolor,\n markersize,\n markercolor,\n marker,\n figsize,\n linewidth,\n point_estimate,\n nquantiles,\n point_interval,\n ax,\n show,\n backend_kwargs,\n plot_kwargs,\n):\n \"\"\"Matplotlib dotplot.\"\"\"\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}\n\n backend_kwargs.setdefault(\"figsize\", figsize)\n backend_kwargs[\"squeeze\"] = True\n\n (figsize, _, _, _, auto_linewidth, auto_markersize) = _scale_fig_size(figsize, None)\n\n if plot_kwargs is None:\n plot_kwargs = {}\n plot_kwargs.setdefault(\"color\", dotcolor)\n\n if linewidth is None:\n linewidth = auto_linewidth\n\n if markersize is None:\n markersize = auto_markersize\n\n if ax is None:\n fig_manager = _pylab_helpers.Gcf.get_active()\n if fig_manager is not None:\n ax = fig_manager.canvas.figure.gca()\n else:\n _, ax = create_axes_grid(\n 1,\n backend_kwargs=backend_kwargs,\n )\n\n if point_interval:\n ax = plot_point_interval(\n ax,\n values,\n point_estimate,\n hdi_prob,\n quartiles,\n linewidth,\n markersize,\n markercolor,\n marker,\n rotated,\n intervalcolor,\n \"matplotlib\",\n )\n\n if nquantiles > values.shape[0]:\n warnings.warn(\n \"nquantiles must be less than or equal to the number of data points\", UserWarning\n )\n nquantiles = values.shape[0]\n else:\n qlist = np.linspace(1 / (2 * nquantiles), 1 - 1 / (2 * nquantiles), nquantiles)\n values = np.quantile(values, qlist)\n\n if binwidth is None:\n binwidth = math.sqrt((values[-1] - values[0] + 1) ** 2 / (2 * nquantiles * np.pi))\n\n ## Wilkinson's Algorithm\n stack_locs, stack_count = wilkinson_algorithm(values, binwidth)\n x, y = layout_stacks(stack_locs, stack_count, binwidth, stackratio, rotated)\n\n for (x_i, y_i) in zip(x, y):\n dot = plt.Circle((x_i, y_i), dotsize * binwidth / 2, **plot_kwargs)\n ax.add_patch(dot)\n\n if rotated:\n ax.tick_params(bottom=False, labelbottom=False)\n else:\n ax.tick_params(left=False, labelleft=False)\n\n ax.set_aspect(\"equal\", adjustable=\"datalim\")\n ax.autoscale()\n\n if backend_show(show):\n plt.show()\n\n return ax\n", "path": "arviz/plots/backends/matplotlib/dotplot.py"}]} | 1,564 | 127 |
gh_patches_debug_9828 | rasdani/github-patches | git_diff | secdev__scapy-3473 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
L2TP post_build is broken
### Brief description
l2tp.py's post_build is supposed to update the length. However, it only does this if the current length is None, while the length field is initialized to 0, not None, so the length is never updated.
### Scapy version
2.4.5
### Python version
3.8
### Operating system
Ubuntu 20.04
### Additional environment information
_No response_
### How to reproduce
print( (L2TP(header=['control', 'length'], version=2) / 'blahblah').build() )
### Actual result
b'\xc0\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00blahblah'
### Expected result
b'\xc0\x02\x00\x14\x00\x00\x00\x00\x00\x00\x00\x00blahblah'
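For reference, the expected length bytes `\x00\x14` are 20 decimal: with the control and length bits set, the header built here is six 2-byte fields (flags/version, length, tunnel ID, session ID, Ns, Nr) for 12 bytes, plus the 8-byte payload `blahblah`. In the actual result above, that field stays at `\x00\x00`.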
### Related resources
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scapy/layers/l2tp.py`
Content:
```
1 # This file is part of Scapy
2 # See http://www.secdev.org/projects/scapy for more information
3 # Copyright (C) Philippe Biondi <[email protected]>
4 # This program is published under a GPLv2 license
5
6 """
7 L2TP (Layer 2 Tunneling Protocol) for VPNs.
8
9 [RFC 2661]
10 """
11
12 import struct
13
14 from scapy.packet import Packet, bind_layers, bind_bottom_up
15 from scapy.fields import BitEnumField, ConditionalField, FlagsField, \
16 PadField, ShortField
17 from scapy.layers.inet import UDP
18 from scapy.layers.ppp import PPP
19
20
21 class L2TP(Packet):
22 name = "L2TP"
23 fields_desc = [
24 FlagsField("hdr", 0, 12, ['res00', 'res01', 'res02', 'res03', 'priority', 'offset', # noqa: E501
25 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501
26 BitEnumField("version", 2, 4, {2: 'L2TPv2'}),
27
28 ConditionalField(ShortField("len", 0),
29 lambda pkt: pkt.hdr & 'control+length'),
30 ShortField("tunnel_id", 0),
31 ShortField("session_id", 0),
32 ConditionalField(ShortField("ns", 0),
33 lambda pkt: pkt.hdr & 'sequence+control'),
34 ConditionalField(ShortField("nr", 0),
35 lambda pkt: pkt.hdr & 'sequence+control'),
36 ConditionalField(
37 PadField(ShortField("offset", 0), 4, b"\x00"),
38 lambda pkt: not (pkt.hdr & 'control') and pkt.hdr & 'offset'
39 )
40 ]
41
42 def post_build(self, pkt, pay):
43 if self.len is None and self.hdr & 'control+length':
44 tmp_len = len(pkt) + len(pay)
45 pkt = pkt[:2] + struct.pack("!H", tmp_len) + pkt[4:]
46 return pkt + pay
47
48
49 bind_bottom_up(UDP, L2TP, dport=1701)
50 bind_bottom_up(UDP, L2TP, sport=1701)
51 bind_layers(UDP, L2TP, dport=1701, sport=1701)
52 bind_layers(L2TP, PPP,)
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scapy/layers/l2tp.py b/scapy/layers/l2tp.py
--- a/scapy/layers/l2tp.py
+++ b/scapy/layers/l2tp.py
@@ -25,7 +25,7 @@
'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501
BitEnumField("version", 2, 4, {2: 'L2TPv2'}),
- ConditionalField(ShortField("len", 0),
+ ConditionalField(ShortField("len", None),
lambda pkt: pkt.hdr & 'control+length'),
ShortField("tunnel_id", 0),
ShortField("session_id", 0),
| {"golden_diff": "diff --git a/scapy/layers/l2tp.py b/scapy/layers/l2tp.py\n--- a/scapy/layers/l2tp.py\n+++ b/scapy/layers/l2tp.py\n@@ -25,7 +25,7 @@\n 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501\n BitEnumField(\"version\", 2, 4, {2: 'L2TPv2'}),\n \n- ConditionalField(ShortField(\"len\", 0),\n+ ConditionalField(ShortField(\"len\", None),\n lambda pkt: pkt.hdr & 'control+length'),\n ShortField(\"tunnel_id\", 0),\n ShortField(\"session_id\", 0),\n", "issue": "L2TP post_build is broken\n### Brief description\n\nl2tp.py post_build is supposed to update the length. However, it only does this if current length is None, and the length field is initialized to 0, not None, resulting in the length never being updated. \n\n### Scapy version\n\n2.4.5\n\n### Python version\n\n3.8\n\n### Operating system\n\nUbuntu 20.04\n\n### Additional environment information\n\n_No response_\n\n### How to reproduce\n\nprint( (L2TP(header=['control', 'length'], version=2) / 'blahblah').build() )\r\n\n\n### Actual result\n\nb'\\xc0\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00blahblah'\n\n### Expected result\n\nb'\\xc0\\x02\\x00\\x14\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00blahblah'\n\n### Related resources\n\n_No response_\n", "before_files": [{"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nL2TP (Layer 2 Tunneling Protocol) for VPNs.\n\n[RFC 2661]\n\"\"\"\n\nimport struct\n\nfrom scapy.packet import Packet, bind_layers, bind_bottom_up\nfrom scapy.fields import BitEnumField, ConditionalField, FlagsField, \\\n PadField, ShortField\nfrom scapy.layers.inet import UDP\nfrom scapy.layers.ppp import PPP\n\n\nclass L2TP(Packet):\n name = \"L2TP\"\n fields_desc = [\n FlagsField(\"hdr\", 0, 12, ['res00', 'res01', 'res02', 'res03', 'priority', 'offset', # noqa: E501\n 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501\n BitEnumField(\"version\", 2, 4, {2: 'L2TPv2'}),\n\n ConditionalField(ShortField(\"len\", 0),\n lambda pkt: pkt.hdr & 'control+length'),\n ShortField(\"tunnel_id\", 0),\n ShortField(\"session_id\", 0),\n ConditionalField(ShortField(\"ns\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(ShortField(\"nr\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(\n PadField(ShortField(\"offset\", 0), 4, b\"\\x00\"),\n lambda pkt: not (pkt.hdr & 'control') and pkt.hdr & 'offset'\n )\n ]\n\n def post_build(self, pkt, pay):\n if self.len is None and self.hdr & 'control+length':\n tmp_len = len(pkt) + len(pay)\n pkt = pkt[:2] + struct.pack(\"!H\", tmp_len) + pkt[4:]\n return pkt + pay\n\n\nbind_bottom_up(UDP, L2TP, dport=1701)\nbind_bottom_up(UDP, L2TP, sport=1701)\nbind_layers(UDP, L2TP, dport=1701, sport=1701)\nbind_layers(L2TP, PPP,)\n", "path": "scapy/layers/l2tp.py"}], "after_files": [{"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nL2TP (Layer 2 Tunneling Protocol) for VPNs.\n\n[RFC 2661]\n\"\"\"\n\nimport struct\n\nfrom scapy.packet import Packet, bind_layers, bind_bottom_up\nfrom scapy.fields import BitEnumField, ConditionalField, FlagsField, \\\n PadField, ShortField\nfrom scapy.layers.inet import UDP\nfrom scapy.layers.ppp import PPP\n\n\nclass L2TP(Packet):\n name = 
\"L2TP\"\n fields_desc = [\n FlagsField(\"hdr\", 0, 12, ['res00', 'res01', 'res02', 'res03', 'priority', 'offset', # noqa: E501\n 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501\n BitEnumField(\"version\", 2, 4, {2: 'L2TPv2'}),\n\n ConditionalField(ShortField(\"len\", None),\n lambda pkt: pkt.hdr & 'control+length'),\n ShortField(\"tunnel_id\", 0),\n ShortField(\"session_id\", 0),\n ConditionalField(ShortField(\"ns\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(ShortField(\"nr\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(\n PadField(ShortField(\"offset\", 0), 4, b\"\\x00\"),\n lambda pkt: not (pkt.hdr & 'control') and pkt.hdr & 'offset'\n )\n ]\n\n def post_build(self, pkt, pay):\n if self.len is None and self.hdr & 'control+length':\n tmp_len = len(pkt) + len(pay)\n pkt = pkt[:2] + struct.pack(\"!H\", tmp_len) + pkt[4:]\n return pkt + pay\n\n\nbind_bottom_up(UDP, L2TP, dport=1701)\nbind_bottom_up(UDP, L2TP, sport=1701)\nbind_layers(UDP, L2TP, dport=1701, sport=1701)\nbind_layers(L2TP, PPP,)\n", "path": "scapy/layers/l2tp.py"}]} | 1,130 | 173 |
gh_patches_debug_18028 | rasdani/github-patches | git_diff | Mailu__Mailu-1316 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rainloop Webmail - Authentication fails if you have a special character in your password
In the admin interface, you can define a new password and you can put a special character like `è`.
It works fine with the admin interface, but it doesn't work at all with the Rainloop webmail. If you try to log in, you get a message indicating that authentication failed; see the screenshot (in French):

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/internal/nginx.py`
Content:
```
1 from mailu import models
2 from flask import current_app as app
3
4 import re
5 import urllib
6 import ipaddress
7 import socket
8 import tenacity
9
10
11 SUPPORTED_AUTH_METHODS = ["none", "plain"]
12
13
14 STATUSES = {
15 "authentication": ("Authentication credentials invalid", {
16 "imap": "AUTHENTICATIONFAILED",
17 "smtp": "535 5.7.8",
18 "pop3": "-ERR Authentication failed"
19 }),
20 }
21
22
23 def handle_authentication(headers):
24 """ Handle an HTTP nginx authentication request
25 See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol
26 """
27 method = headers["Auth-Method"]
28 protocol = headers["Auth-Protocol"]
29 # Incoming mail, no authentication
30 if method == "none" and protocol == "smtp":
31 server, port = get_server(headers["Auth-Protocol"], False)
32 return {
33 "Auth-Status": "OK",
34 "Auth-Server": server,
35 "Auth-Port": port
36 }
37 # Authenticated user
38 elif method == "plain":
39 server, port = get_server(headers["Auth-Protocol"], True)
40 user_email = urllib.parse.unquote(headers["Auth-User"])
41 password = urllib.parse.unquote(headers["Auth-Pass"])
42 ip = urllib.parse.unquote(headers["Client-Ip"])
43 user = models.User.query.get(user_email)
44 status = False
45 if user:
46 for token in user.tokens:
47 if (token.check_password(password) and
48 (not token.ip or token.ip == ip)):
49 status = True
50 if user.check_password(password):
51 status = True
52 if status:
53 if protocol == "imap" and not user.enable_imap:
54 status = False
55 elif protocol == "pop3" and not user.enable_pop:
56 status = False
57 if status and user.enabled:
58 return {
59 "Auth-Status": "OK",
60 "Auth-Server": server,
61 "Auth-Port": port
62 }
63 else:
64 status, code = get_status(protocol, "authentication")
65 return {
66 "Auth-Status": status,
67 "Auth-Error-Code": code,
68 "Auth-Wait": 0
69 }
70 # Unexpected
71 return {}
72
73
74 def get_status(protocol, status):
75 """ Return the proper error code depending on the protocol
76 """
77 status, codes = STATUSES[status]
78 return status, codes[protocol]
79
80 def extract_host_port(host_and_port, default_port):
81 host, _, port = re.match('^(.*)(:([0-9]*))?$', host_and_port).groups()
82 return host, int(port) if port else default_port
83
84 def get_server(protocol, authenticated=False):
85 if protocol == "imap":
86 hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)
87 elif protocol == "pop3":
88 hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)
89 elif protocol == "smtp":
90 if authenticated:
91 hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)
92 else:
93 hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)
94 try:
95 # test if hostname is already resolved to an ip adddress
96 ipaddress.ip_address(hostname)
97 except:
98 # hostname is not an ip address - so we need to resolve it
99 hostname = resolve_hostname(hostname)
100 return hostname, port
101
102 @tenacity.retry(stop=tenacity.stop_after_attempt(100),
103 wait=tenacity.wait_random(min=2, max=5))
104 def resolve_hostname(hostname):
105 """ This function uses system DNS to resolve a hostname.
106 It is capable of retrying in case the host is not immediately available
107 """
108 return socket.gethostbyname(hostname)
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py
--- a/core/admin/mailu/internal/nginx.py
+++ b/core/admin/mailu/internal/nginx.py
@@ -37,8 +37,14 @@
# Authenticated user
elif method == "plain":
server, port = get_server(headers["Auth-Protocol"], True)
- user_email = urllib.parse.unquote(headers["Auth-User"])
- password = urllib.parse.unquote(headers["Auth-Pass"])
+ # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should
+ # be ASCII and are generally considered ISO8859-1. However when passing
+ # the password, nginx does not transcode the input UTF string, thus
+ # we need to manually decode.
+ raw_user_email = urllib.parse.unquote(headers["Auth-User"])
+ user_email = raw_user_email.encode("iso8859-1").decode("utf8")
+ raw_password = urllib.parse.unquote(headers["Auth-Pass"])
+ password = raw_password.encode("iso8859-1").decode("utf8")
ip = urllib.parse.unquote(headers["Client-Ip"])
user = models.User.query.get(user_email)
status = False
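The comment added in the patch is the crux: the header reaches the app as an ISO-8859-1 string that actually carries UTF-8 bytes. A small standalone illustration of the round trip, plain Python with no nginx or Flask involved:

```python
# "è" is sent by the client as the UTF-8 bytes b'\xc3\xa8'. The WSGI layer hands
# header values to the app as ISO-8859-1 text, so the app sees 'Ã¨'. Encoding
# back to ISO-8859-1 recovers the original bytes, which then decode as UTF-8.
wire_bytes = "è".encode("utf8")                  # b'\xc3\xa8'
header_value = wire_bytes.decode("iso8859-1")    # 'Ã¨', as seen by handle_authentication
password = header_value.encode("iso8859-1").decode("utf8")
assert password == "è"
```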
| {"golden_diff": "diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py\n--- a/core/admin/mailu/internal/nginx.py\n+++ b/core/admin/mailu/internal/nginx.py\n@@ -37,8 +37,14 @@\n # Authenticated user\n elif method == \"plain\":\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n- user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n- password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n+ # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n+ # be ASCII and are generally considered ISO8859-1. However when passing\n+ # the password, nginx does not transcode the input UTF string, thus\n+ # we need to manually decode.\n+ raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n+ user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n+ raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n+ password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n user = models.User.query.get(user_email)\n status = False\n", "issue": "Rainloop Webmail - Authentication fails if you have a special character in your password\nIn the admin interface, you can define a new password and you can put a special character like `\u00e8`.\r\n\r\nIt works fine with admin interface but it doesn't work at all with the Rainloop webmail. If you try to log in, you will have a message to indicate that the authentication fails, see screenshoot in french:\r\n\r\n\r\n\n", "before_files": [{"content": "from mailu import models\nfrom flask import current_app as app\n\nimport re\nimport urllib\nimport ipaddress\nimport socket\nimport tenacity\n\n\nSUPPORTED_AUTH_METHODS = [\"none\", \"plain\"]\n\n\nSTATUSES = {\n \"authentication\": (\"Authentication credentials invalid\", {\n \"imap\": \"AUTHENTICATIONFAILED\",\n \"smtp\": \"535 5.7.8\",\n \"pop3\": \"-ERR Authentication failed\"\n }),\n}\n\n\ndef handle_authentication(headers):\n \"\"\" Handle an HTTP nginx authentication request\n See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol\n \"\"\"\n method = headers[\"Auth-Method\"]\n protocol = headers[\"Auth-Protocol\"]\n # Incoming mail, no authentication\n if method == \"none\" and protocol == \"smtp\":\n server, port = get_server(headers[\"Auth-Protocol\"], False)\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n # Authenticated user\n elif method == \"plain\":\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n user = models.User.query.get(user_email)\n status = False\n if user:\n for token in user.tokens:\n if (token.check_password(password) and\n (not token.ip or token.ip == ip)):\n status = True\n if user.check_password(password):\n status = True\n if status:\n if protocol == \"imap\" and not user.enable_imap:\n status = False\n elif protocol == \"pop3\" and not user.enable_pop:\n status = False\n if status and user.enabled:\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n else:\n status, code = get_status(protocol, \"authentication\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\": code,\n \"Auth-Wait\": 0\n }\n # Unexpected\n return {}\n\n\ndef get_status(protocol, status):\n \"\"\" Return the proper error code depending on the protocol\n \"\"\"\n status, codes = 
STATUSES[status]\n return status, codes[protocol]\n\ndef extract_host_port(host_and_port, default_port):\n host, _, port = re.match('^(.*)(:([0-9]*))?$', host_and_port).groups()\n return host, int(port) if port else default_port\n\ndef get_server(protocol, authenticated=False):\n if protocol == \"imap\":\n hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)\n elif protocol == \"pop3\":\n hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)\n elif protocol == \"smtp\":\n if authenticated:\n hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)\n else:\n hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)\n try:\n # test if hostname is already resolved to an ip adddress\n ipaddress.ip_address(hostname)\n except:\n # hostname is not an ip address - so we need to resolve it\n hostname = resolve_hostname(hostname)\n return hostname, port\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n return socket.gethostbyname(hostname)\n", "path": "core/admin/mailu/internal/nginx.py"}], "after_files": [{"content": "from mailu import models\nfrom flask import current_app as app\n\nimport re\nimport urllib\nimport ipaddress\nimport socket\nimport tenacity\n\n\nSUPPORTED_AUTH_METHODS = [\"none\", \"plain\"]\n\n\nSTATUSES = {\n \"authentication\": (\"Authentication credentials invalid\", {\n \"imap\": \"AUTHENTICATIONFAILED\",\n \"smtp\": \"535 5.7.8\",\n \"pop3\": \"-ERR Authentication failed\"\n }),\n}\n\n\ndef handle_authentication(headers):\n \"\"\" Handle an HTTP nginx authentication request\n See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol\n \"\"\"\n method = headers[\"Auth-Method\"]\n protocol = headers[\"Auth-Protocol\"]\n # Incoming mail, no authentication\n if method == \"none\" and protocol == \"smtp\":\n server, port = get_server(headers[\"Auth-Protocol\"], False)\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n # Authenticated user\n elif method == \"plain\":\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n # be ASCII and are generally considered ISO8859-1. 
However when passing\n # the password, nginx does not transcode the input UTF string, thus\n # we need to manually decode.\n raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n user = models.User.query.get(user_email)\n status = False\n if user:\n for token in user.tokens:\n if (token.check_password(password) and\n (not token.ip or token.ip == ip)):\n status = True\n if user.check_password(password):\n status = True\n if status:\n if protocol == \"imap\" and not user.enable_imap:\n status = False\n elif protocol == \"pop3\" and not user.enable_pop:\n status = False\n if status and user.enabled:\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n else:\n status, code = get_status(protocol, \"authentication\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\": code,\n \"Auth-Wait\": 0\n }\n # Unexpected\n return {}\n\n\ndef get_status(protocol, status):\n \"\"\" Return the proper error code depending on the protocol\n \"\"\"\n status, codes = STATUSES[status]\n return status, codes[protocol]\n\ndef extract_host_port(host_and_port, default_port):\n host, _, port = re.match('^(.*)(:([0-9]*))?$', host_and_port).groups()\n return host, int(port) if port else default_port\n\ndef get_server(protocol, authenticated=False):\n if protocol == \"imap\":\n hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)\n elif protocol == \"pop3\":\n hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)\n elif protocol == \"smtp\":\n if authenticated:\n hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)\n else:\n hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)\n try:\n # test if hostname is already resolved to an ip adddress\n ipaddress.ip_address(hostname)\n except:\n # hostname is not an ip address - so we need to resolve it\n hostname = resolve_hostname(hostname)\n return hostname, port\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n return socket.gethostbyname(hostname)\n", "path": "core/admin/mailu/internal/nginx.py"}]} | 1,481 | 291 |
gh_patches_debug_41077 | rasdani/github-patches | git_diff | conan-io__conan-2495 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
imports/copy question
Suppose I have
```
x86
vc15
bin
opencv_core310.dll
...
```
directory structure in my package. Is it possible to import `opencv_core310.dll` and friends into the project's `bin` folder, discarding the directory structure entirely? I tried
```python
def imports(self):
self.copy("*.dll", "", "bin")
```
but it creates `x86/vc15/bin/opencv_core310.dll` while I would like to get `bin/opencv_core310.dll`
Probably imports/copy should have a `keep_path` parameter like package/copy does?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/importer.py`
Content:
```
1 import calendar
2 import fnmatch
3 import os
4 import time
5
6 from conans import tools
7 from conans.client.file_copier import FileCopier, report_copied_files
8 from conans.client.output import ScopedOutput
9 from conans.errors import ConanException
10 from conans.model.conan_file import get_env_context_manager
11 from conans.model.manifest import FileTreeManifest
12 from conans.util.files import save, md5sum, load
13
14 IMPORTS_MANIFESTS = "conan_imports_manifest.txt"
15
16
17 def undo_imports(current_path, output):
18 manifest_path = os.path.join(current_path, IMPORTS_MANIFESTS)
19 try:
20 manifest_content = load(manifest_path)
21 except:
22 raise ConanException("Cannot load file %s" % manifest_path)
23
24 try:
25 manifest = FileTreeManifest.loads(manifest_content)
26 except:
27 raise ConanException("Wrong manifest file format %s" % manifest_path)
28
29 not_removed = 0
30 files = manifest.files()
31 for filepath in files:
32 if not os.path.exists(filepath):
33 output.warn("File doesn't exist: %s" % filepath)
34 continue
35 try:
36 os.remove(filepath)
37 except:
38 output.error("Cannot remove file (open or busy): %s" % filepath)
39 not_removed += 1
40
41 if not_removed:
42 raise ConanException("Cannot remove %s or more imported files" % not_removed)
43
44 output.success("Removed %s imported files" % (len(files)))
45 try:
46 os.remove(manifest_path)
47 output.success("Removed imports manifest file: %s" % manifest_path)
48 except:
49 raise ConanException("Cannot remove manifest file (open or busy): %s" % manifest_path)
50
51
52 def _report_save_manifest(copied_files, output, dest_folder, manifest_name):
53 report_copied_files(copied_files, output)
54 if copied_files:
55 date = calendar.timegm(time.gmtime())
56 file_dict = {}
57 for f in copied_files:
58 abs_path = os.path.join(dest_folder, f)
59 file_dict[f] = md5sum(abs_path)
60 manifest = FileTreeManifest(date, file_dict)
61 save(os.path.join(dest_folder, manifest_name), str(manifest))
62
63
64 def run_imports(conanfile, dest_folder, output):
65 if not hasattr(conanfile, "imports"):
66 return []
67 file_importer = _FileImporter(conanfile, dest_folder)
68 conanfile.copy = file_importer
69 conanfile.imports_folder = dest_folder
70 with get_env_context_manager(conanfile):
71 with tools.chdir(dest_folder):
72 conanfile.imports()
73 copied_files = file_importer.copied_files
74 import_output = ScopedOutput("%s imports()" % output.scope, output)
75 _report_save_manifest(copied_files, import_output, dest_folder, IMPORTS_MANIFESTS)
76 return copied_files
77
78
79 def run_deploy(conanfile, install_folder, output):
80 deploy_output = ScopedOutput("%s deploy()" % output.scope, output)
81 file_importer = _FileImporter(conanfile, install_folder)
82 package_copied = set()
83
84 # This is necessary to capture FileCopier full destination paths
85 # Maybe could be improved in FileCopier
86 def file_copier(*args, **kwargs):
87 file_copy = FileCopier(conanfile.package_folder, install_folder)
88 copied = file_copy(*args, **kwargs)
89 package_copied.update(copied)
90
91 conanfile.copy_deps = file_importer
92 conanfile.copy = file_copier
93 conanfile.install_folder = install_folder
94 with get_env_context_manager(conanfile):
95 with tools.chdir(install_folder):
96 conanfile.deploy()
97
98 copied_files = file_importer.copied_files
99 copied_files.update(package_copied)
100 _report_save_manifest(copied_files, deploy_output, install_folder, "deploy_manifest.txt")
101
102
103 class _FileImporter(object):
104 """ manages the copy of files, resources, libs from the local store to the user
105 space. E.g.: shared libs, dlls, they will be in the package folder of your
106 configuration in the store. But you dont want to add every package to the
107 system PATH. Those shared libs can be copied to the user folder, close to
108 the exes where they can be found without modifying the path.
109 Useful also for copying other resources as images or data files.
110 It can be also used for Golang projects, in which the packages are always
111 source based and need to be copied to the user folder to be built
112 """
113 def __init__(self, conanfile, dst_folder):
114 self._conanfile = conanfile
115 self._dst_folder = dst_folder
116 self.copied_files = set()
117
118 def __call__(self, pattern, dst="", src="", root_package=None, folder=False,
119 ignore_case=False, excludes=None):
120 """
121 param pattern: an fnmatch file pattern of the files that should be copied. Eg. *.dll
122 param dst: the destination local folder, wrt to current conanfile dir, to which
123 the files will be copied. Eg: "bin"
124 param src: the source folder in which those files will be searched. This folder
125 will be stripped from the dst name. Eg.: lib/Debug/x86
126 param root_package: fnmatch pattern of the package name ("OpenCV", "Boost") from
127 which files will be copied. Default: all packages in deps
128 """
129 if os.path.isabs(dst):
130 real_dst_folder = dst
131 else:
132 real_dst_folder = os.path.normpath(os.path.join(self._dst_folder, dst))
133
134 matching_paths = self._get_folders(root_package)
135 for name, matching_path in matching_paths.items():
136 final_dst_path = os.path.join(real_dst_folder, name) if folder else real_dst_folder
137 file_copier = FileCopier(matching_path, final_dst_path)
138 files = file_copier(pattern, src=src, links=True, ignore_case=ignore_case,
139 excludes=excludes)
140 self.copied_files.update(files)
141
142 def _get_folders(self, pattern):
143 """ given the current deps graph, compute a dict {name: store-path} of
144 each dependency
145 """
146 if not pattern:
147 return {pkg: cpp_info.rootpath for pkg, cpp_info in self._conanfile.deps_cpp_info.dependencies}
148 return {pkg: cpp_info.rootpath for pkg, cpp_info in self._conanfile.deps_cpp_info.dependencies
149 if fnmatch.fnmatch(pkg, pattern)}
150
```
Path: `conans/client/loader_parse.py`
Content:
```
1 import imp
2 import inspect
3 import os
4 import sys
5 import uuid
6
7 from conans.errors import ConanException, NotFoundException
8 from conans.model.conan_file import ConanFile
9 from conans.util.config_parser import ConfigParser
10 from conans.tools import chdir
11 from conans.client.generators import registered_generators
12 from conans.model import Generator
13
14
15 def load_conanfile_class(conanfile_path):
16 loaded, filename = _parse_file(conanfile_path)
17 try:
18 return _parse_module(loaded, filename)
19 except Exception as e: # re-raise with file name
20 raise ConanException("%s: %s" % (conanfile_path, str(e)))
21
22
23 def _parse_module(conanfile_module, filename):
24 """ Parses a python in-memory module, to extract the classes, mainly the main
25 class defining the Recipe, but also process possible existing generators
26 @param conanfile_module: the module to be processed
27 @param consumer: if this is a root node in the hierarchy, the consumer project
28 @return: the main ConanFile class from the module
29 """
30 result = None
31 for name, attr in conanfile_module.__dict__.items():
32 if name[0] == "_":
33 continue
34 if (inspect.isclass(attr) and issubclass(attr, ConanFile) and attr != ConanFile and
35 attr.__dict__["__module__"] == filename):
36 if result is None:
37 result = attr
38 else:
39 raise ConanException("More than 1 conanfile in the file")
40 if (inspect.isclass(attr) and issubclass(attr, Generator) and attr != Generator and
41 attr.__dict__["__module__"] == filename):
42 registered_generators.add(attr.__name__, attr)
43
44 if result is None:
45 raise ConanException("No subclass of ConanFile")
46
47 return result
48
49
50 def _parse_file(conan_file_path):
51 """ From a given path, obtain the in memory python import module
52 """
53
54 if not os.path.exists(conan_file_path):
55 raise NotFoundException("%s not found!" % conan_file_path)
56
57 filename = os.path.splitext(os.path.basename(conan_file_path))[0]
58
59 try:
60 current_dir = os.path.dirname(conan_file_path)
61 sys.path.append(current_dir)
62 old_modules = list(sys.modules.keys())
63 with chdir(current_dir):
64 sys.dont_write_bytecode = True
65 loaded = imp.load_source(filename, conan_file_path)
66 sys.dont_write_bytecode = False
67 # Put all imported files under a new package name
68 module_id = uuid.uuid1()
69 added_modules = set(sys.modules).difference(old_modules)
70 for added in added_modules:
71 module = sys.modules[added]
72 if module:
73 try:
74 folder = os.path.dirname(module.__file__)
75 except AttributeError: # some module doesn't have __file__
76 pass
77 else:
78 if folder.startswith(current_dir):
79 module = sys.modules.pop(added)
80 sys.modules["%s.%s" % (module_id, added)] = module
81 except Exception:
82 import traceback
83 trace = traceback.format_exc().split('\n')
84 raise ConanException("Unable to load conanfile in %s\n%s" % (conan_file_path,
85 '\n'.join(trace[3:])))
86 finally:
87 sys.path.pop()
88
89 return loaded, filename
90
91
92 class ConanFileTextLoader(object):
93 """Parse a conanfile.txt file"""
94
95 def __init__(self, input_text):
96 # Prefer composition over inheritance, the __getattr__ was breaking things
97 self._config_parser = ConfigParser(input_text, ["requires", "generators", "options",
98 "imports"], parse_lines=True)
99
100 @property
101 def requirements(self):
102 """returns a list of requires
103 EX: "OpenCV/2.4.10@phil/stable"
104 """
105 return [r.strip() for r in self._config_parser.requires.splitlines()]
106
107 @property
108 def options(self):
109 return self._config_parser.options
110
111 @property
112 def _import_parameters(self):
113 def _parse_args(param_string):
114 root_package, ignore_case, folder, excludes = None, False, False, None
115 params = param_string.split(",")
116 params = [p.split("=") for p in params if p]
117 for (var, value) in params:
118 var = var.strip()
119 value = value.strip()
120 if var == "root_package":
121 root_package = value
122 elif var == "ignore_case":
123 ignore_case = (value.lower() == "true")
124 elif var == "folder":
125 folder = (value.lower() == "true")
126 elif var == "excludes":
127 excludes = value.split()
128 else:
129 raise Exception("Invalid imports. Unknown argument %s" % var)
130 return root_package, ignore_case, folder, excludes
131
132 def _parse_import(line):
133 pair = line.split("->")
134 source = pair[0].strip().split(',', 1)
135 dest = pair[1].strip()
136 src, pattern = source[0].strip(), source[1].strip()
137 return pattern, dest, src
138
139 ret = []
140 local_install_text = self._config_parser.imports
141 for line in local_install_text.splitlines():
142 # discard blanks, comments, and discard trailing comments
143 line = line.strip()
144 if not line or line.startswith("#"):
145 continue
146 line = line.split("#", 1)[0]
147
148 invalid_line_msg = "Invalid imports line: %s\nEX: OpenCV/lib, * -> ./lib" % line
149 if line.startswith("/") or line.startswith(".."):
150 raise ConanException("%s\n%s" % (invalid_line_msg,
151 "Import's paths can't begin with '/' or '..'"))
152 try:
153 tokens = line.split("@", 1)
154 if len(tokens) > 1:
155 line = tokens[0]
156 params = tokens[1]
157 else:
158 params = ""
159 root_package, ignore_case, folder, excludes = _parse_args(params)
160 pattern, dest, src = _parse_import(line)
161 ret.append((pattern, dest, src, root_package, folder, ignore_case, excludes))
162 except Exception as e:
163 raise ConanException("%s\n%s" % (invalid_line_msg, str(e)))
164 return ret
165
166 @property
167 def generators(self):
168 return self._config_parser.generators.splitlines()
169
170 def imports_method(self, conan_file):
171 parameters = self._import_parameters
172
173 def imports():
174 for import_params in parameters:
175 conan_file.copy(*import_params)
176 return imports
177
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conans/client/importer.py b/conans/client/importer.py
--- a/conans/client/importer.py
+++ b/conans/client/importer.py
@@ -116,7 +116,7 @@
self.copied_files = set()
def __call__(self, pattern, dst="", src="", root_package=None, folder=False,
- ignore_case=False, excludes=None):
+ ignore_case=False, excludes=None, keep_path=True):
"""
param pattern: an fnmatch file pattern of the files that should be copied. Eg. *.dll
param dst: the destination local folder, wrt to current conanfile dir, to which
@@ -136,7 +136,7 @@
final_dst_path = os.path.join(real_dst_folder, name) if folder else real_dst_folder
file_copier = FileCopier(matching_path, final_dst_path)
files = file_copier(pattern, src=src, links=True, ignore_case=ignore_case,
- excludes=excludes)
+ excludes=excludes, keep_path=keep_path)
self.copied_files.update(files)
def _get_folders(self, pattern):
diff --git a/conans/client/loader_parse.py b/conans/client/loader_parse.py
--- a/conans/client/loader_parse.py
+++ b/conans/client/loader_parse.py
@@ -111,7 +111,7 @@
@property
def _import_parameters(self):
def _parse_args(param_string):
- root_package, ignore_case, folder, excludes = None, False, False, None
+ root_package, ignore_case, folder, excludes, keep_path = None, False, False, None, True
params = param_string.split(",")
params = [p.split("=") for p in params if p]
for (var, value) in params:
@@ -125,9 +125,11 @@
folder = (value.lower() == "true")
elif var == "excludes":
excludes = value.split()
+ elif var == "keep_path":
+ keep_path = (value.lower() == "true")
else:
raise Exception("Invalid imports. Unknown argument %s" % var)
- return root_package, ignore_case, folder, excludes
+ return root_package, ignore_case, folder, excludes, keep_path
def _parse_import(line):
pair = line.split("->")
@@ -156,9 +158,9 @@
params = tokens[1]
else:
params = ""
- root_package, ignore_case, folder, excludes = _parse_args(params)
+ root_package, ignore_case, folder, excludes, keep_path = _parse_args(params)
pattern, dest, src = _parse_import(line)
- ret.append((pattern, dest, src, root_package, folder, ignore_case, excludes))
+ ret.append((pattern, dest, src, root_package, folder, ignore_case, excludes, keep_path))
except Exception as e:
raise ConanException("%s\n%s" % (invalid_line_msg, str(e)))
return ret
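With the parameter threaded through both the importer and the conanfile.txt parser, the use case from the question becomes a one-liner. A hypothetical consumer-side sketch (package layout as described in the issue):

```python
# imports() in the consuming conanfile.py: gather every DLL from the
# dependencies into a flat local ./bin folder, dropping the x86/vc15/bin prefix.
def imports(self):
    self.copy("*.dll", dst="bin", keep_path=False)
```

In a conanfile.txt, the same flag is parsed from an `[imports]` line as a `keep_path=...` argument after the `@` separator.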
| {"golden_diff": "diff --git a/conans/client/importer.py b/conans/client/importer.py\n--- a/conans/client/importer.py\n+++ b/conans/client/importer.py\n@@ -116,7 +116,7 @@\n self.copied_files = set()\n \n def __call__(self, pattern, dst=\"\", src=\"\", root_package=None, folder=False,\n- ignore_case=False, excludes=None):\n+ ignore_case=False, excludes=None, keep_path=True):\n \"\"\"\n param pattern: an fnmatch file pattern of the files that should be copied. Eg. *.dll\n param dst: the destination local folder, wrt to current conanfile dir, to which\n@@ -136,7 +136,7 @@\n final_dst_path = os.path.join(real_dst_folder, name) if folder else real_dst_folder\n file_copier = FileCopier(matching_path, final_dst_path)\n files = file_copier(pattern, src=src, links=True, ignore_case=ignore_case,\n- excludes=excludes)\n+ excludes=excludes, keep_path=keep_path)\n self.copied_files.update(files)\n \n def _get_folders(self, pattern):\ndiff --git a/conans/client/loader_parse.py b/conans/client/loader_parse.py\n--- a/conans/client/loader_parse.py\n+++ b/conans/client/loader_parse.py\n@@ -111,7 +111,7 @@\n @property\n def _import_parameters(self):\n def _parse_args(param_string):\n- root_package, ignore_case, folder, excludes = None, False, False, None\n+ root_package, ignore_case, folder, excludes, keep_path = None, False, False, None, True\n params = param_string.split(\",\")\n params = [p.split(\"=\") for p in params if p]\n for (var, value) in params:\n@@ -125,9 +125,11 @@\n folder = (value.lower() == \"true\")\n elif var == \"excludes\":\n excludes = value.split()\n+ elif var == \"keep_path\":\n+ keep_path = (value.lower() == \"true\")\n else:\n raise Exception(\"Invalid imports. Unknown argument %s\" % var)\n- return root_package, ignore_case, folder, excludes\n+ return root_package, ignore_case, folder, excludes, keep_path\n \n def _parse_import(line):\n pair = line.split(\"->\")\n@@ -156,9 +158,9 @@\n params = tokens[1]\n else:\n params = \"\"\n- root_package, ignore_case, folder, excludes = _parse_args(params)\n+ root_package, ignore_case, folder, excludes, keep_path = _parse_args(params)\n pattern, dest, src = _parse_import(line)\n- ret.append((pattern, dest, src, root_package, folder, ignore_case, excludes))\n+ ret.append((pattern, dest, src, root_package, folder, ignore_case, excludes, keep_path))\n except Exception as e:\n raise ConanException(\"%s\\n%s\" % (invalid_line_msg, str(e)))\n return ret\n", "issue": "imports/copy question\nSuppose I have\r\n```\r\nx86\r\n vc15\r\n bin\r\n opencv_core310.dll \r\n ...\r\n```\r\ndirectory structure in my package. Is it possible to import `opencv_core310.dll` and friends to project's `bin` folder forgetting all the directory structure? 
I tried\r\n```python\r\ndef imports(self):\r\n self.copy(\"*.dll\", \"\", \"bin\")\r\n```\r\nbut it creates `x86/vc15/bin/opencv_core310.dll` while I would like to get `bin/opencv_core310.dll`\r\nProbably import/copy should have `keep_path` parameter like package/copy does?\r\n\r\n\n", "before_files": [{"content": "import calendar\nimport fnmatch\nimport os\nimport time\n\nfrom conans import tools\nfrom conans.client.file_copier import FileCopier, report_copied_files\nfrom conans.client.output import ScopedOutput\nfrom conans.errors import ConanException\nfrom conans.model.conan_file import get_env_context_manager\nfrom conans.model.manifest import FileTreeManifest\nfrom conans.util.files import save, md5sum, load\n\nIMPORTS_MANIFESTS = \"conan_imports_manifest.txt\"\n\n\ndef undo_imports(current_path, output):\n manifest_path = os.path.join(current_path, IMPORTS_MANIFESTS)\n try:\n manifest_content = load(manifest_path)\n except:\n raise ConanException(\"Cannot load file %s\" % manifest_path)\n\n try:\n manifest = FileTreeManifest.loads(manifest_content)\n except:\n raise ConanException(\"Wrong manifest file format %s\" % manifest_path)\n\n not_removed = 0\n files = manifest.files()\n for filepath in files:\n if not os.path.exists(filepath):\n output.warn(\"File doesn't exist: %s\" % filepath)\n continue\n try:\n os.remove(filepath)\n except:\n output.error(\"Cannot remove file (open or busy): %s\" % filepath)\n not_removed += 1\n\n if not_removed:\n raise ConanException(\"Cannot remove %s or more imported files\" % not_removed)\n\n output.success(\"Removed %s imported files\" % (len(files)))\n try:\n os.remove(manifest_path)\n output.success(\"Removed imports manifest file: %s\" % manifest_path)\n except:\n raise ConanException(\"Cannot remove manifest file (open or busy): %s\" % manifest_path)\n\n\ndef _report_save_manifest(copied_files, output, dest_folder, manifest_name):\n report_copied_files(copied_files, output)\n if copied_files:\n date = calendar.timegm(time.gmtime())\n file_dict = {}\n for f in copied_files:\n abs_path = os.path.join(dest_folder, f)\n file_dict[f] = md5sum(abs_path)\n manifest = FileTreeManifest(date, file_dict)\n save(os.path.join(dest_folder, manifest_name), str(manifest))\n\n\ndef run_imports(conanfile, dest_folder, output):\n if not hasattr(conanfile, \"imports\"):\n return []\n file_importer = _FileImporter(conanfile, dest_folder)\n conanfile.copy = file_importer\n conanfile.imports_folder = dest_folder\n with get_env_context_manager(conanfile):\n with tools.chdir(dest_folder):\n conanfile.imports()\n copied_files = file_importer.copied_files\n import_output = ScopedOutput(\"%s imports()\" % output.scope, output)\n _report_save_manifest(copied_files, import_output, dest_folder, IMPORTS_MANIFESTS)\n return copied_files\n\n\ndef run_deploy(conanfile, install_folder, output):\n deploy_output = ScopedOutput(\"%s deploy()\" % output.scope, output)\n file_importer = _FileImporter(conanfile, install_folder)\n package_copied = set()\n\n # This is necessary to capture FileCopier full destination paths\n # Maybe could be improved in FileCopier\n def file_copier(*args, **kwargs):\n file_copy = FileCopier(conanfile.package_folder, install_folder)\n copied = file_copy(*args, **kwargs)\n package_copied.update(copied)\n\n conanfile.copy_deps = file_importer\n conanfile.copy = file_copier\n conanfile.install_folder = install_folder\n with get_env_context_manager(conanfile):\n with tools.chdir(install_folder):\n conanfile.deploy()\n\n copied_files = 
file_importer.copied_files\n copied_files.update(package_copied)\n _report_save_manifest(copied_files, deploy_output, install_folder, \"deploy_manifest.txt\")\n\n\nclass _FileImporter(object):\n \"\"\" manages the copy of files, resources, libs from the local store to the user\n space. E.g.: shared libs, dlls, they will be in the package folder of your\n configuration in the store. But you dont want to add every package to the\n system PATH. Those shared libs can be copied to the user folder, close to\n the exes where they can be found without modifying the path.\n Useful also for copying other resources as images or data files.\n It can be also used for Golang projects, in which the packages are always\n source based and need to be copied to the user folder to be built\n \"\"\"\n def __init__(self, conanfile, dst_folder):\n self._conanfile = conanfile\n self._dst_folder = dst_folder\n self.copied_files = set()\n\n def __call__(self, pattern, dst=\"\", src=\"\", root_package=None, folder=False,\n ignore_case=False, excludes=None):\n \"\"\"\n param pattern: an fnmatch file pattern of the files that should be copied. Eg. *.dll\n param dst: the destination local folder, wrt to current conanfile dir, to which\n the files will be copied. Eg: \"bin\"\n param src: the source folder in which those files will be searched. This folder\n will be stripped from the dst name. Eg.: lib/Debug/x86\n param root_package: fnmatch pattern of the package name (\"OpenCV\", \"Boost\") from\n which files will be copied. Default: all packages in deps\n \"\"\"\n if os.path.isabs(dst):\n real_dst_folder = dst\n else:\n real_dst_folder = os.path.normpath(os.path.join(self._dst_folder, dst))\n\n matching_paths = self._get_folders(root_package)\n for name, matching_path in matching_paths.items():\n final_dst_path = os.path.join(real_dst_folder, name) if folder else real_dst_folder\n file_copier = FileCopier(matching_path, final_dst_path)\n files = file_copier(pattern, src=src, links=True, ignore_case=ignore_case,\n excludes=excludes)\n self.copied_files.update(files)\n\n def _get_folders(self, pattern):\n \"\"\" given the current deps graph, compute a dict {name: store-path} of\n each dependency\n \"\"\"\n if not pattern:\n return {pkg: cpp_info.rootpath for pkg, cpp_info in self._conanfile.deps_cpp_info.dependencies}\n return {pkg: cpp_info.rootpath for pkg, cpp_info in self._conanfile.deps_cpp_info.dependencies\n if fnmatch.fnmatch(pkg, pattern)}\n", "path": "conans/client/importer.py"}, {"content": "import imp\nimport inspect\nimport os\nimport sys\nimport uuid\n\nfrom conans.errors import ConanException, NotFoundException\nfrom conans.model.conan_file import ConanFile\nfrom conans.util.config_parser import ConfigParser\nfrom conans.tools import chdir\nfrom conans.client.generators import registered_generators\nfrom conans.model import Generator\n\n\ndef load_conanfile_class(conanfile_path):\n loaded, filename = _parse_file(conanfile_path)\n try:\n return _parse_module(loaded, filename)\n except Exception as e: # re-raise with file name\n raise ConanException(\"%s: %s\" % (conanfile_path, str(e)))\n\n\ndef _parse_module(conanfile_module, filename):\n \"\"\" Parses a python in-memory module, to extract the classes, mainly the main\n class defining the Recipe, but also process possible existing generators\n @param conanfile_module: the module to be processed\n @param consumer: if this is a root node in the hierarchy, the consumer project\n @return: the main ConanFile class from the module\n \"\"\"\n result = None\n for 
name, attr in conanfile_module.__dict__.items():\n if name[0] == \"_\":\n continue\n if (inspect.isclass(attr) and issubclass(attr, ConanFile) and attr != ConanFile and\n attr.__dict__[\"__module__\"] == filename):\n if result is None:\n result = attr\n else:\n raise ConanException(\"More than 1 conanfile in the file\")\n if (inspect.isclass(attr) and issubclass(attr, Generator) and attr != Generator and\n attr.__dict__[\"__module__\"] == filename):\n registered_generators.add(attr.__name__, attr)\n\n if result is None:\n raise ConanException(\"No subclass of ConanFile\")\n\n return result\n\n\ndef _parse_file(conan_file_path):\n \"\"\" From a given path, obtain the in memory python import module\n \"\"\"\n\n if not os.path.exists(conan_file_path):\n raise NotFoundException(\"%s not found!\" % conan_file_path)\n\n filename = os.path.splitext(os.path.basename(conan_file_path))[0]\n\n try:\n current_dir = os.path.dirname(conan_file_path)\n sys.path.append(current_dir)\n old_modules = list(sys.modules.keys())\n with chdir(current_dir):\n sys.dont_write_bytecode = True\n loaded = imp.load_source(filename, conan_file_path)\n sys.dont_write_bytecode = False\n # Put all imported files under a new package name\n module_id = uuid.uuid1()\n added_modules = set(sys.modules).difference(old_modules)\n for added in added_modules:\n module = sys.modules[added]\n if module:\n try:\n folder = os.path.dirname(module.__file__)\n except AttributeError: # some module doesn't have __file__\n pass\n else:\n if folder.startswith(current_dir):\n module = sys.modules.pop(added)\n sys.modules[\"%s.%s\" % (module_id, added)] = module\n except Exception:\n import traceback\n trace = traceback.format_exc().split('\\n')\n raise ConanException(\"Unable to load conanfile in %s\\n%s\" % (conan_file_path,\n '\\n'.join(trace[3:])))\n finally:\n sys.path.pop()\n\n return loaded, filename\n\n\nclass ConanFileTextLoader(object):\n \"\"\"Parse a conanfile.txt file\"\"\"\n\n def __init__(self, input_text):\n # Prefer composition over inheritance, the __getattr__ was breaking things\n self._config_parser = ConfigParser(input_text, [\"requires\", \"generators\", \"options\",\n \"imports\"], parse_lines=True)\n\n @property\n def requirements(self):\n \"\"\"returns a list of requires\n EX: \"OpenCV/2.4.10@phil/stable\"\n \"\"\"\n return [r.strip() for r in self._config_parser.requires.splitlines()]\n\n @property\n def options(self):\n return self._config_parser.options\n\n @property\n def _import_parameters(self):\n def _parse_args(param_string):\n root_package, ignore_case, folder, excludes = None, False, False, None\n params = param_string.split(\",\")\n params = [p.split(\"=\") for p in params if p]\n for (var, value) in params:\n var = var.strip()\n value = value.strip()\n if var == \"root_package\":\n root_package = value\n elif var == \"ignore_case\":\n ignore_case = (value.lower() == \"true\")\n elif var == \"folder\":\n folder = (value.lower() == \"true\")\n elif var == \"excludes\":\n excludes = value.split()\n else:\n raise Exception(\"Invalid imports. 
Unknown argument %s\" % var)\n return root_package, ignore_case, folder, excludes\n\n def _parse_import(line):\n pair = line.split(\"->\")\n source = pair[0].strip().split(',', 1)\n dest = pair[1].strip()\n src, pattern = source[0].strip(), source[1].strip()\n return pattern, dest, src\n\n ret = []\n local_install_text = self._config_parser.imports\n for line in local_install_text.splitlines():\n # discard blanks, comments, and discard trailing comments\n line = line.strip()\n if not line or line.startswith(\"#\"):\n continue\n line = line.split(\"#\", 1)[0]\n\n invalid_line_msg = \"Invalid imports line: %s\\nEX: OpenCV/lib, * -> ./lib\" % line\n if line.startswith(\"/\") or line.startswith(\"..\"):\n raise ConanException(\"%s\\n%s\" % (invalid_line_msg,\n \"Import's paths can't begin with '/' or '..'\"))\n try:\n tokens = line.split(\"@\", 1)\n if len(tokens) > 1:\n line = tokens[0]\n params = tokens[1]\n else:\n params = \"\"\n root_package, ignore_case, folder, excludes = _parse_args(params)\n pattern, dest, src = _parse_import(line)\n ret.append((pattern, dest, src, root_package, folder, ignore_case, excludes))\n except Exception as e:\n raise ConanException(\"%s\\n%s\" % (invalid_line_msg, str(e)))\n return ret\n\n @property\n def generators(self):\n return self._config_parser.generators.splitlines()\n\n def imports_method(self, conan_file):\n parameters = self._import_parameters\n\n def imports():\n for import_params in parameters:\n conan_file.copy(*import_params)\n return imports\n", "path": "conans/client/loader_parse.py"}], "after_files": [{"content": "import calendar\nimport fnmatch\nimport os\nimport time\n\nfrom conans import tools\nfrom conans.client.file_copier import FileCopier, report_copied_files\nfrom conans.client.output import ScopedOutput\nfrom conans.errors import ConanException\nfrom conans.model.conan_file import get_env_context_manager\nfrom conans.model.manifest import FileTreeManifest\nfrom conans.util.files import save, md5sum, load\n\nIMPORTS_MANIFESTS = \"conan_imports_manifest.txt\"\n\n\ndef undo_imports(current_path, output):\n manifest_path = os.path.join(current_path, IMPORTS_MANIFESTS)\n try:\n manifest_content = load(manifest_path)\n except:\n raise ConanException(\"Cannot load file %s\" % manifest_path)\n\n try:\n manifest = FileTreeManifest.loads(manifest_content)\n except:\n raise ConanException(\"Wrong manifest file format %s\" % manifest_path)\n\n not_removed = 0\n files = manifest.files()\n for filepath in files:\n if not os.path.exists(filepath):\n output.warn(\"File doesn't exist: %s\" % filepath)\n continue\n try:\n os.remove(filepath)\n except:\n output.error(\"Cannot remove file (open or busy): %s\" % filepath)\n not_removed += 1\n\n if not_removed:\n raise ConanException(\"Cannot remove %s or more imported files\" % not_removed)\n\n output.success(\"Removed %s imported files\" % (len(files)))\n try:\n os.remove(manifest_path)\n output.success(\"Removed imports manifest file: %s\" % manifest_path)\n except:\n raise ConanException(\"Cannot remove manifest file (open or busy): %s\" % manifest_path)\n\n\ndef _report_save_manifest(copied_files, output, dest_folder, manifest_name):\n report_copied_files(copied_files, output)\n if copied_files:\n date = calendar.timegm(time.gmtime())\n file_dict = {}\n for f in copied_files:\n abs_path = os.path.join(dest_folder, f)\n file_dict[f] = md5sum(abs_path)\n manifest = FileTreeManifest(date, file_dict)\n save(os.path.join(dest_folder, manifest_name), str(manifest))\n\n\ndef run_imports(conanfile, 
dest_folder, output):\n if not hasattr(conanfile, \"imports\"):\n return []\n file_importer = _FileImporter(conanfile, dest_folder)\n conanfile.copy = file_importer\n conanfile.imports_folder = dest_folder\n with get_env_context_manager(conanfile):\n with tools.chdir(dest_folder):\n conanfile.imports()\n copied_files = file_importer.copied_files\n import_output = ScopedOutput(\"%s imports()\" % output.scope, output)\n _report_save_manifest(copied_files, import_output, dest_folder, IMPORTS_MANIFESTS)\n return copied_files\n\n\ndef run_deploy(conanfile, install_folder, output):\n deploy_output = ScopedOutput(\"%s deploy()\" % output.scope, output)\n file_importer = _FileImporter(conanfile, install_folder)\n package_copied = set()\n\n # This is necessary to capture FileCopier full destination paths\n # Maybe could be improved in FileCopier\n def file_copier(*args, **kwargs):\n file_copy = FileCopier(conanfile.package_folder, install_folder)\n copied = file_copy(*args, **kwargs)\n package_copied.update(copied)\n\n conanfile.copy_deps = file_importer\n conanfile.copy = file_copier\n conanfile.install_folder = install_folder\n with get_env_context_manager(conanfile):\n with tools.chdir(install_folder):\n conanfile.deploy()\n\n copied_files = file_importer.copied_files\n copied_files.update(package_copied)\n _report_save_manifest(copied_files, deploy_output, install_folder, \"deploy_manifest.txt\")\n\n\nclass _FileImporter(object):\n \"\"\" manages the copy of files, resources, libs from the local store to the user\n space. E.g.: shared libs, dlls, they will be in the package folder of your\n configuration in the store. But you dont want to add every package to the\n system PATH. Those shared libs can be copied to the user folder, close to\n the exes where they can be found without modifying the path.\n Useful also for copying other resources as images or data files.\n It can be also used for Golang projects, in which the packages are always\n source based and need to be copied to the user folder to be built\n \"\"\"\n def __init__(self, conanfile, dst_folder):\n self._conanfile = conanfile\n self._dst_folder = dst_folder\n self.copied_files = set()\n\n def __call__(self, pattern, dst=\"\", src=\"\", root_package=None, folder=False,\n ignore_case=False, excludes=None, keep_path=True):\n \"\"\"\n param pattern: an fnmatch file pattern of the files that should be copied. Eg. *.dll\n param dst: the destination local folder, wrt to current conanfile dir, to which\n the files will be copied. Eg: \"bin\"\n param src: the source folder in which those files will be searched. This folder\n will be stripped from the dst name. Eg.: lib/Debug/x86\n param root_package: fnmatch pattern of the package name (\"OpenCV\", \"Boost\") from\n which files will be copied. 
Default: all packages in deps\n \"\"\"\n if os.path.isabs(dst):\n real_dst_folder = dst\n else:\n real_dst_folder = os.path.normpath(os.path.join(self._dst_folder, dst))\n\n matching_paths = self._get_folders(root_package)\n for name, matching_path in matching_paths.items():\n final_dst_path = os.path.join(real_dst_folder, name) if folder else real_dst_folder\n file_copier = FileCopier(matching_path, final_dst_path)\n files = file_copier(pattern, src=src, links=True, ignore_case=ignore_case,\n excludes=excludes, keep_path=keep_path)\n self.copied_files.update(files)\n\n def _get_folders(self, pattern):\n \"\"\" given the current deps graph, compute a dict {name: store-path} of\n each dependency\n \"\"\"\n if not pattern:\n return {pkg: cpp_info.rootpath for pkg, cpp_info in self._conanfile.deps_cpp_info.dependencies}\n return {pkg: cpp_info.rootpath for pkg, cpp_info in self._conanfile.deps_cpp_info.dependencies\n if fnmatch.fnmatch(pkg, pattern)}\n", "path": "conans/client/importer.py"}, {"content": "import imp\nimport inspect\nimport os\nimport sys\nimport uuid\n\nfrom conans.errors import ConanException, NotFoundException\nfrom conans.model.conan_file import ConanFile\nfrom conans.util.config_parser import ConfigParser\nfrom conans.tools import chdir\nfrom conans.client.generators import registered_generators\nfrom conans.model import Generator\n\n\ndef load_conanfile_class(conanfile_path):\n loaded, filename = _parse_file(conanfile_path)\n try:\n return _parse_module(loaded, filename)\n except Exception as e: # re-raise with file name\n raise ConanException(\"%s: %s\" % (conanfile_path, str(e)))\n\n\ndef _parse_module(conanfile_module, filename):\n \"\"\" Parses a python in-memory module, to extract the classes, mainly the main\n class defining the Recipe, but also process possible existing generators\n @param conanfile_module: the module to be processed\n @param consumer: if this is a root node in the hierarchy, the consumer project\n @return: the main ConanFile class from the module\n \"\"\"\n result = None\n for name, attr in conanfile_module.__dict__.items():\n if name[0] == \"_\":\n continue\n if (inspect.isclass(attr) and issubclass(attr, ConanFile) and attr != ConanFile and\n attr.__dict__[\"__module__\"] == filename):\n if result is None:\n result = attr\n else:\n raise ConanException(\"More than 1 conanfile in the file\")\n if (inspect.isclass(attr) and issubclass(attr, Generator) and attr != Generator and\n attr.__dict__[\"__module__\"] == filename):\n registered_generators.add(attr.__name__, attr)\n\n if result is None:\n raise ConanException(\"No subclass of ConanFile\")\n\n return result\n\n\ndef _parse_file(conan_file_path):\n \"\"\" From a given path, obtain the in memory python import module\n \"\"\"\n\n if not os.path.exists(conan_file_path):\n raise NotFoundException(\"%s not found!\" % conan_file_path)\n\n filename = os.path.splitext(os.path.basename(conan_file_path))[0]\n\n try:\n current_dir = os.path.dirname(conan_file_path)\n sys.path.append(current_dir)\n old_modules = list(sys.modules.keys())\n with chdir(current_dir):\n sys.dont_write_bytecode = True\n loaded = imp.load_source(filename, conan_file_path)\n sys.dont_write_bytecode = False\n # Put all imported files under a new package name\n module_id = uuid.uuid1()\n added_modules = set(sys.modules).difference(old_modules)\n for added in added_modules:\n module = sys.modules[added]\n if module:\n try:\n folder = os.path.dirname(module.__file__)\n except AttributeError: # some module doesn't have __file__\n 
pass\n else:\n if folder.startswith(current_dir):\n module = sys.modules.pop(added)\n sys.modules[\"%s.%s\" % (module_id, added)] = module\n except Exception:\n import traceback\n trace = traceback.format_exc().split('\\n')\n raise ConanException(\"Unable to load conanfile in %s\\n%s\" % (conan_file_path,\n '\\n'.join(trace[3:])))\n finally:\n sys.path.pop()\n\n return loaded, filename\n\n\nclass ConanFileTextLoader(object):\n \"\"\"Parse a conanfile.txt file\"\"\"\n\n def __init__(self, input_text):\n # Prefer composition over inheritance, the __getattr__ was breaking things\n self._config_parser = ConfigParser(input_text, [\"requires\", \"generators\", \"options\",\n \"imports\"], parse_lines=True)\n\n @property\n def requirements(self):\n \"\"\"returns a list of requires\n EX: \"OpenCV/2.4.10@phil/stable\"\n \"\"\"\n return [r.strip() for r in self._config_parser.requires.splitlines()]\n\n @property\n def options(self):\n return self._config_parser.options\n\n @property\n def _import_parameters(self):\n def _parse_args(param_string):\n root_package, ignore_case, folder, excludes, keep_path = None, False, False, None, True\n params = param_string.split(\",\")\n params = [p.split(\"=\") for p in params if p]\n for (var, value) in params:\n var = var.strip()\n value = value.strip()\n if var == \"root_package\":\n root_package = value\n elif var == \"ignore_case\":\n ignore_case = (value.lower() == \"true\")\n elif var == \"folder\":\n folder = (value.lower() == \"true\")\n elif var == \"excludes\":\n excludes = value.split()\n elif var == \"keep_path\":\n keep_path = (value.lower() == \"true\")\n else:\n raise Exception(\"Invalid imports. Unknown argument %s\" % var)\n return root_package, ignore_case, folder, excludes, keep_path\n\n def _parse_import(line):\n pair = line.split(\"->\")\n source = pair[0].strip().split(',', 1)\n dest = pair[1].strip()\n src, pattern = source[0].strip(), source[1].strip()\n return pattern, dest, src\n\n ret = []\n local_install_text = self._config_parser.imports\n for line in local_install_text.splitlines():\n # discard blanks, comments, and discard trailing comments\n line = line.strip()\n if not line or line.startswith(\"#\"):\n continue\n line = line.split(\"#\", 1)[0]\n\n invalid_line_msg = \"Invalid imports line: %s\\nEX: OpenCV/lib, * -> ./lib\" % line\n if line.startswith(\"/\") or line.startswith(\"..\"):\n raise ConanException(\"%s\\n%s\" % (invalid_line_msg,\n \"Import's paths can't begin with '/' or '..'\"))\n try:\n tokens = line.split(\"@\", 1)\n if len(tokens) > 1:\n line = tokens[0]\n params = tokens[1]\n else:\n params = \"\"\n root_package, ignore_case, folder, excludes, keep_path = _parse_args(params)\n pattern, dest, src = _parse_import(line)\n ret.append((pattern, dest, src, root_package, folder, ignore_case, excludes, keep_path))\n except Exception as e:\n raise ConanException(\"%s\\n%s\" % (invalid_line_msg, str(e)))\n return ret\n\n @property\n def generators(self):\n return self._config_parser.generators.splitlines()\n\n def imports_method(self, conan_file):\n parameters = self._import_parameters\n\n def imports():\n for import_params in parameters:\n conan_file.copy(*import_params)\n return imports\n", "path": "conans/client/loader_parse.py"}]} | 4,077 | 680 |
gh_patches_debug_15801 | rasdani/github-patches | git_diff | pyca__cryptography-1430 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OpenSSL's HMAC Context isn't marked as implementing MACContext
It ought to be.
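For context, backends in this code base declare which interfaces they satisfy by stacking `cryptography.utils.register_interface` decorators on the class (the FILES section below shows `_HMACContext` registered only as a `HashContext`). A minimal sketch of the kind of registration that is missing — not the project's actual patch, and assuming the pre-1.0 `interfaces` module — would look like:

```python
# Sketch only: how an interface registration is normally declared in this code base.
from cryptography import utils
from cryptography.hazmat.primitives import interfaces


@utils.register_interface(interfaces.MACContext)   # the registration this issue asks for
@utils.register_interface(interfaces.HashContext)  # already present on the class today
class _HMACContext(object):
    """Backend HMAC context (body elided for the sketch)."""
```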
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cryptography/hazmat/backends/commoncrypto/hmac.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 from cryptography import utils
17 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
18 from cryptography.hazmat.primitives import interfaces
19
20
21 @utils.register_interface(interfaces.HashContext)
22 class _HMACContext(object):
23 def __init__(self, backend, key, algorithm, ctx=None):
24 self.algorithm = algorithm
25 self._backend = backend
26 if ctx is None:
27 ctx = self._backend._ffi.new("CCHmacContext *")
28 try:
29 alg = self._backend._supported_hmac_algorithms[algorithm.name]
30 except KeyError:
31 raise UnsupportedAlgorithm(
32 "{0} is not a supported HMAC hash on this backend.".format(
33 algorithm.name),
34 _Reasons.UNSUPPORTED_HASH
35 )
36
37 self._backend._lib.CCHmacInit(ctx, alg, key, len(key))
38
39 self._ctx = ctx
40 self._key = key
41
42 def copy(self):
43 copied_ctx = self._backend._ffi.new("CCHmacContext *")
44 # CommonCrypto has no APIs for copying HMACs, so we have to copy the
45 # underlying struct.
46 copied_ctx[0] = self._ctx[0]
47 return _HMACContext(
48 self._backend, self._key, self.algorithm, ctx=copied_ctx
49 )
50
51 def update(self, data):
52 self._backend._lib.CCHmacUpdate(self._ctx, data, len(data))
53
54 def finalize(self):
55 buf = self._backend._ffi.new("unsigned char[]",
56 self.algorithm.digest_size)
57 self._backend._lib.CCHmacFinal(self._ctx, buf)
58 return self._backend._ffi.buffer(buf)[:]
59
```
Path: `cryptography/hazmat/backends/openssl/hmac.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16
17 from cryptography import utils
18 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
19 from cryptography.hazmat.primitives import interfaces
20
21
22 @utils.register_interface(interfaces.HashContext)
23 class _HMACContext(object):
24 def __init__(self, backend, key, algorithm, ctx=None):
25 self.algorithm = algorithm
26 self._backend = backend
27
28 if ctx is None:
29 ctx = self._backend._ffi.new("HMAC_CTX *")
30 self._backend._lib.HMAC_CTX_init(ctx)
31 ctx = self._backend._ffi.gc(
32 ctx, self._backend._lib.HMAC_CTX_cleanup
33 )
34 evp_md = self._backend._lib.EVP_get_digestbyname(
35 algorithm.name.encode('ascii'))
36 if evp_md == self._backend._ffi.NULL:
37 raise UnsupportedAlgorithm(
38 "{0} is not a supported hash on this backend.".format(
39 algorithm.name),
40 _Reasons.UNSUPPORTED_HASH
41 )
42 res = self._backend._lib.Cryptography_HMAC_Init_ex(
43 ctx, key, len(key), evp_md, self._backend._ffi.NULL
44 )
45 assert res != 0
46
47 self._ctx = ctx
48 self._key = key
49
50 def copy(self):
51 copied_ctx = self._backend._ffi.new("HMAC_CTX *")
52 self._backend._lib.HMAC_CTX_init(copied_ctx)
53 copied_ctx = self._backend._ffi.gc(
54 copied_ctx, self._backend._lib.HMAC_CTX_cleanup
55 )
56 res = self._backend._lib.Cryptography_HMAC_CTX_copy(
57 copied_ctx, self._ctx
58 )
59 assert res != 0
60 return _HMACContext(
61 self._backend, self._key, self.algorithm, ctx=copied_ctx
62 )
63
64 def update(self, data):
65 res = self._backend._lib.Cryptography_HMAC_Update(
66 self._ctx, data, len(data)
67 )
68 assert res != 0
69
70 def finalize(self):
71 buf = self._backend._ffi.new("unsigned char[]",
72 self._backend._lib.EVP_MAX_MD_SIZE)
73 outlen = self._backend._ffi.new("unsigned int *")
74 res = self._backend._lib.Cryptography_HMAC_Final(
75 self._ctx, buf, outlen
76 )
77 assert res != 0
78 assert outlen[0] == self.algorithm.digest_size
79 self._backend._lib.HMAC_CTX_cleanup(self._ctx)
80 return self._backend._ffi.buffer(buf)[:outlen[0]]
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cryptography/hazmat/backends/commoncrypto/hmac.py b/cryptography/hazmat/backends/commoncrypto/hmac.py
--- a/cryptography/hazmat/backends/commoncrypto/hmac.py
+++ b/cryptography/hazmat/backends/commoncrypto/hmac.py
@@ -18,6 +18,7 @@
from cryptography.hazmat.primitives import interfaces
[email protected]_interface(interfaces.MACContext)
@utils.register_interface(interfaces.HashContext)
class _HMACContext(object):
def __init__(self, backend, key, algorithm, ctx=None):
diff --git a/cryptography/hazmat/backends/openssl/hmac.py b/cryptography/hazmat/backends/openssl/hmac.py
--- a/cryptography/hazmat/backends/openssl/hmac.py
+++ b/cryptography/hazmat/backends/openssl/hmac.py
@@ -19,6 +19,7 @@
from cryptography.hazmat.primitives import interfaces
[email protected]_interface(interfaces.MACContext)
@utils.register_interface(interfaces.HashContext)
class _HMACContext(object):
def __init__(self, backend, key, algorithm, ctx=None):
| {"golden_diff": "diff --git a/cryptography/hazmat/backends/commoncrypto/hmac.py b/cryptography/hazmat/backends/commoncrypto/hmac.py\n--- a/cryptography/hazmat/backends/commoncrypto/hmac.py\n+++ b/cryptography/hazmat/backends/commoncrypto/hmac.py\n@@ -18,6 +18,7 @@\n from cryptography.hazmat.primitives import interfaces\n \n \[email protected]_interface(interfaces.MACContext)\n @utils.register_interface(interfaces.HashContext)\n class _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\ndiff --git a/cryptography/hazmat/backends/openssl/hmac.py b/cryptography/hazmat/backends/openssl/hmac.py\n--- a/cryptography/hazmat/backends/openssl/hmac.py\n+++ b/cryptography/hazmat/backends/openssl/hmac.py\n@@ -19,6 +19,7 @@\n from cryptography.hazmat.primitives import interfaces\n \n \[email protected]_interface(interfaces.MACContext)\n @utils.register_interface(interfaces.HashContext)\n class _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n", "issue": "OpenSSL's HMAC Context isn't marked as implementing MACContext\nIt ought to be.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n if ctx is None:\n ctx = self._backend._ffi.new(\"CCHmacContext *\")\n try:\n alg = self._backend._supported_hmac_algorithms[algorithm.name]\n except KeyError:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported HMAC hash on this backend.\".format(\n algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n\n self._backend._lib.CCHmacInit(ctx, alg, key, len(key))\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"CCHmacContext *\")\n # CommonCrypto has no APIs for copying HMACs, so we have to copy the\n # underlying struct.\n copied_ctx[0] = self._ctx[0]\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n self._backend._lib.CCHmacUpdate(self._ctx, data, len(data))\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self.algorithm.digest_size)\n self._backend._lib.CCHmacFinal(self._ctx, buf)\n return self._backend._ffi.buffer(buf)[:]\n", "path": "cryptography/hazmat/backends/commoncrypto/hmac.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT 
WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n\n if ctx is None:\n ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(ctx)\n ctx = self._backend._ffi.gc(\n ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n evp_md = self._backend._lib.EVP_get_digestbyname(\n algorithm.name.encode('ascii'))\n if evp_md == self._backend._ffi.NULL:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported hash on this backend.\".format(\n algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n res = self._backend._lib.Cryptography_HMAC_Init_ex(\n ctx, key, len(key), evp_md, self._backend._ffi.NULL\n )\n assert res != 0\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(copied_ctx)\n copied_ctx = self._backend._ffi.gc(\n copied_ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n res = self._backend._lib.Cryptography_HMAC_CTX_copy(\n copied_ctx, self._ctx\n )\n assert res != 0\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n res = self._backend._lib.Cryptography_HMAC_Update(\n self._ctx, data, len(data)\n )\n assert res != 0\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self._backend._lib.EVP_MAX_MD_SIZE)\n outlen = self._backend._ffi.new(\"unsigned int *\")\n res = self._backend._lib.Cryptography_HMAC_Final(\n self._ctx, buf, outlen\n )\n assert res != 0\n assert outlen[0] == self.algorithm.digest_size\n self._backend._lib.HMAC_CTX_cleanup(self._ctx)\n return self._backend._ffi.buffer(buf)[:outlen[0]]\n", "path": "cryptography/hazmat/backends/openssl/hmac.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.MACContext)\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n if ctx is None:\n ctx = self._backend._ffi.new(\"CCHmacContext *\")\n try:\n alg = self._backend._supported_hmac_algorithms[algorithm.name]\n except KeyError:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported HMAC hash on this backend.\".format(\n 
algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n\n self._backend._lib.CCHmacInit(ctx, alg, key, len(key))\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"CCHmacContext *\")\n # CommonCrypto has no APIs for copying HMACs, so we have to copy the\n # underlying struct.\n copied_ctx[0] = self._ctx[0]\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n self._backend._lib.CCHmacUpdate(self._ctx, data, len(data))\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self.algorithm.digest_size)\n self._backend._lib.CCHmacFinal(self._ctx, buf)\n return self._backend._ffi.buffer(buf)[:]\n", "path": "cryptography/hazmat/backends/commoncrypto/hmac.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.MACContext)\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n\n if ctx is None:\n ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(ctx)\n ctx = self._backend._ffi.gc(\n ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n evp_md = self._backend._lib.EVP_get_digestbyname(\n algorithm.name.encode('ascii'))\n if evp_md == self._backend._ffi.NULL:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported hash on this backend.\".format(\n algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n res = self._backend._lib.Cryptography_HMAC_Init_ex(\n ctx, key, len(key), evp_md, self._backend._ffi.NULL\n )\n assert res != 0\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(copied_ctx)\n copied_ctx = self._backend._ffi.gc(\n copied_ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n res = self._backend._lib.Cryptography_HMAC_CTX_copy(\n copied_ctx, self._ctx\n )\n assert res != 0\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n res = self._backend._lib.Cryptography_HMAC_Update(\n self._ctx, data, len(data)\n )\n assert res != 0\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self._backend._lib.EVP_MAX_MD_SIZE)\n outlen = self._backend._ffi.new(\"unsigned int *\")\n res = self._backend._lib.Cryptography_HMAC_Final(\n self._ctx, buf, outlen\n )\n assert res != 0\n assert outlen[0] == self.algorithm.digest_size\n self._backend._lib.HMAC_CTX_cleanup(self._ctx)\n return self._backend._ffi.buffer(buf)[:outlen[0]]\n", "path": "cryptography/hazmat/backends/openssl/hmac.py"}]} | 1,741 | 252 |
gh_patches_debug_27881 | rasdani/github-patches | git_diff | searxng__searxng-2073 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: bing sometimes missing first result
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/searxng/searxng
Branch: master
Version: 2022.12.30-647a0aa9
**How did you install SearXNG?**
script, also occurs in some public instances
**What happened?**
Bing is sometimes missing the first result.
**How To Reproduce**
`!bi pontiac aztek`
No Wikipedia article returned, 9 results are listed.
`!bi pontiac aztek wiki`
Multiple Wikipedia articles are returned, none of which are the English article for the Pontiac Aztek. (The Simple English and Scots articles appear.) 9 results are listed.
**Expected behavior**
The expected first result is the Wikipedia article for Pontiac Aztek. Bing.com returns 10 results on the first page, so SearXNG should return 10 results on its first page.
**Screenshots & Logs**


**Additional context**
It seems consistent for specific search terms, but it doesn't always happen. My suspicion is that when the first result is fancy, such as a Wikipedia infobox, SearXNG doesn't see it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/bing.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Bing (Web)
4
5 - https://github.com/searx/searx/issues/2019#issuecomment-648227442
6 """
7
8 import re
9 from urllib.parse import urlencode, urlparse, parse_qs
10 from lxml import html
11 from searx.utils import eval_xpath, extract_text, eval_xpath_list, match_language
12 from searx.network import multi_requests, Request
13
14 about = {
15 "website": 'https://www.bing.com',
16 "wikidata_id": 'Q182496',
17 "official_api_documentation": 'https://www.microsoft.com/en-us/bing/apis/bing-web-search-api',
18 "use_official_api": False,
19 "require_api_key": False,
20 "results": 'HTML',
21 }
22
23 # engine dependent config
24 categories = ['general', 'web']
25 paging = True
26 time_range_support = False
27 safesearch = False
28 send_accept_language_header = True
29 supported_languages_url = 'https://www.bing.com/account/general'
30 language_aliases = {}
31
32 # search-url
33 base_url = 'https://www.bing.com/'
34
35 # initial query: https://www.bing.com/search?q=foo&search=&form=QBLH
36 inital_query = 'search?{query}&search=&form=QBLH'
37
38 # following queries: https://www.bing.com/search?q=foo&search=&first=11&FORM=PERE
39 page_query = 'search?{query}&search=&first={offset}&FORM=PERE'
40
41
42 def _get_offset_from_pageno(pageno):
43 return (pageno - 1) * 10 + 1
44
45
46 def request(query, params):
47
48 offset = _get_offset_from_pageno(params.get('pageno', 1))
49
50 # logger.debug("params['pageno'] --> %s", params.get('pageno'))
51 # logger.debug(" offset --> %s", offset)
52
53 search_string = page_query
54 if offset == 1:
55 search_string = inital_query
56
57 if params['language'] == 'all':
58 lang = 'EN'
59 else:
60 lang = match_language(params['language'], supported_languages, language_aliases)
61
62 query = 'language:{} {}'.format(lang.split('-')[0].upper(), query)
63
64 search_path = search_string.format(query=urlencode({'q': query}), offset=offset)
65
66 if offset > 1:
67 referer = base_url + inital_query.format(query=urlencode({'q': query}))
68 params['headers']['Referer'] = referer
69 logger.debug("headers.Referer --> %s", referer)
70
71 params['url'] = base_url + search_path
72 params['headers']['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
73 return params
74
75
76 def response(resp):
77
78 results = []
79 result_len = 0
80
81 dom = html.fromstring(resp.text)
82
83 # parse results again if nothing is found yet
84
85 url_to_resolve = []
86 url_to_resolve_index = []
87 for i, result in enumerate(eval_xpath_list(dom, '//li[@class="b_algo"]')):
88
89 link = eval_xpath(result, './/h2/a')[0]
90 url = link.attrib.get('href')
91 title = extract_text(link)
92 content = extract_text(eval_xpath(result, './/p'))
93
94 # get the real URL either using the URL shown to user or following the Bing URL
95 if url.startswith('https://www.bing.com/ck/a?'):
96 url_cite = extract_text(eval_xpath(result, './/div[@class="b_attribution"]/cite'))
97 # Bing can shorten the URL either at the end or in the middle of the string
98 if (
99 url_cite.startswith('https://')
100 and '…' not in url_cite
101 and '...' not in url_cite
102 and '›' not in url_cite
103 ):
104 # no need for an additional HTTP request
105 url = url_cite
106 else:
107 # resolve the URL with an additional HTTP request
108 url_to_resolve.append(url.replace('&ntb=1', '&ntb=F'))
109 url_to_resolve_index.append(i)
110 url = None # remove the result if the HTTP Bing redirect raise an exception
111
112 # append result
113 results.append({'url': url, 'title': title, 'content': content})
114
115 # resolve all Bing redirections in parallel
116 request_list = [
117 Request.get(u, allow_redirects=False, headers=resp.search_params['headers']) for u in url_to_resolve
118 ]
119 response_list = multi_requests(request_list)
120 for i, redirect_response in enumerate(response_list):
121 if not isinstance(redirect_response, Exception):
122 results[url_to_resolve_index[i]]['url'] = redirect_response.headers['location']
123
124 # get number_of_results
125 try:
126 result_len_container = "".join(eval_xpath(dom, '//span[@class="sb_count"]//text()'))
127 if "-" in result_len_container:
128
129 # Remove the part "from-to" for paginated request ...
130 result_len_container = result_len_container[result_len_container.find("-") * 2 + 2 :]
131
132 result_len_container = re.sub('[^0-9]', '', result_len_container)
133
134 if len(result_len_container) > 0:
135 result_len = int(result_len_container)
136
137 except Exception as e: # pylint: disable=broad-except
138 logger.debug('result error :\n%s', e)
139
140 if result_len and _get_offset_from_pageno(resp.search_params.get("pageno", 0)) > result_len:
141 return []
142
143 results.append({'number_of_results': result_len})
144 return results
145
146
147 # get supported languages from their site
148 def _fetch_supported_languages(resp):
149
150 lang_tags = set()
151
152 dom = html.fromstring(resp.text)
153 lang_links = eval_xpath(dom, '//div[@id="language-section"]//li')
154
155 for _li in lang_links:
156
157 href = eval_xpath(_li, './/@href')[0]
158 (_scheme, _netloc, _path, _params, query, _fragment) = urlparse(href)
159 query = parse_qs(query, keep_blank_values=True)
160
161 # fmt: off
162 setlang = query.get('setlang', [None, ])[0]
163 # example: 'mn-Cyrl-MN' --> '['mn', 'Cyrl-MN']
164 lang, nation = (setlang.split('-', maxsplit=1) + [None,])[:2] # fmt: skip
165 # fmt: on
166
167 tag = lang + '-' + nation if nation else lang
168 lang_tags.add(tag)
169
170 return list(lang_tags)
171
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/engines/bing.py b/searx/engines/bing.py
--- a/searx/engines/bing.py
+++ b/searx/engines/bing.py
@@ -4,6 +4,7 @@
- https://github.com/searx/searx/issues/2019#issuecomment-648227442
"""
+# pylint: disable=too-many-branches
import re
from urllib.parse import urlencode, urlparse, parse_qs
@@ -74,7 +75,6 @@
def response(resp):
-
results = []
result_len = 0
@@ -84,12 +84,20 @@
url_to_resolve = []
url_to_resolve_index = []
- for i, result in enumerate(eval_xpath_list(dom, '//li[@class="b_algo"]')):
+ for i, result in enumerate(eval_xpath_list(dom, '//li[contains(@class, "b_algo")]')):
link = eval_xpath(result, './/h2/a')[0]
url = link.attrib.get('href')
title = extract_text(link)
- content = extract_text(eval_xpath(result, './/p'))
+
+ # Make sure that the element is free of <a href> links and <span class='algoSlug_icon'>
+ content = eval_xpath(result, '(.//p)[1]')
+ for p in content:
+ for e in p.xpath('.//a'):
+ e.getparent().remove(e)
+ for e in p.xpath('.//span[@class="algoSlug_icon"]'):
+ e.getparent().remove(e)
+ content = extract_text(content)
# get the real URL either using the URL shown to user or following the Bing URL
if url.startswith('https://www.bing.com/ck/a?'):
| {"golden_diff": "diff --git a/searx/engines/bing.py b/searx/engines/bing.py\n--- a/searx/engines/bing.py\n+++ b/searx/engines/bing.py\n@@ -4,6 +4,7 @@\n \n - https://github.com/searx/searx/issues/2019#issuecomment-648227442\n \"\"\"\n+# pylint: disable=too-many-branches\n \n import re\n from urllib.parse import urlencode, urlparse, parse_qs\n@@ -74,7 +75,6 @@\n \n \n def response(resp):\n-\n results = []\n result_len = 0\n \n@@ -84,12 +84,20 @@\n \n url_to_resolve = []\n url_to_resolve_index = []\n- for i, result in enumerate(eval_xpath_list(dom, '//li[@class=\"b_algo\"]')):\n+ for i, result in enumerate(eval_xpath_list(dom, '//li[contains(@class, \"b_algo\")]')):\n \n link = eval_xpath(result, './/h2/a')[0]\n url = link.attrib.get('href')\n title = extract_text(link)\n- content = extract_text(eval_xpath(result, './/p'))\n+\n+ # Make sure that the element is free of <a href> links and <span class='algoSlug_icon'>\n+ content = eval_xpath(result, '(.//p)[1]')\n+ for p in content:\n+ for e in p.xpath('.//a'):\n+ e.getparent().remove(e)\n+ for e in p.xpath('.//span[@class=\"algoSlug_icon\"]'):\n+ e.getparent().remove(e)\n+ content = extract_text(content)\n \n # get the real URL either using the URL shown to user or following the Bing URL\n if url.startswith('https://www.bing.com/ck/a?'):\n", "issue": "Bug: bing sometimes missing first result\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\nRepository: https://github.com/searxng/searxng\r\nBranch: master\r\nVersion: 2022.12.30-647a0aa9\r\n\r\n\r\n**How did you install SearXNG?**\r\nscript, also occurs in some public instances\r\n**What happened?**\r\nBing is sometimes missing the first result.\r\n\r\n**How To Reproduce**\r\n`!bi pontiac aztek`\r\n\r\nNo Wikipedia article returned, 9 results are listed.\r\n\r\n`!bi pontiac aztek wiki`\r\n\r\nMultiple Wikipedia articles are returned, none of which are the English article for the Pontiac Aztek. (The Simple English and Scots articles appear.) 9 results are listed.\r\n\r\n**Expected behavior**\r\nThe expected first result is the Wikipedia article for Pontiac Aztek. Bing.com returns 10 results on the first page, so SearXNG should return 10 results on its first page.\r\n\r\n**Screenshots & Logs**\r\n\r\n\r\n\r\n**Additional context**\r\nIt seems consistent for specific search terms, but it doesn't always happen. 
My suspicion is that when the first result is fancy, such as a Wikipedia infobox, SearXNG doesn't see it.\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Bing (Web)\n\n- https://github.com/searx/searx/issues/2019#issuecomment-648227442\n\"\"\"\n\nimport re\nfrom urllib.parse import urlencode, urlparse, parse_qs\nfrom lxml import html\nfrom searx.utils import eval_xpath, extract_text, eval_xpath_list, match_language\nfrom searx.network import multi_requests, Request\n\nabout = {\n \"website\": 'https://www.bing.com',\n \"wikidata_id\": 'Q182496',\n \"official_api_documentation\": 'https://www.microsoft.com/en-us/bing/apis/bing-web-search-api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\n# engine dependent config\ncategories = ['general', 'web']\npaging = True\ntime_range_support = False\nsafesearch = False\nsend_accept_language_header = True\nsupported_languages_url = 'https://www.bing.com/account/general'\nlanguage_aliases = {}\n\n# search-url\nbase_url = 'https://www.bing.com/'\n\n# initial query: https://www.bing.com/search?q=foo&search=&form=QBLH\ninital_query = 'search?{query}&search=&form=QBLH'\n\n# following queries: https://www.bing.com/search?q=foo&search=&first=11&FORM=PERE\npage_query = 'search?{query}&search=&first={offset}&FORM=PERE'\n\n\ndef _get_offset_from_pageno(pageno):\n return (pageno - 1) * 10 + 1\n\n\ndef request(query, params):\n\n offset = _get_offset_from_pageno(params.get('pageno', 1))\n\n # logger.debug(\"params['pageno'] --> %s\", params.get('pageno'))\n # logger.debug(\" offset --> %s\", offset)\n\n search_string = page_query\n if offset == 1:\n search_string = inital_query\n\n if params['language'] == 'all':\n lang = 'EN'\n else:\n lang = match_language(params['language'], supported_languages, language_aliases)\n\n query = 'language:{} {}'.format(lang.split('-')[0].upper(), query)\n\n search_path = search_string.format(query=urlencode({'q': query}), offset=offset)\n\n if offset > 1:\n referer = base_url + inital_query.format(query=urlencode({'q': query}))\n params['headers']['Referer'] = referer\n logger.debug(\"headers.Referer --> %s\", referer)\n\n params['url'] = base_url + search_path\n params['headers']['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'\n return params\n\n\ndef response(resp):\n\n results = []\n result_len = 0\n\n dom = html.fromstring(resp.text)\n\n # parse results again if nothing is found yet\n\n url_to_resolve = []\n url_to_resolve_index = []\n for i, result in enumerate(eval_xpath_list(dom, '//li[@class=\"b_algo\"]')):\n\n link = eval_xpath(result, './/h2/a')[0]\n url = link.attrib.get('href')\n title = extract_text(link)\n content = extract_text(eval_xpath(result, './/p'))\n\n # get the real URL either using the URL shown to user or following the Bing URL\n if url.startswith('https://www.bing.com/ck/a?'):\n url_cite = extract_text(eval_xpath(result, './/div[@class=\"b_attribution\"]/cite'))\n # Bing can shorten the URL either at the end or in the middle of the string\n if (\n url_cite.startswith('https://')\n and '\u2026' not in url_cite\n and '...' 
not in url_cite\n and '\u203a' not in url_cite\n ):\n # no need for an additional HTTP request\n url = url_cite\n else:\n # resolve the URL with an additional HTTP request\n url_to_resolve.append(url.replace('&ntb=1', '&ntb=F'))\n url_to_resolve_index.append(i)\n url = None # remove the result if the HTTP Bing redirect raise an exception\n\n # append result\n results.append({'url': url, 'title': title, 'content': content})\n\n # resolve all Bing redirections in parallel\n request_list = [\n Request.get(u, allow_redirects=False, headers=resp.search_params['headers']) for u in url_to_resolve\n ]\n response_list = multi_requests(request_list)\n for i, redirect_response in enumerate(response_list):\n if not isinstance(redirect_response, Exception):\n results[url_to_resolve_index[i]]['url'] = redirect_response.headers['location']\n\n # get number_of_results\n try:\n result_len_container = \"\".join(eval_xpath(dom, '//span[@class=\"sb_count\"]//text()'))\n if \"-\" in result_len_container:\n\n # Remove the part \"from-to\" for paginated request ...\n result_len_container = result_len_container[result_len_container.find(\"-\") * 2 + 2 :]\n\n result_len_container = re.sub('[^0-9]', '', result_len_container)\n\n if len(result_len_container) > 0:\n result_len = int(result_len_container)\n\n except Exception as e: # pylint: disable=broad-except\n logger.debug('result error :\\n%s', e)\n\n if result_len and _get_offset_from_pageno(resp.search_params.get(\"pageno\", 0)) > result_len:\n return []\n\n results.append({'number_of_results': result_len})\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n\n lang_tags = set()\n\n dom = html.fromstring(resp.text)\n lang_links = eval_xpath(dom, '//div[@id=\"language-section\"]//li')\n\n for _li in lang_links:\n\n href = eval_xpath(_li, './/@href')[0]\n (_scheme, _netloc, _path, _params, query, _fragment) = urlparse(href)\n query = parse_qs(query, keep_blank_values=True)\n\n # fmt: off\n setlang = query.get('setlang', [None, ])[0]\n # example: 'mn-Cyrl-MN' --> '['mn', 'Cyrl-MN']\n lang, nation = (setlang.split('-', maxsplit=1) + [None,])[:2] # fmt: skip\n # fmt: on\n\n tag = lang + '-' + nation if nation else lang\n lang_tags.add(tag)\n\n return list(lang_tags)\n", "path": "searx/engines/bing.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Bing (Web)\n\n- https://github.com/searx/searx/issues/2019#issuecomment-648227442\n\"\"\"\n# pylint: disable=too-many-branches\n\nimport re\nfrom urllib.parse import urlencode, urlparse, parse_qs\nfrom lxml import html\nfrom searx.utils import eval_xpath, extract_text, eval_xpath_list, match_language\nfrom searx.network import multi_requests, Request\n\nabout = {\n \"website\": 'https://www.bing.com',\n \"wikidata_id\": 'Q182496',\n \"official_api_documentation\": 'https://www.microsoft.com/en-us/bing/apis/bing-web-search-api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\n# engine dependent config\ncategories = ['general', 'web']\npaging = True\ntime_range_support = False\nsafesearch = False\nsend_accept_language_header = True\nsupported_languages_url = 'https://www.bing.com/account/general'\nlanguage_aliases = {}\n\n# search-url\nbase_url = 'https://www.bing.com/'\n\n# initial query: https://www.bing.com/search?q=foo&search=&form=QBLH\ninital_query = 'search?{query}&search=&form=QBLH'\n\n# following queries: 
https://www.bing.com/search?q=foo&search=&first=11&FORM=PERE\npage_query = 'search?{query}&search=&first={offset}&FORM=PERE'\n\n\ndef _get_offset_from_pageno(pageno):\n return (pageno - 1) * 10 + 1\n\n\ndef request(query, params):\n\n offset = _get_offset_from_pageno(params.get('pageno', 1))\n\n # logger.debug(\"params['pageno'] --> %s\", params.get('pageno'))\n # logger.debug(\" offset --> %s\", offset)\n\n search_string = page_query\n if offset == 1:\n search_string = inital_query\n\n if params['language'] == 'all':\n lang = 'EN'\n else:\n lang = match_language(params['language'], supported_languages, language_aliases)\n\n query = 'language:{} {}'.format(lang.split('-')[0].upper(), query)\n\n search_path = search_string.format(query=urlencode({'q': query}), offset=offset)\n\n if offset > 1:\n referer = base_url + inital_query.format(query=urlencode({'q': query}))\n params['headers']['Referer'] = referer\n logger.debug(\"headers.Referer --> %s\", referer)\n\n params['url'] = base_url + search_path\n params['headers']['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'\n return params\n\n\ndef response(resp):\n results = []\n result_len = 0\n\n dom = html.fromstring(resp.text)\n\n # parse results again if nothing is found yet\n\n url_to_resolve = []\n url_to_resolve_index = []\n for i, result in enumerate(eval_xpath_list(dom, '//li[contains(@class, \"b_algo\")]')):\n\n link = eval_xpath(result, './/h2/a')[0]\n url = link.attrib.get('href')\n title = extract_text(link)\n\n # Make sure that the element is free of <a href> links and <span class='algoSlug_icon'>\n content = eval_xpath(result, '(.//p)[1]')\n for p in content:\n for e in p.xpath('.//a'):\n e.getparent().remove(e)\n for e in p.xpath('.//span[@class=\"algoSlug_icon\"]'):\n e.getparent().remove(e)\n content = extract_text(content)\n\n # get the real URL either using the URL shown to user or following the Bing URL\n if url.startswith('https://www.bing.com/ck/a?'):\n url_cite = extract_text(eval_xpath(result, './/div[@class=\"b_attribution\"]/cite'))\n # Bing can shorten the URL either at the end or in the middle of the string\n if (\n url_cite.startswith('https://')\n and '\u2026' not in url_cite\n and '...' 
not in url_cite\n and '\u203a' not in url_cite\n ):\n # no need for an additional HTTP request\n url = url_cite\n else:\n # resolve the URL with an additional HTTP request\n url_to_resolve.append(url.replace('&ntb=1', '&ntb=F'))\n url_to_resolve_index.append(i)\n url = None # remove the result if the HTTP Bing redirect raise an exception\n\n # append result\n results.append({'url': url, 'title': title, 'content': content})\n\n # resolve all Bing redirections in parallel\n request_list = [\n Request.get(u, allow_redirects=False, headers=resp.search_params['headers']) for u in url_to_resolve\n ]\n response_list = multi_requests(request_list)\n for i, redirect_response in enumerate(response_list):\n if not isinstance(redirect_response, Exception):\n results[url_to_resolve_index[i]]['url'] = redirect_response.headers['location']\n\n # get number_of_results\n try:\n result_len_container = \"\".join(eval_xpath(dom, '//span[@class=\"sb_count\"]//text()'))\n if \"-\" in result_len_container:\n\n # Remove the part \"from-to\" for paginated request ...\n result_len_container = result_len_container[result_len_container.find(\"-\") * 2 + 2 :]\n\n result_len_container = re.sub('[^0-9]', '', result_len_container)\n\n if len(result_len_container) > 0:\n result_len = int(result_len_container)\n\n except Exception as e: # pylint: disable=broad-except\n logger.debug('result error :\\n%s', e)\n\n if result_len and _get_offset_from_pageno(resp.search_params.get(\"pageno\", 0)) > result_len:\n return []\n\n results.append({'number_of_results': result_len})\n return results\n\n\n# get supported languages from their site\ndef _fetch_supported_languages(resp):\n\n lang_tags = set()\n\n dom = html.fromstring(resp.text)\n lang_links = eval_xpath(dom, '//div[@id=\"language-section\"]//li')\n\n for _li in lang_links:\n\n href = eval_xpath(_li, './/@href')[0]\n (_scheme, _netloc, _path, _params, query, _fragment) = urlparse(href)\n query = parse_qs(query, keep_blank_values=True)\n\n # fmt: off\n setlang = query.get('setlang', [None, ])[0]\n # example: 'mn-Cyrl-MN' --> '['mn', 'Cyrl-MN']\n lang, nation = (setlang.split('-', maxsplit=1) + [None,])[:2] # fmt: skip\n # fmt: on\n\n tag = lang + '-' + nation if nation else lang\n lang_tags.add(tag)\n\n return list(lang_tags)\n", "path": "searx/engines/bing.py"}]} | 2,577 | 403 |
gh_patches_debug_24623 | rasdani/github-patches | git_diff | opsdroid__opsdroid-13 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Manage logging properly
When a function calls `subprocess.Popen()`, the logging seems to reset to default and print to `stdout` and `stderr`.
This is probably because logging hasn't been configured properly. The `opsdroid` object should probably handle this as it is accessible almost everywhere.
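A minimal sketch of what central configuration could look like (the function name and defaults are assumptions, not opsdroid's real API): install a formatted handler on the root logger once at startup, so the module-level `logging.debug(...)` calls used throughout the loader and connectors keep writing to the same place instead of falling back to bare `stderr` output.

```python
# Sketch only: configure the root logger once from the core object at startup.
import logging


def setup_logging(logfile="opsdroid.log", level=logging.DEBUG):
    """Attach a single formatted handler to the root logger."""
    rootlogger = logging.getLogger()
    rootlogger.setLevel(level)

    formatter = logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s: %(message)s")

    handler = logging.FileHandler(logfile)
    handler.setFormatter(formatter)
    rootlogger.addHandler(handler)
```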
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/loader.py`
Content:
```
1 """Class for loading in modules to OpsDroid."""
2
3 import logging
4 import os
5 import shutil
6 import subprocess
7 import importlib
8 import pip
9 import yaml
10 from opsdroid.const import (
11 DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULE_BRANCH)
12
13
14 def import_module(config):
15 """Import module namespace as variable and return it."""
16 try:
17 module = importlib.import_module(
18 config["path"] + "." + config["name"])
19 logging.debug("Loading " + config["type"] + ": " + config["name"])
20 return module
21 except ImportError as error:
22 logging.error("Failed to load " + config["type"] +
23 " " + config["name"])
24 logging.error(error)
25 return None
26
27
28 def check_cache(config):
29 """Remove module if 'no-cache' set in config."""
30 if "no-cache" in config \
31 and config["no-cache"] \
32 and os.path.isdir(config["install_path"]):
33 logging.debug("'no-cache' set, removing " + config["install_path"])
34 shutil.rmtree(config["install_path"])
35
36
37 def build_module_path(path_type, config):
38 """Generate the module path from name and type."""
39 if path_type == "import":
40 return MODULES_DIRECTORY + "." + config["type"] + "." + config["name"]
41 elif path_type == "install":
42 return MODULES_DIRECTORY + "/" + config["type"] + "/" + config["name"]
43
44
45 def git_clone(git_url, install_path, branch):
46 """Clone a git repo to a location and wait for finish."""
47 process = subprocess.Popen(["git", "clone", "-b", branch,
48 git_url, install_path], shell=False,
49 stdout=subprocess.PIPE,
50 stderr=subprocess.PIPE)
51 process.wait()
52
53
54 class Loader:
55 """Class to load in config and modules."""
56
57 def __init__(self, opsdroid):
58 """Setup object with opsdroid instance."""
59 self.opsdroid = opsdroid
60 logging.debug("Loaded loader")
61
62 def load_config_file(self, config_path):
63 """Load a yaml config file from path."""
64 if not os.path.isfile(config_path):
65 self.opsdroid.critical("Config file " + config_path +
66 " not found", 1)
67
68 try:
69 with open(config_path, 'r') as stream:
70 return yaml.load(stream)
71 except yaml.YAMLError as error:
72 self.opsdroid.critical(error, 1)
73 except FileNotFoundError as error:
74 self.opsdroid.critical(str(error), 1)
75
76 def load_config(self, config):
77 """Load all module types based on config."""
78 logging.debug("Loading modules from config")
79
80 if 'databases' in config.keys():
81 self.opsdroid.start_databases(
82 self._load_modules('database', config['databases']))
83 else:
84 logging.warning("No databases in configuration")
85
86 if 'skills' in config.keys():
87 self._setup_modules(
88 self._load_modules('skill', config['skills'])
89 )
90 else:
91 self.opsdroid.critical(
92 "No skills in configuration, at least 1 required", 1)
93
94 if 'connectors' in config.keys():
95 self.opsdroid.start_connectors(
96 self._load_modules('connector', config['connectors']))
97 else:
98 self.opsdroid.critical(
99 "No connectors in configuration, at least 1 required", 1)
100
101 def _load_modules(self, modules_type, modules):
102 """Install and load modules."""
103 logging.debug("Loading " + modules_type + " modules")
104 loaded_modules = []
105
106 # Create modules directory if doesn't exist
107 if not os.path.isdir(MODULES_DIRECTORY):
108 os.makedirs(MODULES_DIRECTORY)
109
110 for module_name in modules.keys():
111
112 # Set up module config
113 config = modules[module_name]
114 config = {} if config is None else config
115 config["name"] = module_name
116 config["type"] = modules_type
117 config["path"] = build_module_path("import", config)
118 config["install_path"] = build_module_path("install", config)
119 if "branch" not in config:
120 config["branch"] = DEFAULT_MODULE_BRANCH
121
122 # Remove module for reinstall if no-cache set
123 check_cache(config)
124
125 # Install module
126 self._install_module(config)
127
128 # Import module
129 module = import_module(config)
130 if module is not None:
131 loaded_modules.append({
132 "module": module,
133 "config": config})
134
135 return loaded_modules
136
137 def _setup_modules(self, modules):
138 """Call the setup function on the passed in modules."""
139 for module in modules:
140 module["module"].setup(self.opsdroid)
141
142 def _install_module(self, config):
143 # pylint: disable=R0201
144 """Install a module."""
145 logging.debug("Installing " + config["name"])
146
147 if os.path.isdir(config["install_path"]):
148 # TODO Allow for updating or reinstalling of modules
149 logging.debug("Module " + config["name"] +
150 " already installed, skipping")
151 else:
152 if config is not None and "repo" in config:
153 git_url = config["repo"]
154 else:
155 git_url = DEFAULT_GIT_URL + config["type"] + \
156 "-" + config["name"] + ".git"
157
158 if any(prefix in git_url for prefix in ["http", "https", "ssh"]):
159 # TODO Test if url or ssh path exists
160 # TODO Handle github authentication
161 git_clone(git_url, config["install_path"], config["branch"])
162 else:
163 if os.path.isdir(git_url):
164 git_clone(git_url, config["install_path"],
165 config["branch"])
166 else:
167 logging.debug("Could not find local git repo " + git_url)
168
169 if os.path.isdir(config["install_path"]):
170 logging.debug("Installed " + config["name"] +
171 " to " + config["install_path"])
172 else:
173 logging.debug("Install of " + config["name"] + " failed ")
174
175 # Install module dependancies
176 if os.path.isfile(config["install_path"] + "/requirements.txt"):
177 pip.main(["install", "-r", config["install_path"] +
178 "/requirements.txt"])
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opsdroid/loader.py b/opsdroid/loader.py
--- a/opsdroid/loader.py
+++ b/opsdroid/loader.py
@@ -5,7 +5,6 @@
import shutil
import subprocess
import importlib
-import pip
import yaml
from opsdroid.const import (
DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULE_BRANCH)
@@ -51,6 +50,19 @@
process.wait()
+def pip_install_deps(requirements_path):
+ """Pip install a requirements.txt file and wait for finish."""
+ process = subprocess.Popen(["pip", "install", "-r", requirements_path],
+ shell=False,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE)
+ for output in process.communicate():
+ if output != "":
+ for line in output.splitlines():
+ logging.debug(str(line).strip())
+ process.wait()
+
+
class Loader:
"""Class to load in config and modules."""
@@ -174,5 +186,4 @@
# Install module dependancies
if os.path.isfile(config["install_path"] + "/requirements.txt"):
- pip.main(["install", "-r", config["install_path"] +
- "/requirements.txt"])
+ pip_install_deps(config["install_path"] + "/requirements.txt")
| {"golden_diff": "diff --git a/opsdroid/loader.py b/opsdroid/loader.py\n--- a/opsdroid/loader.py\n+++ b/opsdroid/loader.py\n@@ -5,7 +5,6 @@\n import shutil\n import subprocess\n import importlib\n-import pip\n import yaml\n from opsdroid.const import (\n DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULE_BRANCH)\n@@ -51,6 +50,19 @@\n process.wait()\n \n \n+def pip_install_deps(requirements_path):\n+ \"\"\"Pip install a requirements.txt file and wait for finish.\"\"\"\n+ process = subprocess.Popen([\"pip\", \"install\", \"-r\", requirements_path],\n+ shell=False,\n+ stdout=subprocess.PIPE,\n+ stderr=subprocess.PIPE)\n+ for output in process.communicate():\n+ if output != \"\":\n+ for line in output.splitlines():\n+ logging.debug(str(line).strip())\n+ process.wait()\n+\n+\n class Loader:\n \"\"\"Class to load in config and modules.\"\"\"\n \n@@ -174,5 +186,4 @@\n \n # Install module dependancies\n if os.path.isfile(config[\"install_path\"] + \"/requirements.txt\"):\n- pip.main([\"install\", \"-r\", config[\"install_path\"] +\n- \"/requirements.txt\"])\n+ pip_install_deps(config[\"install_path\"] + \"/requirements.txt\")\n", "issue": "Manage logging properly\nWhen a function calls `subprocess.Popen()` the logging seems to reset to default and print to `stdout` and `stderror`. \n\nThis is probably because logging hasn't been configured properly. The `opsdroid` object should probably handle this as it is accessible almost everywhere.\n\n", "before_files": [{"content": "\"\"\"Class for loading in modules to OpsDroid.\"\"\"\n\nimport logging\nimport os\nimport shutil\nimport subprocess\nimport importlib\nimport pip\nimport yaml\nfrom opsdroid.const import (\n DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULE_BRANCH)\n\n\ndef import_module(config):\n \"\"\"Import module namespace as variable and return it.\"\"\"\n try:\n module = importlib.import_module(\n config[\"path\"] + \".\" + config[\"name\"])\n logging.debug(\"Loading \" + config[\"type\"] + \": \" + config[\"name\"])\n return module\n except ImportError as error:\n logging.error(\"Failed to load \" + config[\"type\"] +\n \" \" + config[\"name\"])\n logging.error(error)\n return None\n\n\ndef check_cache(config):\n \"\"\"Remove module if 'no-cache' set in config.\"\"\"\n if \"no-cache\" in config \\\n and config[\"no-cache\"] \\\n and os.path.isdir(config[\"install_path\"]):\n logging.debug(\"'no-cache' set, removing \" + config[\"install_path\"])\n shutil.rmtree(config[\"install_path\"])\n\n\ndef build_module_path(path_type, config):\n \"\"\"Generate the module path from name and type.\"\"\"\n if path_type == \"import\":\n return MODULES_DIRECTORY + \".\" + config[\"type\"] + \".\" + config[\"name\"]\n elif path_type == \"install\":\n return MODULES_DIRECTORY + \"/\" + config[\"type\"] + \"/\" + config[\"name\"]\n\n\ndef git_clone(git_url, install_path, branch):\n \"\"\"Clone a git repo to a location and wait for finish.\"\"\"\n process = subprocess.Popen([\"git\", \"clone\", \"-b\", branch,\n git_url, install_path], shell=False,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n process.wait()\n\n\nclass Loader:\n \"\"\"Class to load in config and modules.\"\"\"\n\n def __init__(self, opsdroid):\n \"\"\"Setup object with opsdroid instance.\"\"\"\n self.opsdroid = opsdroid\n logging.debug(\"Loaded loader\")\n\n def load_config_file(self, config_path):\n \"\"\"Load a yaml config file from path.\"\"\"\n if not os.path.isfile(config_path):\n self.opsdroid.critical(\"Config file \" + config_path +\n \" not found\", 1)\n\n try:\n with 
open(config_path, 'r') as stream:\n return yaml.load(stream)\n except yaml.YAMLError as error:\n self.opsdroid.critical(error, 1)\n except FileNotFoundError as error:\n self.opsdroid.critical(str(error), 1)\n\n def load_config(self, config):\n \"\"\"Load all module types based on config.\"\"\"\n logging.debug(\"Loading modules from config\")\n\n if 'databases' in config.keys():\n self.opsdroid.start_databases(\n self._load_modules('database', config['databases']))\n else:\n logging.warning(\"No databases in configuration\")\n\n if 'skills' in config.keys():\n self._setup_modules(\n self._load_modules('skill', config['skills'])\n )\n else:\n self.opsdroid.critical(\n \"No skills in configuration, at least 1 required\", 1)\n\n if 'connectors' in config.keys():\n self.opsdroid.start_connectors(\n self._load_modules('connector', config['connectors']))\n else:\n self.opsdroid.critical(\n \"No connectors in configuration, at least 1 required\", 1)\n\n def _load_modules(self, modules_type, modules):\n \"\"\"Install and load modules.\"\"\"\n logging.debug(\"Loading \" + modules_type + \" modules\")\n loaded_modules = []\n\n # Create modules directory if doesn't exist\n if not os.path.isdir(MODULES_DIRECTORY):\n os.makedirs(MODULES_DIRECTORY)\n\n for module_name in modules.keys():\n\n # Set up module config\n config = modules[module_name]\n config = {} if config is None else config\n config[\"name\"] = module_name\n config[\"type\"] = modules_type\n config[\"path\"] = build_module_path(\"import\", config)\n config[\"install_path\"] = build_module_path(\"install\", config)\n if \"branch\" not in config:\n config[\"branch\"] = DEFAULT_MODULE_BRANCH\n\n # Remove module for reinstall if no-cache set\n check_cache(config)\n\n # Install module\n self._install_module(config)\n\n # Import module\n module = import_module(config)\n if module is not None:\n loaded_modules.append({\n \"module\": module,\n \"config\": config})\n\n return loaded_modules\n\n def _setup_modules(self, modules):\n \"\"\"Call the setup function on the passed in modules.\"\"\"\n for module in modules:\n module[\"module\"].setup(self.opsdroid)\n\n def _install_module(self, config):\n # pylint: disable=R0201\n \"\"\"Install a module.\"\"\"\n logging.debug(\"Installing \" + config[\"name\"])\n\n if os.path.isdir(config[\"install_path\"]):\n # TODO Allow for updating or reinstalling of modules\n logging.debug(\"Module \" + config[\"name\"] +\n \" already installed, skipping\")\n else:\n if config is not None and \"repo\" in config:\n git_url = config[\"repo\"]\n else:\n git_url = DEFAULT_GIT_URL + config[\"type\"] + \\\n \"-\" + config[\"name\"] + \".git\"\n\n if any(prefix in git_url for prefix in [\"http\", \"https\", \"ssh\"]):\n # TODO Test if url or ssh path exists\n # TODO Handle github authentication\n git_clone(git_url, config[\"install_path\"], config[\"branch\"])\n else:\n if os.path.isdir(git_url):\n git_clone(git_url, config[\"install_path\"],\n config[\"branch\"])\n else:\n logging.debug(\"Could not find local git repo \" + git_url)\n\n if os.path.isdir(config[\"install_path\"]):\n logging.debug(\"Installed \" + config[\"name\"] +\n \" to \" + config[\"install_path\"])\n else:\n logging.debug(\"Install of \" + config[\"name\"] + \" failed \")\n\n # Install module dependancies\n if os.path.isfile(config[\"install_path\"] + \"/requirements.txt\"):\n pip.main([\"install\", \"-r\", config[\"install_path\"] +\n \"/requirements.txt\"])\n", "path": "opsdroid/loader.py"}], "after_files": [{"content": "\"\"\"Class for loading in 
modules to OpsDroid.\"\"\"\n\nimport logging\nimport os\nimport shutil\nimport subprocess\nimport importlib\nimport yaml\nfrom opsdroid.const import (\n DEFAULT_GIT_URL, MODULES_DIRECTORY, DEFAULT_MODULE_BRANCH)\n\n\ndef import_module(config):\n \"\"\"Import module namespace as variable and return it.\"\"\"\n try:\n module = importlib.import_module(\n config[\"path\"] + \".\" + config[\"name\"])\n logging.debug(\"Loading \" + config[\"type\"] + \": \" + config[\"name\"])\n return module\n except ImportError as error:\n logging.error(\"Failed to load \" + config[\"type\"] +\n \" \" + config[\"name\"])\n logging.error(error)\n return None\n\n\ndef check_cache(config):\n \"\"\"Remove module if 'no-cache' set in config.\"\"\"\n if \"no-cache\" in config \\\n and config[\"no-cache\"] \\\n and os.path.isdir(config[\"install_path\"]):\n logging.debug(\"'no-cache' set, removing \" + config[\"install_path\"])\n shutil.rmtree(config[\"install_path\"])\n\n\ndef build_module_path(path_type, config):\n \"\"\"Generate the module path from name and type.\"\"\"\n if path_type == \"import\":\n return MODULES_DIRECTORY + \".\" + config[\"type\"] + \".\" + config[\"name\"]\n elif path_type == \"install\":\n return MODULES_DIRECTORY + \"/\" + config[\"type\"] + \"/\" + config[\"name\"]\n\n\ndef git_clone(git_url, install_path, branch):\n \"\"\"Clone a git repo to a location and wait for finish.\"\"\"\n process = subprocess.Popen([\"git\", \"clone\", \"-b\", branch,\n git_url, install_path], shell=False,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n process.wait()\n\n\ndef pip_install_deps(requirements_path):\n \"\"\"Pip install a requirements.txt file and wait for finish.\"\"\"\n process = subprocess.Popen([\"pip\", \"install\", \"-r\", requirements_path],\n shell=False,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n for output in process.communicate():\n if output != \"\":\n for line in output.splitlines():\n logging.debug(str(line).strip())\n process.wait()\n\n\nclass Loader:\n \"\"\"Class to load in config and modules.\"\"\"\n\n def __init__(self, opsdroid):\n \"\"\"Setup object with opsdroid instance.\"\"\"\n self.opsdroid = opsdroid\n logging.debug(\"Loaded loader\")\n\n def load_config_file(self, config_path):\n \"\"\"Load a yaml config file from path.\"\"\"\n if not os.path.isfile(config_path):\n self.opsdroid.critical(\"Config file \" + config_path +\n \" not found\", 1)\n\n try:\n with open(config_path, 'r') as stream:\n return yaml.load(stream)\n except yaml.YAMLError as error:\n self.opsdroid.critical(error, 1)\n except FileNotFoundError as error:\n self.opsdroid.critical(str(error), 1)\n\n def load_config(self, config):\n \"\"\"Load all module types based on config.\"\"\"\n logging.debug(\"Loading modules from config\")\n\n if 'databases' in config.keys():\n self.opsdroid.start_databases(\n self._load_modules('database', config['databases']))\n else:\n logging.warning(\"No databases in configuration\")\n\n if 'skills' in config.keys():\n self._setup_modules(\n self._load_modules('skill', config['skills'])\n )\n else:\n self.opsdroid.critical(\n \"No skills in configuration, at least 1 required\", 1)\n\n if 'connectors' in config.keys():\n self.opsdroid.start_connectors(\n self._load_modules('connector', config['connectors']))\n else:\n self.opsdroid.critical(\n \"No connectors in configuration, at least 1 required\", 1)\n\n def _load_modules(self, modules_type, modules):\n \"\"\"Install and load modules.\"\"\"\n logging.debug(\"Loading \" + modules_type + \" modules\")\n 
loaded_modules = []\n\n # Create modules directory if doesn't exist\n if not os.path.isdir(MODULES_DIRECTORY):\n os.makedirs(MODULES_DIRECTORY)\n\n for module_name in modules.keys():\n\n # Set up module config\n config = modules[module_name]\n config = {} if config is None else config\n config[\"name\"] = module_name\n config[\"type\"] = modules_type\n config[\"path\"] = build_module_path(\"import\", config)\n config[\"install_path\"] = build_module_path(\"install\", config)\n if \"branch\" not in config:\n config[\"branch\"] = DEFAULT_MODULE_BRANCH\n\n # Remove module for reinstall if no-cache set\n check_cache(config)\n\n # Install module\n self._install_module(config)\n\n # Import module\n module = import_module(config)\n if module is not None:\n loaded_modules.append({\n \"module\": module,\n \"config\": config})\n\n return loaded_modules\n\n def _setup_modules(self, modules):\n \"\"\"Call the setup function on the passed in modules.\"\"\"\n for module in modules:\n module[\"module\"].setup(self.opsdroid)\n\n def _install_module(self, config):\n # pylint: disable=R0201\n \"\"\"Install a module.\"\"\"\n logging.debug(\"Installing \" + config[\"name\"])\n\n if os.path.isdir(config[\"install_path\"]):\n # TODO Allow for updating or reinstalling of modules\n logging.debug(\"Module \" + config[\"name\"] +\n \" already installed, skipping\")\n else:\n if config is not None and \"repo\" in config:\n git_url = config[\"repo\"]\n else:\n git_url = DEFAULT_GIT_URL + config[\"type\"] + \\\n \"-\" + config[\"name\"] + \".git\"\n\n if any(prefix in git_url for prefix in [\"http\", \"https\", \"ssh\"]):\n # TODO Test if url or ssh path exists\n # TODO Handle github authentication\n git_clone(git_url, config[\"install_path\"], config[\"branch\"])\n else:\n if os.path.isdir(git_url):\n git_clone(git_url, config[\"install_path\"],\n config[\"branch\"])\n else:\n logging.debug(\"Could not find local git repo \" + git_url)\n\n if os.path.isdir(config[\"install_path\"]):\n logging.debug(\"Installed \" + config[\"name\"] +\n \" to \" + config[\"install_path\"])\n else:\n logging.debug(\"Install of \" + config[\"name\"] + \" failed \")\n\n # Install module dependancies\n if os.path.isfile(config[\"install_path\"] + \"/requirements.txt\"):\n pip_install_deps(config[\"install_path\"] + \"/requirements.txt\")\n", "path": "opsdroid/loader.py"}]} | 2,095 | 288 |
gh_patches_debug_7376 | rasdani/github-patches | git_diff | feast-dev__feast-2107 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
requested_features are not passed to online_read() from passthrough_provider
`OnlineStore.online_read()` has a parameter `requested_features`. Similarly, `PassthroughProvider.online_read()` also has a parameter `requested_features`.
`PassthroughProvider.online_read()` calls `OnlineStore.online_read()` but doesn't pass on the `requested_features` variable.
I am writing a new OnlineStore where I need the `requested_features` in `OnlineStore.online_read()`.
## Issue Detected
In [feast/infra/passthrough_provider.py#L90](https://github.com/feast-dev/feast/blob/2d3cea1d1485d0af1a35a7f30abfed475b58973a/sdk/python/feast/infra/passthrough_provider.py#L90)
```
def online_read(
self,
config: RepoConfig,
table: Union[FeatureTable, FeatureView],
entity_keys: List[EntityKeyProto],
requested_features: List[str] = None,
) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:
set_usage_attribute("provider", self.__class__.__name__)
result = self.online_store.online_read(config, table, entity_keys) # should also pass requested_features
return result
```
- Version: 0.15.1
- Platform: all
- Subsystem: all
## Possible Solution
In [feast/infra/passthrough_provider.py#L90](https://github.com/feast-dev/feast/blob/2d3cea1d1485d0af1a35a7f30abfed475b58973a/sdk/python/feast/infra/passthrough_provider.py#L90)
`result = self.online_store.online_read(config, table, entity_keys)`
change to
`result = self.online_store.online_read(config, table, entity_keys, requested_features)`
Note: This won't break any of the current tests since none of the Online store implementations use requested_features in the online_read() method.
If it was as per design, can someone explain why the variable `requested_features` was not passed on?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sdk/python/feast/infra/passthrough_provider.py`
Content:
```
1 from datetime import datetime
2 from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union
3
4 import pandas
5 import pyarrow as pa
6 from tqdm import tqdm
7
8 from feast.entity import Entity
9 from feast.feature_table import FeatureTable
10 from feast.feature_view import FeatureView
11 from feast.infra.offline_stores.offline_store import RetrievalJob
12 from feast.infra.offline_stores.offline_utils import get_offline_store_from_config
13 from feast.infra.online_stores.helpers import get_online_store_from_config
14 from feast.infra.provider import (
15 Provider,
16 _convert_arrow_to_proto,
17 _get_column_names,
18 _run_field_mapping,
19 )
20 from feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto
21 from feast.protos.feast.types.Value_pb2 import Value as ValueProto
22 from feast.registry import Registry
23 from feast.repo_config import RepoConfig
24 from feast.usage import RatioSampler, log_exceptions_and_usage, set_usage_attribute
25
26 DEFAULT_BATCH_SIZE = 10_000
27
28
29 class PassthroughProvider(Provider):
30 """
31 The Passthrough provider delegates all operations to the underlying online and offline stores.
32 """
33
34 def __init__(self, config: RepoConfig):
35 super().__init__(config)
36
37 self.repo_config = config
38 self.offline_store = get_offline_store_from_config(config.offline_store)
39 self.online_store = get_online_store_from_config(config.online_store)
40
41 def update_infra(
42 self,
43 project: str,
44 tables_to_delete: Sequence[Union[FeatureTable, FeatureView]],
45 tables_to_keep: Sequence[Union[FeatureTable, FeatureView]],
46 entities_to_delete: Sequence[Entity],
47 entities_to_keep: Sequence[Entity],
48 partial: bool,
49 ):
50 set_usage_attribute("provider", self.__class__.__name__)
51 self.online_store.update(
52 config=self.repo_config,
53 tables_to_delete=tables_to_delete,
54 tables_to_keep=tables_to_keep,
55 entities_to_keep=entities_to_keep,
56 entities_to_delete=entities_to_delete,
57 partial=partial,
58 )
59
60 def teardown_infra(
61 self,
62 project: str,
63 tables: Sequence[Union[FeatureTable, FeatureView]],
64 entities: Sequence[Entity],
65 ) -> None:
66 set_usage_attribute("provider", self.__class__.__name__)
67 self.online_store.teardown(self.repo_config, tables, entities)
68
69 def online_write_batch(
70 self,
71 config: RepoConfig,
72 table: Union[FeatureTable, FeatureView],
73 data: List[
74 Tuple[EntityKeyProto, Dict[str, ValueProto], datetime, Optional[datetime]]
75 ],
76 progress: Optional[Callable[[int], Any]],
77 ) -> None:
78 set_usage_attribute("provider", self.__class__.__name__)
79 self.online_store.online_write_batch(config, table, data, progress)
80
81 @log_exceptions_and_usage(sampler=RatioSampler(ratio=0.001))
82 def online_read(
83 self,
84 config: RepoConfig,
85 table: Union[FeatureTable, FeatureView],
86 entity_keys: List[EntityKeyProto],
87 requested_features: List[str] = None,
88 ) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:
89 set_usage_attribute("provider", self.__class__.__name__)
90 result = self.online_store.online_read(config, table, entity_keys)
91
92 return result
93
94 def ingest_df(
95 self, feature_view: FeatureView, entities: List[Entity], df: pandas.DataFrame,
96 ):
97 set_usage_attribute("provider", self.__class__.__name__)
98 table = pa.Table.from_pandas(df)
99
100 if feature_view.batch_source.field_mapping is not None:
101 table = _run_field_mapping(table, feature_view.batch_source.field_mapping)
102
103 join_keys = [entity.join_key for entity in entities]
104 rows_to_write = _convert_arrow_to_proto(table, feature_view, join_keys)
105
106 self.online_write_batch(
107 self.repo_config, feature_view, rows_to_write, progress=None
108 )
109
110 def materialize_single_feature_view(
111 self,
112 config: RepoConfig,
113 feature_view: FeatureView,
114 start_date: datetime,
115 end_date: datetime,
116 registry: Registry,
117 project: str,
118 tqdm_builder: Callable[[int], tqdm],
119 ) -> None:
120 set_usage_attribute("provider", self.__class__.__name__)
121
122 entities = []
123 for entity_name in feature_view.entities:
124 entities.append(registry.get_entity(entity_name, project))
125
126 (
127 join_key_columns,
128 feature_name_columns,
129 event_timestamp_column,
130 created_timestamp_column,
131 ) = _get_column_names(feature_view, entities)
132
133 offline_job = self.offline_store.pull_latest_from_table_or_query(
134 config=config,
135 data_source=feature_view.batch_source,
136 join_key_columns=join_key_columns,
137 feature_name_columns=feature_name_columns,
138 event_timestamp_column=event_timestamp_column,
139 created_timestamp_column=created_timestamp_column,
140 start_date=start_date,
141 end_date=end_date,
142 )
143
144 table = offline_job.to_arrow()
145
146 if feature_view.batch_source.field_mapping is not None:
147 table = _run_field_mapping(table, feature_view.batch_source.field_mapping)
148
149 join_keys = [entity.join_key for entity in entities]
150
151 with tqdm_builder(table.num_rows) as pbar:
152 for batch in table.to_batches(DEFAULT_BATCH_SIZE):
153 rows_to_write = _convert_arrow_to_proto(batch, feature_view, join_keys)
154 self.online_write_batch(
155 self.repo_config,
156 feature_view,
157 rows_to_write,
158 lambda x: pbar.update(x),
159 )
160
161 def get_historical_features(
162 self,
163 config: RepoConfig,
164 feature_views: List[FeatureView],
165 feature_refs: List[str],
166 entity_df: Union[pandas.DataFrame, str],
167 registry: Registry,
168 project: str,
169 full_feature_names: bool,
170 ) -> RetrievalJob:
171 set_usage_attribute("provider", self.__class__.__name__)
172
173 job = self.offline_store.get_historical_features(
174 config=config,
175 feature_views=feature_views,
176 feature_refs=feature_refs,
177 entity_df=entity_df,
178 registry=registry,
179 project=project,
180 full_feature_names=full_feature_names,
181 )
182 return job
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sdk/python/feast/infra/passthrough_provider.py b/sdk/python/feast/infra/passthrough_provider.py
--- a/sdk/python/feast/infra/passthrough_provider.py
+++ b/sdk/python/feast/infra/passthrough_provider.py
@@ -87,7 +87,9 @@
requested_features: List[str] = None,
) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:
set_usage_attribute("provider", self.__class__.__name__)
- result = self.online_store.online_read(config, table, entity_keys)
+ result = self.online_store.online_read(
+ config, table, entity_keys, requested_features
+ )
return result
| {"golden_diff": "diff --git a/sdk/python/feast/infra/passthrough_provider.py b/sdk/python/feast/infra/passthrough_provider.py\n--- a/sdk/python/feast/infra/passthrough_provider.py\n+++ b/sdk/python/feast/infra/passthrough_provider.py\n@@ -87,7 +87,9 @@\n requested_features: List[str] = None,\n ) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n- result = self.online_store.online_read(config, table, entity_keys)\n+ result = self.online_store.online_read(\n+ config, table, entity_keys, requested_features\n+ )\n \n return result\n", "issue": "requested_features are not passed to online_read() from passthrough_provider\n`OnlineStore.online_read()` has a parameter `requested_features`. Similarly, `PassthroughProvider.online_read()` also has a parameter `requested_features`.\r\n`PassthroughProvider.online_read()` calls `OnlineStore.online_read()` but doesn't pass on the `requested_features` variable.\r\nI am writing a new OnlineStore where I need the `requested_features` in `OnlineStore.online_read()`.\r\n\r\n## Issue Detected \r\nIn [feast/infra/passthrough_provider.py#L90](https://github.com/feast-dev/feast/blob/2d3cea1d1485d0af1a35a7f30abfed475b58973a/sdk/python/feast/infra/passthrough_provider.py#L90)\r\n```\r\ndef online_read(\r\n self,\r\n config: RepoConfig,\r\n table: Union[FeatureTable, FeatureView],\r\n entity_keys: List[EntityKeyProto],\r\n requested_features: List[str] = None,\r\n ) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:\r\n set_usage_attribute(\"provider\", self.__class__.__name__)\r\n result = self.online_store.online_read(config, table, entity_keys) # should also pass requested_features\r\n\r\n return result\r\n```\r\n\r\n- Version: 0.15.1\r\n- Platform: all\r\n- Subsystem: all\r\n\r\n## Possible Solution\r\n\r\nIn [feast/infra/passthrough_provider.py#L90](https://github.com/feast-dev/feast/blob/2d3cea1d1485d0af1a35a7f30abfed475b58973a/sdk/python/feast/infra/passthrough_provider.py#L90)\r\n`result = self.online_store.online_read(config, table, entity_keys)` \r\nchange to \r\n`result = self.online_store.online_read(config, table, entity_keys, requested_features)`\r\n\r\nNote: This won't break any of the current tests since none of the Online store implementation uses requested_features in online_read() method.\r\n\r\nIf it was as per design, can someone explain why the variable `requested_features` was not passed on? 
\r\n\n", "before_files": [{"content": "from datetime import datetime\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union\n\nimport pandas\nimport pyarrow as pa\nfrom tqdm import tqdm\n\nfrom feast.entity import Entity\nfrom feast.feature_table import FeatureTable\nfrom feast.feature_view import FeatureView\nfrom feast.infra.offline_stores.offline_store import RetrievalJob\nfrom feast.infra.offline_stores.offline_utils import get_offline_store_from_config\nfrom feast.infra.online_stores.helpers import get_online_store_from_config\nfrom feast.infra.provider import (\n Provider,\n _convert_arrow_to_proto,\n _get_column_names,\n _run_field_mapping,\n)\nfrom feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto\nfrom feast.protos.feast.types.Value_pb2 import Value as ValueProto\nfrom feast.registry import Registry\nfrom feast.repo_config import RepoConfig\nfrom feast.usage import RatioSampler, log_exceptions_and_usage, set_usage_attribute\n\nDEFAULT_BATCH_SIZE = 10_000\n\n\nclass PassthroughProvider(Provider):\n \"\"\"\n The Passthrough provider delegates all operations to the underlying online and offline stores.\n \"\"\"\n\n def __init__(self, config: RepoConfig):\n super().__init__(config)\n\n self.repo_config = config\n self.offline_store = get_offline_store_from_config(config.offline_store)\n self.online_store = get_online_store_from_config(config.online_store)\n\n def update_infra(\n self,\n project: str,\n tables_to_delete: Sequence[Union[FeatureTable, FeatureView]],\n tables_to_keep: Sequence[Union[FeatureTable, FeatureView]],\n entities_to_delete: Sequence[Entity],\n entities_to_keep: Sequence[Entity],\n partial: bool,\n ):\n set_usage_attribute(\"provider\", self.__class__.__name__)\n self.online_store.update(\n config=self.repo_config,\n tables_to_delete=tables_to_delete,\n tables_to_keep=tables_to_keep,\n entities_to_keep=entities_to_keep,\n entities_to_delete=entities_to_delete,\n partial=partial,\n )\n\n def teardown_infra(\n self,\n project: str,\n tables: Sequence[Union[FeatureTable, FeatureView]],\n entities: Sequence[Entity],\n ) -> None:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n self.online_store.teardown(self.repo_config, tables, entities)\n\n def online_write_batch(\n self,\n config: RepoConfig,\n table: Union[FeatureTable, FeatureView],\n data: List[\n Tuple[EntityKeyProto, Dict[str, ValueProto], datetime, Optional[datetime]]\n ],\n progress: Optional[Callable[[int], Any]],\n ) -> None:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n self.online_store.online_write_batch(config, table, data, progress)\n\n @log_exceptions_and_usage(sampler=RatioSampler(ratio=0.001))\n def online_read(\n self,\n config: RepoConfig,\n table: Union[FeatureTable, FeatureView],\n entity_keys: List[EntityKeyProto],\n requested_features: List[str] = None,\n ) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n result = self.online_store.online_read(config, table, entity_keys)\n\n return result\n\n def ingest_df(\n self, feature_view: FeatureView, entities: List[Entity], df: pandas.DataFrame,\n ):\n set_usage_attribute(\"provider\", self.__class__.__name__)\n table = pa.Table.from_pandas(df)\n\n if feature_view.batch_source.field_mapping is not None:\n table = _run_field_mapping(table, feature_view.batch_source.field_mapping)\n\n join_keys = [entity.join_key for entity in entities]\n rows_to_write = _convert_arrow_to_proto(table, 
feature_view, join_keys)\n\n self.online_write_batch(\n self.repo_config, feature_view, rows_to_write, progress=None\n )\n\n def materialize_single_feature_view(\n self,\n config: RepoConfig,\n feature_view: FeatureView,\n start_date: datetime,\n end_date: datetime,\n registry: Registry,\n project: str,\n tqdm_builder: Callable[[int], tqdm],\n ) -> None:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n\n entities = []\n for entity_name in feature_view.entities:\n entities.append(registry.get_entity(entity_name, project))\n\n (\n join_key_columns,\n feature_name_columns,\n event_timestamp_column,\n created_timestamp_column,\n ) = _get_column_names(feature_view, entities)\n\n offline_job = self.offline_store.pull_latest_from_table_or_query(\n config=config,\n data_source=feature_view.batch_source,\n join_key_columns=join_key_columns,\n feature_name_columns=feature_name_columns,\n event_timestamp_column=event_timestamp_column,\n created_timestamp_column=created_timestamp_column,\n start_date=start_date,\n end_date=end_date,\n )\n\n table = offline_job.to_arrow()\n\n if feature_view.batch_source.field_mapping is not None:\n table = _run_field_mapping(table, feature_view.batch_source.field_mapping)\n\n join_keys = [entity.join_key for entity in entities]\n\n with tqdm_builder(table.num_rows) as pbar:\n for batch in table.to_batches(DEFAULT_BATCH_SIZE):\n rows_to_write = _convert_arrow_to_proto(batch, feature_view, join_keys)\n self.online_write_batch(\n self.repo_config,\n feature_view,\n rows_to_write,\n lambda x: pbar.update(x),\n )\n\n def get_historical_features(\n self,\n config: RepoConfig,\n feature_views: List[FeatureView],\n feature_refs: List[str],\n entity_df: Union[pandas.DataFrame, str],\n registry: Registry,\n project: str,\n full_feature_names: bool,\n ) -> RetrievalJob:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n\n job = self.offline_store.get_historical_features(\n config=config,\n feature_views=feature_views,\n feature_refs=feature_refs,\n entity_df=entity_df,\n registry=registry,\n project=project,\n full_feature_names=full_feature_names,\n )\n return job\n", "path": "sdk/python/feast/infra/passthrough_provider.py"}], "after_files": [{"content": "from datetime import datetime\nfrom typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union\n\nimport pandas\nimport pyarrow as pa\nfrom tqdm import tqdm\n\nfrom feast.entity import Entity\nfrom feast.feature_table import FeatureTable\nfrom feast.feature_view import FeatureView\nfrom feast.infra.offline_stores.offline_store import RetrievalJob\nfrom feast.infra.offline_stores.offline_utils import get_offline_store_from_config\nfrom feast.infra.online_stores.helpers import get_online_store_from_config\nfrom feast.infra.provider import (\n Provider,\n _convert_arrow_to_proto,\n _get_column_names,\n _run_field_mapping,\n)\nfrom feast.protos.feast.types.EntityKey_pb2 import EntityKey as EntityKeyProto\nfrom feast.protos.feast.types.Value_pb2 import Value as ValueProto\nfrom feast.registry import Registry\nfrom feast.repo_config import RepoConfig\nfrom feast.usage import RatioSampler, log_exceptions_and_usage, set_usage_attribute\n\nDEFAULT_BATCH_SIZE = 10_000\n\n\nclass PassthroughProvider(Provider):\n \"\"\"\n The Passthrough provider delegates all operations to the underlying online and offline stores.\n \"\"\"\n\n def __init__(self, config: RepoConfig):\n super().__init__(config)\n\n self.repo_config = config\n self.offline_store = get_offline_store_from_config(config.offline_store)\n 
self.online_store = get_online_store_from_config(config.online_store)\n\n def update_infra(\n self,\n project: str,\n tables_to_delete: Sequence[Union[FeatureTable, FeatureView]],\n tables_to_keep: Sequence[Union[FeatureTable, FeatureView]],\n entities_to_delete: Sequence[Entity],\n entities_to_keep: Sequence[Entity],\n partial: bool,\n ):\n set_usage_attribute(\"provider\", self.__class__.__name__)\n self.online_store.update(\n config=self.repo_config,\n tables_to_delete=tables_to_delete,\n tables_to_keep=tables_to_keep,\n entities_to_keep=entities_to_keep,\n entities_to_delete=entities_to_delete,\n partial=partial,\n )\n\n def teardown_infra(\n self,\n project: str,\n tables: Sequence[Union[FeatureTable, FeatureView]],\n entities: Sequence[Entity],\n ) -> None:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n self.online_store.teardown(self.repo_config, tables, entities)\n\n def online_write_batch(\n self,\n config: RepoConfig,\n table: Union[FeatureTable, FeatureView],\n data: List[\n Tuple[EntityKeyProto, Dict[str, ValueProto], datetime, Optional[datetime]]\n ],\n progress: Optional[Callable[[int], Any]],\n ) -> None:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n self.online_store.online_write_batch(config, table, data, progress)\n\n @log_exceptions_and_usage(sampler=RatioSampler(ratio=0.001))\n def online_read(\n self,\n config: RepoConfig,\n table: Union[FeatureTable, FeatureView],\n entity_keys: List[EntityKeyProto],\n requested_features: List[str] = None,\n ) -> List[Tuple[Optional[datetime], Optional[Dict[str, ValueProto]]]]:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n result = self.online_store.online_read(\n config, table, entity_keys, requested_features\n )\n\n return result\n\n def ingest_df(\n self, feature_view: FeatureView, entities: List[Entity], df: pandas.DataFrame,\n ):\n set_usage_attribute(\"provider\", self.__class__.__name__)\n table = pa.Table.from_pandas(df)\n\n if feature_view.batch_source.field_mapping is not None:\n table = _run_field_mapping(table, feature_view.batch_source.field_mapping)\n\n join_keys = [entity.join_key for entity in entities]\n rows_to_write = _convert_arrow_to_proto(table, feature_view, join_keys)\n\n self.online_write_batch(\n self.repo_config, feature_view, rows_to_write, progress=None\n )\n\n def materialize_single_feature_view(\n self,\n config: RepoConfig,\n feature_view: FeatureView,\n start_date: datetime,\n end_date: datetime,\n registry: Registry,\n project: str,\n tqdm_builder: Callable[[int], tqdm],\n ) -> None:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n\n entities = []\n for entity_name in feature_view.entities:\n entities.append(registry.get_entity(entity_name, project))\n\n (\n join_key_columns,\n feature_name_columns,\n event_timestamp_column,\n created_timestamp_column,\n ) = _get_column_names(feature_view, entities)\n\n offline_job = self.offline_store.pull_latest_from_table_or_query(\n config=config,\n data_source=feature_view.batch_source,\n join_key_columns=join_key_columns,\n feature_name_columns=feature_name_columns,\n event_timestamp_column=event_timestamp_column,\n created_timestamp_column=created_timestamp_column,\n start_date=start_date,\n end_date=end_date,\n )\n\n table = offline_job.to_arrow()\n\n if feature_view.batch_source.field_mapping is not None:\n table = _run_field_mapping(table, feature_view.batch_source.field_mapping)\n\n join_keys = [entity.join_key for entity in entities]\n\n with tqdm_builder(table.num_rows) as pbar:\n for batch 
in table.to_batches(DEFAULT_BATCH_SIZE):\n rows_to_write = _convert_arrow_to_proto(batch, feature_view, join_keys)\n self.online_write_batch(\n self.repo_config,\n feature_view,\n rows_to_write,\n lambda x: pbar.update(x),\n )\n\n def get_historical_features(\n self,\n config: RepoConfig,\n feature_views: List[FeatureView],\n feature_refs: List[str],\n entity_df: Union[pandas.DataFrame, str],\n registry: Registry,\n project: str,\n full_feature_names: bool,\n ) -> RetrievalJob:\n set_usage_attribute(\"provider\", self.__class__.__name__)\n\n job = self.offline_store.get_historical_features(\n config=config,\n feature_views=feature_views,\n feature_refs=feature_refs,\n entity_df=entity_df,\n registry=registry,\n project=project,\n full_feature_names=full_feature_names,\n )\n return job\n", "path": "sdk/python/feast/infra/passthrough_provider.py"}]} | 2,539 | 161 |
gh_patches_debug_23552 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1914 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
G Suite group syncer does not respect group syncer whitelist
## What kind of an issue is this?
- [x] Bug report
## What is the expected behaviour?
If a group is not in the OW4_GSUITE_SETTINGS groups list, it should not be affected by changes to the group.
## What is the current behaviour?
If a group membership is changed, the group is synced, and people will be removed from the group if not using their online mail.
## How do you reproduce this problem?
Add or remove a person to a group not set up for G Suite syncing (in settings)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/gsuite/mail_syncer/main.py`
Content:
```
1 import logging
2
3 from apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,
4 get_g_suite_users_for_group,
5 get_missing_g_suite_group_names_for_user,
6 get_missing_ow4_users_for_g_suite,
7 get_ow4_users_for_group,
8 insert_ow4_user_into_g_suite_group,
9 remove_g_suite_user_from_group)
10
11 logger = logging.getLogger(__name__)
12
13
14 def insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=False):
15 """
16 Inserts a list of OW4 users into a G Suite group.
17 :param domain: The domain in which to insert a user into a group.
18 :type domain: str
19 :param group_name: The name of the group to insert the user into.
20 :type group_name: str
21 :param missing_users: A list of the missing users to be inserted into said group.
22 :type missing_users: list
23 """
24 for missing_user in missing_users:
25 insert_ow4_user_into_g_suite_group(domain, group_name, missing_user, suppress_http_errors=suppress_http_errors)
26
27
28 def remove_excess_g_suite_users(domain, group_name, g_suite_excess_users, suppress_http_errors=False):
29 """
30 Removes excess users from a G Suite group.
31 :param domain: The domain in which to remove a user from a group.
32 :type domain: str
33 :param group_name: The name of the group to remove the users from.
34 :type group_name: str
35 :param g_suite_excess_users: A list of the excess users to be removed from said group.
36 :type g_suite_excess_users: list
37 """
38 logger.info("Cleaning G Suite group '{group}'.".format(group=group_name),
39 extra={'group': group_name, 'excess_users': g_suite_excess_users})
40
41 for excess_user in g_suite_excess_users:
42 resp = remove_g_suite_user_from_group(domain, group_name, excess_user,
43 suppress_http_errors=suppress_http_errors)
44 logger.debug('Response from cleaning {group_name}: {resp}'.format(group_name=group_name, resp=resp))
45
46
47 def insert_ow4_user_into_groups(domain, user, group_names, suppress_http_errors=False):
48 """
49 Inserts a single OW4 user into a G Suite group.
50 :param domain: The domain in which to insert a user into a group.
51 :type domain: str
52 :param user: The user to update group memberships for.
53 :type user: apps.authentication.models.OnlineUser
54 :param group_names: A list of group names to insert the user into.
55 :type group_names: list
56 :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
57 :type suppress_http_errors: bool
58 """
59 groups = ["{group}@{domain}".format(group=group_name, domain=domain) for group_name in group_names]
60 if groups:
61 logger.info('Inserting {user} into some new G Suite groups.'.format(user=user),
62 extra={'new_groups': group_names, 'user': user})
63 for group in groups:
64 insert_ow4_user_into_g_suite_group(domain, group, user, suppress_http_errors=suppress_http_errors)
65
66
67 def cleanup_groups_for_user(domain, user, suppress_http_errors=False):
68 """
69 Finds excess groups for a OW4 user, and removes the user from said groups.
70 :param domain: The domain in which to find a users excess group memberships.
71 :type domain: str
72 :param user: The user to remove excess group memberships for.
73 :type user: apps.authentication.models.OnlineUser
74 :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
75 :type suppress_http_errors: bool
76 """
77 excess_groups = get_excess_groups_for_user(domain, user)
78 if excess_groups:
79 logger.debug('Removing "{user}" from some G Suite groups.'.format(user=user),
80 extra={'user': user, 'excess_groups': excess_groups})
81 for group in excess_groups:
82 remove_g_suite_user_from_group(domain, group, user.online_mail, suppress_http_errors=suppress_http_errors)
83
84
85 def update_g_suite_user(domain, ow4_user, suppress_http_errors=False):
86 """
87 Finds missing and excess groups and adds and removes the user to/from them, respectively.
88 :param domain: The domain in which to update a users group memberships.
89 :type domain: str
90 :param ow4_user: The user to update group memberships for.
91 :type ow4_user: apps.authentication.models.OnlineUser
92 :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
93 :type suppress_http_errors: bool
94 """
95 cleanup_groups_for_user(domain, ow4_user, suppress_http_errors=suppress_http_errors)
96 insert_ow4_user_into_groups(domain, ow4_user, get_missing_g_suite_group_names_for_user(domain, ow4_user),
97 suppress_http_errors=suppress_http_errors)
98
99
100 def update_g_suite_group(domain, group_name, suppress_http_errors=False):
101 """
102 Finds missing and excess users and adds and removes the users to/from them, respectively.
103 :param domain: The domain in which to find a group's user lists.
104 :type domain: str
105 :param group_name: The name of the group to get group membership status for.
106 :type group_name: str
107 :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
108 :type suppress_http_errors: bool
109 """
110 g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)
111 ow4_users = get_ow4_users_for_group(group_name)
112
113 excess_users = get_excess_users_in_g_suite(g_suite_users, ow4_users)
114 missing_users = get_missing_ow4_users_for_g_suite(g_suite_users, ow4_users)
115
116 # @ToDo: Look into bulk updates
117 insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=suppress_http_errors)
118 remove_excess_g_suite_users(domain, group_name, excess_users, suppress_http_errors=suppress_http_errors)
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/gsuite/mail_syncer/main.py b/apps/gsuite/mail_syncer/main.py
--- a/apps/gsuite/mail_syncer/main.py
+++ b/apps/gsuite/mail_syncer/main.py
@@ -1,5 +1,7 @@
import logging
+from django.conf import settings
+
from apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,
get_g_suite_users_for_group,
get_missing_g_suite_group_names_for_user,
@@ -107,6 +109,12 @@
:param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
:type suppress_http_errors: bool
"""
+
+ if group_name.lower() not in settings.OW4_GSUITE_SYNC.get('GROUPS', {}).keys():
+ logger.debug('Not running group syncer for group {} - group syncing not enabled for this group'
+ .format(group_name))
+ return
+
g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)
ow4_users = get_ow4_users_for_group(group_name)
| {"golden_diff": "diff --git a/apps/gsuite/mail_syncer/main.py b/apps/gsuite/mail_syncer/main.py\n--- a/apps/gsuite/mail_syncer/main.py\n+++ b/apps/gsuite/mail_syncer/main.py\n@@ -1,5 +1,7 @@\n import logging\n \n+from django.conf import settings\n+\n from apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,\n get_g_suite_users_for_group,\n get_missing_g_suite_group_names_for_user,\n@@ -107,6 +109,12 @@\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n+\n+ if group_name.lower() not in settings.OW4_GSUITE_SYNC.get('GROUPS', {}).keys():\n+ logger.debug('Not running group syncer for group {} - group syncing not enabled for this group'\n+ .format(group_name))\n+ return\n+\n g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)\n ow4_users = get_ow4_users_for_group(group_name)\n", "issue": "G Suite group syncer does not respect group syncer whitelist\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n\r\n## What is the expected behaviour?\r\n\r\nIf a group is not in the OW4_GSUITE_SETTINGS groups list, it should not be affected by changes to the group.\r\n\r\n## What is the current behaviour?\r\n\r\nIf a group membership is changed, the group is synced, and people will be removed from the group if not using their online mail.\r\n\r\n\r\n## How do you reproduce this problem? \r\n\r\nAdd or remove a person to a group not set up for G Suite syncing (in settings)\r\n\n", "before_files": [{"content": "import logging\n\nfrom apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,\n get_g_suite_users_for_group,\n get_missing_g_suite_group_names_for_user,\n get_missing_ow4_users_for_g_suite,\n get_ow4_users_for_group,\n insert_ow4_user_into_g_suite_group,\n remove_g_suite_user_from_group)\n\nlogger = logging.getLogger(__name__)\n\n\ndef insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=False):\n \"\"\"\n Inserts a list of OW4 users into a G Suite group.\n :param domain: The domain in which to insert a user into a group.\n :type domain: str\n :param group_name: The name of the group to insert the user into.\n :type group_name: str\n :param missing_users: A list of the missing users to be inserted into said group.\n :type missing_users: list\n \"\"\"\n for missing_user in missing_users:\n insert_ow4_user_into_g_suite_group(domain, group_name, missing_user, suppress_http_errors=suppress_http_errors)\n\n\ndef remove_excess_g_suite_users(domain, group_name, g_suite_excess_users, suppress_http_errors=False):\n \"\"\"\n Removes excess users from a G Suite group.\n :param domain: The domain in which to remove a user from a group.\n :type domain: str\n :param group_name: The name of the group to remove the users from.\n :type group_name: str\n :param g_suite_excess_users: A list of the excess users to be removed from said group.\n :type g_suite_excess_users: list\n \"\"\"\n logger.info(\"Cleaning G Suite group '{group}'.\".format(group=group_name),\n extra={'group': group_name, 'excess_users': g_suite_excess_users})\n\n for excess_user in g_suite_excess_users:\n resp = remove_g_suite_user_from_group(domain, group_name, excess_user,\n suppress_http_errors=suppress_http_errors)\n logger.debug('Response from cleaning {group_name}: {resp}'.format(group_name=group_name, resp=resp))\n\n\ndef insert_ow4_user_into_groups(domain, user, group_names, 
suppress_http_errors=False):\n \"\"\"\n Inserts a single OW4 user into a G Suite group.\n :param domain: The domain in which to insert a user into a group.\n :type domain: str\n :param user: The user to update group memberships for.\n :type user: apps.authentication.models.OnlineUser\n :param group_names: A list of group names to insert the user into.\n :type group_names: list\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n groups = [\"{group}@{domain}\".format(group=group_name, domain=domain) for group_name in group_names]\n if groups:\n logger.info('Inserting {user} into some new G Suite groups.'.format(user=user),\n extra={'new_groups': group_names, 'user': user})\n for group in groups:\n insert_ow4_user_into_g_suite_group(domain, group, user, suppress_http_errors=suppress_http_errors)\n\n\ndef cleanup_groups_for_user(domain, user, suppress_http_errors=False):\n \"\"\"\n Finds excess groups for a OW4 user, and removes the user from said groups.\n :param domain: The domain in which to find a users excess group memberships.\n :type domain: str\n :param user: The user to remove excess group memberships for.\n :type user: apps.authentication.models.OnlineUser\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n excess_groups = get_excess_groups_for_user(domain, user)\n if excess_groups:\n logger.debug('Removing \"{user}\" from some G Suite groups.'.format(user=user),\n extra={'user': user, 'excess_groups': excess_groups})\n for group in excess_groups:\n remove_g_suite_user_from_group(domain, group, user.online_mail, suppress_http_errors=suppress_http_errors)\n\n\ndef update_g_suite_user(domain, ow4_user, suppress_http_errors=False):\n \"\"\"\n Finds missing and excess groups and adds and removes the user to/from them, respectively.\n :param domain: The domain in which to update a users group memberships.\n :type domain: str\n :param ow4_user: The user to update group memberships for.\n :type ow4_user: apps.authentication.models.OnlineUser\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n cleanup_groups_for_user(domain, ow4_user, suppress_http_errors=suppress_http_errors)\n insert_ow4_user_into_groups(domain, ow4_user, get_missing_g_suite_group_names_for_user(domain, ow4_user),\n suppress_http_errors=suppress_http_errors)\n\n\ndef update_g_suite_group(domain, group_name, suppress_http_errors=False):\n \"\"\"\n Finds missing and excess users and adds and removes the users to/from them, respectively.\n :param domain: The domain in which to find a group's user lists.\n :type domain: str\n :param group_name: The name of the group to get group membership status for.\n :type group_name: str\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)\n ow4_users = get_ow4_users_for_group(group_name)\n\n excess_users = get_excess_users_in_g_suite(g_suite_users, ow4_users)\n missing_users = get_missing_ow4_users_for_g_suite(g_suite_users, ow4_users)\n\n # @ToDo: Look into bulk updates\n insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=suppress_http_errors)\n remove_excess_g_suite_users(domain, group_name, 
excess_users, suppress_http_errors=suppress_http_errors)\n", "path": "apps/gsuite/mail_syncer/main.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\n\nfrom apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,\n get_g_suite_users_for_group,\n get_missing_g_suite_group_names_for_user,\n get_missing_ow4_users_for_g_suite,\n get_ow4_users_for_group,\n insert_ow4_user_into_g_suite_group,\n remove_g_suite_user_from_group)\n\nlogger = logging.getLogger(__name__)\n\n\ndef insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=False):\n \"\"\"\n Inserts a list of OW4 users into a G Suite group.\n :param domain: The domain in which to insert a user into a group.\n :type domain: str\n :param group_name: The name of the group to insert the user into.\n :type group_name: str\n :param missing_users: A list of the missing users to be inserted into said group.\n :type missing_users: list\n \"\"\"\n for missing_user in missing_users:\n insert_ow4_user_into_g_suite_group(domain, group_name, missing_user, suppress_http_errors=suppress_http_errors)\n\n\ndef remove_excess_g_suite_users(domain, group_name, g_suite_excess_users, suppress_http_errors=False):\n \"\"\"\n Removes excess users from a G Suite group.\n :param domain: The domain in which to remove a user from a group.\n :type domain: str\n :param group_name: The name of the group to remove the users from.\n :type group_name: str\n :param g_suite_excess_users: A list of the excess users to be removed from said group.\n :type g_suite_excess_users: list\n \"\"\"\n logger.info(\"Cleaning G Suite group '{group}'.\".format(group=group_name),\n extra={'group': group_name, 'excess_users': g_suite_excess_users})\n\n for excess_user in g_suite_excess_users:\n resp = remove_g_suite_user_from_group(domain, group_name, excess_user,\n suppress_http_errors=suppress_http_errors)\n logger.debug('Response from cleaning {group_name}: {resp}'.format(group_name=group_name, resp=resp))\n\n\ndef insert_ow4_user_into_groups(domain, user, group_names, suppress_http_errors=False):\n \"\"\"\n Inserts a single OW4 user into a G Suite group.\n :param domain: The domain in which to insert a user into a group.\n :type domain: str\n :param user: The user to update group memberships for.\n :type user: apps.authentication.models.OnlineUser\n :param group_names: A list of group names to insert the user into.\n :type group_names: list\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n groups = [\"{group}@{domain}\".format(group=group_name, domain=domain) for group_name in group_names]\n if groups:\n logger.info('Inserting {user} into some new G Suite groups.'.format(user=user),\n extra={'new_groups': group_names, 'user': user})\n for group in groups:\n insert_ow4_user_into_g_suite_group(domain, group, user, suppress_http_errors=suppress_http_errors)\n\n\ndef cleanup_groups_for_user(domain, user, suppress_http_errors=False):\n \"\"\"\n Finds excess groups for a OW4 user, and removes the user from said groups.\n :param domain: The domain in which to find a users excess group memberships.\n :type domain: str\n :param user: The user to remove excess group memberships for.\n :type user: apps.authentication.models.OnlineUser\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n excess_groups = 
get_excess_groups_for_user(domain, user)\n if excess_groups:\n logger.debug('Removing \"{user}\" from some G Suite groups.'.format(user=user),\n extra={'user': user, 'excess_groups': excess_groups})\n for group in excess_groups:\n remove_g_suite_user_from_group(domain, group, user.online_mail, suppress_http_errors=suppress_http_errors)\n\n\ndef update_g_suite_user(domain, ow4_user, suppress_http_errors=False):\n \"\"\"\n Finds missing and excess groups and adds and removes the user to/from them, respectively.\n :param domain: The domain in which to update a users group memberships.\n :type domain: str\n :param ow4_user: The user to update group memberships for.\n :type ow4_user: apps.authentication.models.OnlineUser\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n cleanup_groups_for_user(domain, ow4_user, suppress_http_errors=suppress_http_errors)\n insert_ow4_user_into_groups(domain, ow4_user, get_missing_g_suite_group_names_for_user(domain, ow4_user),\n suppress_http_errors=suppress_http_errors)\n\n\ndef update_g_suite_group(domain, group_name, suppress_http_errors=False):\n \"\"\"\n Finds missing and excess users and adds and removes the users to/from them, respectively.\n :param domain: The domain in which to find a group's user lists.\n :type domain: str\n :param group_name: The name of the group to get group membership status for.\n :type group_name: str\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n\n if group_name.lower() not in settings.OW4_GSUITE_SYNC.get('GROUPS', {}).keys():\n logger.debug('Not running group syncer for group {} - group syncing not enabled for this group'\n .format(group_name))\n return\n\n g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)\n ow4_users = get_ow4_users_for_group(group_name)\n\n excess_users = get_excess_users_in_g_suite(g_suite_users, ow4_users)\n missing_users = get_missing_ow4_users_for_g_suite(g_suite_users, ow4_users)\n\n # @ToDo: Look into bulk updates\n insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=suppress_http_errors)\n remove_excess_g_suite_users(domain, group_name, excess_users, suppress_http_errors=suppress_http_errors)\n", "path": "apps/gsuite/mail_syncer/main.py"}]} | 1,971 | 251 |